Welcome to History of Data Science. Discover the stories of heroes who transformed our daily lives!

Brought to you by Dataiku


Timnit Gebru: The Computer Scientist Fighting for a Fairer World

4 min read
August 19, 2021
Timnit Gebru (born 1983) is an Ethiopian computer scientist specializing in algorithmic bias and data mining. She is one of the most high-profile Black women in the space and an influential voice in the emerging field of ethical AI, which aims to identify issues around bias, fairness, and responsibility.

Gebru was born in Addis Ababa, Ethiopia in 1983. In 1999, she moved to the U.S. as a political refugee. She combined a bachelor’s and master’s degree in electrical engineering at Stanford with a job designing circuits and signal processing algorithms for various Apple products including the first iPad. She received her Ph.D. from the Stanford Artificial Intelligence Laboratory.

Her next step? An array of AI conferences, papers, and projects, along with postdoctoral research at Microsoft Research, New York City, in the FATE (Fairness, Accountability, Transparency, and Ethics in AI) group, where she studied algorithmic bias and the ethical implications of projects that aim to draw insights from data.

Standing Up for Fairness

Next, she joined Google as a research scientist on the ethical AI team to study the implications of AI and improve technology's ability to do social good. In fall 2020, she was promoted to co-lead work on a paper about the ethical risks of large language models, which are central to interpreting the complexity of language in search queries. In December, however, senior Google managers asked her to withdraw the unpublished paper or remove the names of all Google employees from it. She refused, demanding to know why — resulting in her controversial departure from the company.

“There’s a real danger of systematizing the discrimination we have in society [through AI technologies]. What I think we need to do — as we’re moving into this world full of invisible algorithms everywhere — is that we have to be very explicit, or have a disclaimer, about what our error rates are like.”

She also co-led the Gender Shades project, which demonstrated that commercial facial recognition software was markedly less accurate at classifying darker-skinned women than lighter-skinned men. As an advocate for diversity in technology, she also co-founded Black in AI, a community of Black researchers working in AI.