
Welcome to History of Data Science. Discover the stories of heroes who transformed our daily lives!

Brought to you by Dataiku


Himabindu Lakkaraju: Artificial Intelligence Accountability Champion

4 min read
August 19, 2021
Himabindu “Hima” Lakkaraju is an Indian American computer scientist committed to making AI and machine learning (ML) more accountable and reducing bias. By ensuring people can understand the models used, she aims to facilitate decision-making and increase trust across numerous domains — while better understanding the limits and pushing the boundaries of ML.

Following a master’s degree in computer science from the Indian Institute of Science in Bangalore, Lakkaraju spent two years as a research engineer at IBM Research before moving to Stanford University to pursue a Ph.D. Her focus? Developing interpretable and fair ML models that can complement human decision-making — a thesis that received numerous awards. She also spent a summer at the Data Science for Social Good program at the University of Chicago to co-develop ML models to identify at-risk students and suggest appropriate interventions. 

Increasing Understanding and Trust

Given that trust is essential for humans to accept recommendations produced by algorithms, one of Lakkaraju’s main areas of research is explainable ML. This means creating ML solutions whose results are easy for people to understand and based on clear reasoning. More broadly, it means developing ML models and algorithms that are interpretable, transparent, fair, and reliable. She also investigates the practical and ethical implications of deploying ML models in high-stakes decisions, from healthcare to criminal justice and education. As a result, she was named one of the world’s top Innovators Under 35 by both Vanity Fair and the MIT Technology Review.
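To make the idea of an interpretable model concrete, here is a minimal, purely illustrative sketch of a rule-list classifier, the kind of transparent model that explainable-ML research favors over black boxes because every prediction comes with a human-readable reason. The rules, field names, and thresholds below are invented for illustration and are not taken from Lakkaraju's actual work:

```python
# A toy "rule list" classifier: an interpretable model whose every
# prediction can be traced to a single, human-readable rule.
# All rules and thresholds here are made up for illustration.

def predict_risk(student):
    """Return (label, reason) for a student record.

    The model is just an ordered list of if-then rules, so a teacher
    can see exactly why a student was flagged.
    """
    if student["gpa"] < 2.0:
        return "at-risk", "GPA below 2.0"
    if student["absences"] > 20:
        return "at-risk", "more than 20 absences"
    return "on-track", "no risk rule fired"

label, reason = predict_risk({"gpa": 3.1, "absences": 4})
print(label, "-", reason)
```

Unlike a deep neural network, this model's "explanation" is the model itself: the reason string returned alongside each label is the exact rule that fired, which is the kind of clear reasoning the passage above describes.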

“I develop machine learning tools and techniques which enable human decision makers to make better decisions.”

Determined to also make the field of ML more accessible to the public, she co-founded the Trustworthy ML Initiative (TrustML) to lower barriers to entry and promote research on the interpretability, fairness, privacy, and robustness of ML models. To spread the word about explainable ML, she has also created several tutorials and a fully fledged course.

She is currently an assistant professor at Harvard Business School and is also affiliated with the Department of Computer Science at Harvard University.