Welcome to History of Data Science. Discover the stories of heroes who transformed our daily lives!

Brought to you by Dataiku


Andrew Gelman: On a Mission to Improve Stats

4 min read
Andrew Gelman would be a significant figure in the world of data science even if, like most other statistics professors, he confined himself to the ivory tower and only published works read by other academics. His contributions to Bayesian statistics and Stan, a statistical programming framework, are hugely influential.

However, what really makes Gelman stand out among the world’s number whizzes is the effort he has made to get non-experts to appreciate the power of statistics and the way it can impact the world. He has also become something of a vigilante for better statistical methods in academia, pushing researchers and institutions to hold themselves to higher standards.

Bayes and Stan

Gelman’s career has centered on Bayesian statistics, named in honor of the 18th-century English statistician Thomas Bayes. In contrast to earlier interpretations of probability, such as the frequentist interpretation, Bayesian statistics treats probability as a degree of belief that is updated as new evidence arrives, making the uncertainty surrounding an event explicit in the calculation. This makes it especially useful for dealing with complex events that are not repeatable and are shaped by multiple variables.
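As a minimal illustration of the Bayesian idea of updating belief with evidence (a textbook sketch, not an example drawn from Gelman's own work), a Beta prior over an unknown success probability can be updated in closed form after observing binomial data:

```python
def beta_binomial_update(alpha, beta, successes, failures):
    """Conjugate update: a Beta(alpha, beta) prior combined with
    binomial data yields a Beta(alpha + successes, beta + failures)
    posterior."""
    return alpha + successes, beta + failures

# Start from a uniform prior, Beta(1, 1): total ignorance about p.
a, b = 1.0, 1.0

# Observe 7 successes and 3 failures, then update our belief.
a, b = beta_binomial_update(a, b, successes=7, failures=3)

# The posterior mean blends the prior and the data:
# (1 + 7) / (1 + 7 + 1 + 3) = 8/12
posterior_mean = a / (a + b)
print(posterior_mean)  # 0.666...
```

The key contrast with a frequentist point estimate (7/10 = 0.7) is that the posterior is a full distribution over p, so the remaining uncertainty is carried along rather than discarded.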

Throughout his career, Gelman has used Bayesian methods to offer the world insights into data on a range of issues. In 1996, for instance, he co-authored a paper that proposed a better design for physiological pharmacokinetic models, which medical researchers use to describe how a drug moves through a person’s body. The paper, which received an award for “Outstanding Statistical Application” from the American Statistical Association, proposed a model that would better account for the uncertainty created by the many variables that affect how humans respond to a drug.

Gelman has championed integrating statistics with cutting-edge technology. In 1992, he designed a software program, JudgeIt, to help political scientists evaluate U.S. elections and legislative redistricting. Programmed with the results of every Congressional race from 1896 to 1988, the program allowed users to see how different variables, including redrawing the boundaries of a district, would likely affect the outcome of an election.

More recently, Gelman has been one of the key contributors to Stan, an open-source platform that makes Bayesian statistical modeling available to anyone with access to a computer. Stan is widely used across engineering, business, medicine, and the physical and social sciences.
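At its core, Stan draws samples from a posterior distribution using Markov chain Monte Carlo (its default sampler is a form of Hamiltonian Monte Carlo). The random-walk Metropolis sketch below is a much simpler cousin of that machinery, shown only for intuition about what posterior sampling means; the model, data, and names here are illustrative, not taken from Stan:

```python
import math
import random

random.seed(0)

# Toy data: draws assumed to come from Normal(mu, 1) with unknown mu.
data = [1.2, 0.8, 1.5, 0.9, 1.1]

def log_posterior(mu):
    # Prior: mu ~ Normal(0, 10^2); likelihood: each x ~ Normal(mu, 1).
    log_prior = -mu ** 2 / (2 * 10.0 ** 2)
    log_lik = -sum((x - mu) ** 2 for x in data) / 2
    return log_prior + log_lik

# Random-walk Metropolis: propose a nearby mu, accept with the
# usual ratio of posterior densities (done on the log scale).
samples, mu = [], 0.0
for _ in range(20000):
    proposal = mu + random.gauss(0, 0.5)
    if math.log(random.random()) < log_posterior(proposal) - log_posterior(mu):
        mu = proposal
    samples.append(mu)

# Discard burn-in; the remaining draws approximate the posterior of mu.
burned = samples[5000:]
estimate = sum(burned) / len(burned)
print(estimate)  # posterior mean estimate, near 1.1 for this data
```

Stan automates all of this: the user writes down the model, and the platform handles proposal tuning, gradients, and sampling far more efficiently than this naive loop.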

“Statistics is said to be the science of defaults. One of our challenges is to defaultize things.”

Get Your Numbers Straight

In recent years, Gelman has sought to help Americans better understand the numbers that underpin the country’s increasingly polarized politics. In 2008, he co-authored Red State, Blue State, Rich State, Poor State: Why Americans Vote the Way They Do, which took aim at some of the common misperceptions of American voters that had emerged in the early years of the 21st century.

Gelman was one of a handful of academics who in 2007 began contributing to the political science blog Monkey Cage, which eventually became so popular that it was acquired by the Washington Post, where Gelman still occasionally contributes statistical political analysis.

Gelman has become a noted critic of what he sees as shoddy academic studies and has called on the academic community to hold researchers to higher methodological standards.

Writing in the New York Times in 2018, Gelman said the problem was driven by a poor understanding of statistics: “The big problem in science is not cheaters or opportunists, but sincere researchers who have unfortunately been trained to think that every statistically ‘significant’ result is notable.”

Nevertheless, on his own blog, Gelman struck an optimistic tone about the future of research: “Compared to ten years ago, we have a much better sense of what can go wrong in a study, and a lot of good ideas of how to do better.”
