

Brought to you by Dataiku


Jerome H. Friedman: Applying Statistics to Data and Machine Learning

American statistician Jerome Harold Friedman (born 1939) has played a leading role in putting statistics at the service of machine learning and data mining. Through his innovative methods, he has helped find new ways to analyze ever larger and more complex data sets.

Jerome “Jerry” Friedman was born and grew up in a small Californian town. His mother was a housewife and his father owned a laundry. Nothing destined him to become a leading light in modern statistics: growing up, he was a chronic underachiever, more interested in electronics and building radios than in schoolwork. His headteacher suggested giving the local Chico State College a go and, if that didn’t work out, enrolling in the army. Not sure what he wanted to study, he fell into physics, which finally sparked his academic curiosity and led to a PhD from Berkeley.

“Statistical research in data analysis is definitely overlapping more with machine learning and pattern recognition.”

From physics to statistics

As a research physicist at the Lawrence Berkeley National Laboratory, he immersed himself in high-energy physics before joining the Stanford Computation Research Group in 1972, where he worked for more than 30 years. After taking on various visiting professorships, he was appointed half-time professor of statistics at Stanford University in 1982.


Fascinated by statistics, he began exploring its application to computing. He is most famous for co-authoring “Classification and Regression Trees” (CART) with Leo Breiman, Richard Olshen, and Charles Stone, the basis of modern decision tree methods and an essential tool in data analysis. He also introduced “Stochastic Gradient Boosting” in 1999, which would prove an extremely powerful technique for building predictive models.
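Friedman's gradient boosting idea, fitting small trees one after another to the remaining errors of the ensemble, lives on in modern libraries. A minimal sketch using scikit-learn's implementation (the synthetic dataset and parameter values here are illustrative assumptions, not from the article):

```python
# Sketch of (stochastic) gradient boosting via scikit-learn.
# Dataset and hyperparameters are illustrative assumptions.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic regression problem for demonstration.
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# subsample < 1.0 gives the "stochastic" variant: each new tree is fit
# on a random fraction of the training data.
model = GradientBoostingRegressor(
    n_estimators=200, learning_rate=0.1, subsample=0.5, random_state=0
)
model.fit(X_train, y_train)
print(round(model.score(X_test, y_test), 3))  # R^2 on held-out data
```

Each of the 200 trees corrects the residual errors left by its predecessors, which is the core of Friedman's technique.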

Harnessing the data revolution

With Jerome Friedman, there is never a dull moment. He made key contributions to statistics and data mining, including nearest-neighbor classification, logistic regression, and high-dimensional data analysis. Many of these achievements helped pave the way for machine learning, and his methods and algorithms are built into many modern statistical and data mining packages.
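Nearest-neighbor classification, one of the contributions mentioned above, labels a new point by majority vote among its closest training examples. A short sketch using scikit-learn (the dataset and the choice of five neighbors are illustrative assumptions):

```python
# Sketch of nearest-neighbor classification via scikit-learn.
# Dataset and neighbor count are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Classify each test point by majority vote of its 5 nearest training points.
clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)
print(round(clf.score(X_test, y_test), 3))  # held-out accuracy
```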

In publications like “From Statistics to Neural Networks: Theory and Pattern Recognition Applications,” he explored the relationship between statistics, predictive learning, and artificial neural networks (ANNs), laying the groundwork for ever closer collaboration between these disciplines.
