For much of his public life, beginning when he first put forth a theory that computers could develop intuitions and think like humans, Geoffrey Hinton’s ideas were widely viewed as the stuff of science fiction. Now, he’s a figure at the center of a technological revolution premised on his once-crazy idea.
How does the brain work again?
Hinton began thinking seriously about the brain when he was in high school. A conversation with a friend convinced him that the brain's memories are stored across a vast network of neurons, similar to how the information that forms a 3D hologram is distributed across the entire photographic plate rather than stored in any one spot.
As Hinton dug into the science of how the brain worked, he soon realized that there were many basic questions modern science still couldn't answer. Psychologists and neuroscientists had very little understanding of where human intelligence came from.
In the years that followed, Hinton sought answers, first as a student and later as a professor, by replicating the structure of the brain in a computer. To that end, he set about building artificial neural networks that mirror the neural networks in our brains. His theory was that, if given the right structure, computers could learn and develop intelligence like humans.
“In science, you can say things that seem crazy, but in the long run, they can turn out to be right.”
The Kevin Bacon of AI
In 2004, after laboring in obscurity for more than two decades, Hinton got a big break. The Canadian Institute for Advanced Research offered him funding to set up a program that brought together computer scientists, psychologists, neuroscientists and experts from other scientific domains to explore Hinton’s theories.
With that backing, Hinton founded the Neural Computation and Adaptive Perception program, which blossomed into one of the leading lights of artificial intelligence. Over the next decade the team worked on developing deep learning algorithms and set them loose on large datasets in the hope that the algorithms would learn things, notably human language, just like our own brains.
The work clearly paid off. The innovation that took place at NCAP laid the groundwork for many of the AI-enabled tools the world increasingly takes for granted. Out of the program came a number of influential AI researchers, including Andrew Ng, who later founded Google Brain, Google’s AI research arm. Hinton’s own company, DNNresearch, was eventually purchased by Google and he set up a Toronto branch of Google Brain.
In June 2017, the New York Times described Hinton's larger-than-life status in AI: "His impact on artificial intelligence research has been so deep that some people in the field talk about the 'six degrees of Geoffrey Hinton' the way college students once referred to Kevin Bacon's uncanny connections to so many Hollywood movies."
A Key Milestone In AI
In 1986, as a professor at Carnegie Mellon University, Hinton co-authors a paper with David E. Rumelhart and Ronald J. Williams on applying the backpropagation algorithm to multi-layer neural networks. Doing so, they find, allows the networks to learn internal representations of data, a key milestone in AI.
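The core idea can be sketched in a few lines of modern Python (a minimal illustration of backpropagation, not the paper's original code). The XOR function is the classic example: it cannot be learned without a hidden layer, so solving it forces the network to form an internal representation, which backpropagation makes trainable.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR is not linearly separable, so the hidden layer must learn
# an internal representation of the inputs to solve it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two weight layers: input -> hidden, hidden -> output.
W1 = rng.normal(0.0, 1.0, (2, 4))
b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1))
b2 = np.zeros(1)

lr = 1.0
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)      # hidden "internal representation"
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through each layer.
    d_out = (out - y) * out * (1 - out)    # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)     # error signal at the hidden layer

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```

After training, the hidden units encode features (such as "exactly one input is on") that no single layer of weights could express on its own.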
A new generation of convolutional neural network
In 2012, a graduate student Hinton is advising, Alex Krizhevsky, designs a convolutional neural network, AlexNet, that recognizes images far more accurately than any previous program.
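AlexNet itself comprises millions of parameters, but the operation that gives convolutional networks their name can be sketched in plain numpy (an illustration of a single convolutional filter, not AlexNet's actual code). The same small filter slides over the whole image, so the network shares weights and responds to local patterns such as edges wherever they appear.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide `kernel` over `image` and record the response at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# An image whose left half is dark (0) and right half is bright (1).
image = np.zeros((5, 6))
image[:, 3:] = 1.0

# A vertical-edge filter: positive on the right, negative on the left.
edge_filter = np.array([[-1.0, 1.0],
                        [-1.0, 1.0]])

response = conv2d(image, edge_filter)
# The response is nonzero only at the dark-to-bright boundary.
```

In a trained network such filters are not hand-designed; backpropagation learns them from data, with early layers discovering edge-like detectors on their own.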
The Turing Award
Along with Yann LeCun and Yoshua Bengio, Hinton wins the 2018 Turing Award for his foundational work on neural networks.