Ian Goodfellow’s career in machine learning is just getting started, but he has already earned himself a place in the AI Hall of Fame due to a couple of groundbreaking discoveries that will hopefully lead to better-performing and safer AI.
Teaching machines how to see numbers like humans
These days, it’s increasingly rare to meet someone under 40 who knows how to get somewhere without the assistance of a smartphone. Not only do we take for granted that the address that we type into Google Maps will take us exactly where we need to go, but the prospect of relying on any other method (a paper map, asking directions) seems ludicrous.
As Wired explained in 2014, however, sometimes GPS doesn’t take you exactly where you need to be: “(You) search for a specific building, but end up with only a vague location that doesn’t tell you which end of the street you need to be.”
A team of Google researchers, led by Goodfellow, designed a convolutional neural network that automatically transcribed the addresses of every building and house that its army of trucks had photographed.
Now, numbers are the one thing that computers should understand very well, right? Well, it’s not so simple when you consider the wide range of sizes, fonts, and colors used for addresses around the world. The neural network was trained to recognize — with human-like accuracy — that a 123 is the same number no matter what size, font, or color it is written in. Goodfellow’s team accomplished this by training the network on 200,000 images of single digits.
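The idea of a network that maps a photographed digit to one of ten classes can be sketched with a tiny convolutional classifier. To be clear, this is an illustrative toy, not the actual Google model, which was far larger and transcribed whole multi-digit street numbers; the layer sizes and input resolution below are assumptions.

```python
# Illustrative sketch only: a tiny convolutional digit classifier.
# The real Street View model was much larger; sizes here are assumptions.
import torch
import torch.nn as nn

class DigitNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 32x32 RGB crop -> 16 maps
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32 -> 16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 16 -> 8
        )
        self.classifier = nn.Linear(32 * 8 * 8, 10)      # one score per digit 0-9

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = DigitNet()
batch = torch.randn(4, 3, 32, 32)      # four random stand-in "digit crops"
logits = model(batch)
print(logits.shape)                    # torch.Size([4, 10])
```

Trained on many labeled digit images, a network like this learns features (strokes, curves) that are stable across fonts and colors, which is what lets it treat every rendering of a 123 as the same number.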
The project was a stunning success. It was able to record every address in France, a country of 65 million people, in about an hour.
Showing the world AI’s scary weaknesses
Goodfellow has led important research on a concern that is always lurking for AI: adversarial attacks. He is one of a number of researchers who have shown that AI systems can be tricked into doing things against the wishes of their owners.
Goodfellow found, for instance, that just a slight manipulation to a photo could get an otherwise accurate system to consistently misidentify the object.
“We show you a photo that’s clearly a photo of a school bus, and we make you think it’s an ostrich,” he told Popular Science in 2016.
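One concrete attack of this kind, the fast gradient sign method from Goodfellow’s own later work, nudges every pixel slightly in the direction that most increases the classifier’s loss. The sketch below uses a toy linear model and random data as stand-ins for a real image classifier; only the attack itself is the real technique.

```python
# Fast gradient sign method (FGSM) demonstrated on a toy model.
# The model and "image" are stand-ins; the perturbation rule is the
# real technique: move each input value by eps in the sign of the
# loss gradient, which reliably pushes the loss upward.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(16, 2)          # toy 16-pixel "image", 2 classes
x = torch.randn(1, 16, requires_grad=True)
label = torch.tensor([0])

loss = F.cross_entropy(model(x), label)
loss.backward()

eps = 0.25
x_adv = x + eps * x.grad.sign()         # small, uniform per-pixel nudge

adv_loss = F.cross_entropy(model(x_adv), label)
print(loss.item() <= adv_loss.item())   # True: the nudge raises the loss
```

On a real image model the same eps can be small enough that a person sees no difference between the clean and perturbed photos, which is what makes the school-bus-to-ostrich trick so unsettling.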
In another experiment, Goodfellow was able to create a sound that would go largely unnoticed by nearby people — some garbled white noise — that neural networks behind voice activation systems on phones would register and usually misinterpret. The sound he created was most often interpreted as coming from a horse.
These experiments are important for understanding AI vulnerabilities and how they could be exploited for crime or terror. Fortunately, Goodfellow found that training systems to identify doctored images as well as genuine ones goes a long way toward reducing their vulnerability to attack.
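The defense described here is commonly called adversarial training: craft perturbed examples during training and teach the network to classify them correctly too. A minimal sketch of that loop follows; the linear model, random data, and hyperparameters are illustrative assumptions.

```python
# Minimal adversarial-training loop (sketch only). The model, random
# data, and hyperparameters are stand-ins for a real setup.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(16, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
eps = 0.1

for step in range(100):
    x = torch.randn(8, 16)
    y = torch.randint(0, 2, (8,))

    # Craft adversarial versions of this batch with the sign of the gradient.
    x_req = x.clone().requires_grad_(True)
    F.cross_entropy(model(x_req), y).backward()
    x_adv = (x_req + eps * x_req.grad.sign()).detach()

    # Train on clean and adversarial examples together.
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()

print(loss.item())  # finite training loss after the final step
```

By folding the attacker’s perturbed inputs into the training data, the model learns decision boundaries that are harder to cross with small nudges.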
Goodfellow spent his first years out of school working for Google, but in 2019 he was hired by Apple to head its machine learning operation. Who knows what he’ll do in the coming decades!
Goodfellow Receives his Ph.D.
Ian Goodfellow receives a Ph.D. in machine learning from the University of Montreal under the supervision of legendary AI visionary Yoshua Bengio.
Generative Adversarial Networks Paper
Goodfellow publishes his paper on generative adversarial networks, in which two neural networks are pitted against each other: one generates fake data while the other tries to tell it apart from real data.
Named on List of 100 Global Thinkers
Foreign Policy names Ian Goodfellow to its annual list of 100 Global Thinkers.