Cracking the Code of Common Sense
- Heidi Mitchell
The concept of teaching computers sound judgment based on simple perceptions has been the black box of AI since the discipline’s inception. Distinguished researcher Yejin Choi—recipient of two Test of Time Awards this year—has never believed it was impossible.
Test of Time Awards are among the most coveted prizes in academia. Rather than rewarding the breakthrough research of the day, they honor researchers whose papers from roughly a decade ago have had a lasting impact, opening a new stream of research.
Which means it’s hard to get one. And it’s nearly impossible to get two—to have thought up two whole new areas of research that no one else had considered and written peer-reviewed papers on each.
Yejin Choi doesn’t like to talk about herself, let alone boast, but this Brett Helsel Professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington may be the only person in the world to receive two Test of Time Awards in the same year. No wonder she’s considered an AI luminary.
The first award is the 2021 Longuet-Higgins Prize for her paper "Baby talk: Understanding and generating simple image descriptions," which was among the first to explore the idea of captioning images using natural language, thus bringing together computer vision and natural language processing. Choi, who is also a senior research manager at the Allen Institute for AI (AI2), did her research in 2011, back when she was a new faculty member at Stony Brook University. "This was very different from the mainstream research back then," says Choi. "Object recognition itself was far from being solved and even generating a good grammatical sentence with NLP was hard." But her computer vision colleagues had this belief that they could caption images using natural language—a conundrum for NLP at the time. "They wanted to solve something bigger than writing sentences. Part of me thought that was crazy, part of me thought it would be interesting."
Choi and her team broke from the old methods of training computers to categorize images and instead used a computer vision object detector to look at the relationship between objects, then relied on language-based association scores. “In life, you might wonder whether you are seeing a dog or a wolf in an image. But when there is a human nearby, you might infer it’s less likely that the person is walking a wolf than a dog,” Choi explains. “Some of these visual correlations come from language. People talk about ‘walking a dog,’ for example. So we leveraged the associations in natural language to weed out noisy computer detection.”
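The intuition can be sketched in a few lines of code. This is a toy illustration, not the paper's actual pipeline: the labels, detector confidences, and language-association values below are all invented for demonstration, standing in for statistics that would be mined from a large text corpus.

```python
# Toy sketch: rerank ambiguous object-detector labels using
# language-derived association scores. All numbers are made up.

# Detector output: candidate labels with visual confidence scores.
detections = {"dog": 0.48, "wolf": 0.52}  # visually ambiguous

# Context objects the detector is already confident about.
context = ["person", "leash"]

# Hypothetical language-association scores, e.g. from how often
# phrases like "walking a dog" vs. "walking a wolf" occur in text.
language_assoc = {
    ("person", "dog"): 0.9,
    ("person", "wolf"): 0.05,
    ("leash", "dog"): 0.8,
    ("leash", "wolf"): 0.02,
}

def rerank(detections, context, assoc, weight=0.5):
    """Blend visual confidence with language-based association."""
    scores = {}
    for label, visual in detections.items():
        lang = sum(assoc.get((c, label), 0.0) for c in context) / len(context)
        scores[label] = (1 - weight) * visual + weight * lang
    return max(scores, key=scores.get)

print(rerank(detections, context, language_assoc))  # prints "dog"
```

Even though the detector slightly prefers "wolf" on visual evidence alone, the language prior for "person" and "leash" tips the combined score to "dog."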
The second Test of Time prize Choi received this year was from the Association for Computational Linguistics, for a paper she co-authored on deceptive opinions in online reviews back in 2011—long before politicians reflexively used the term “fake news.”
“People didn’t see deceptive comments as a relevant NLP research question back then,” admits Choi. “But I had a hunch that there might be some stylistic cues that the machine could detect, and that could be applied elsewhere.”
For this research, Choi and her colleagues worked with psychologists and computational linguistics experts to create and compare three approaches to detecting deceptive-opinion spam in online reviews. They ultimately developed a classifier that was nearly 90% accurate at detecting fakes in their dataset. Their contribution to the field of AI was to reveal a relationship between deceptive opinions and imaginative writing. That’s pretty subtle stuff.
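The kind of stylistic-cue classifier at work here can be illustrated with a minimal word-count model. This is a toy sketch with invented example reviews, not the study's actual dataset, features, or model; it just shows how word-choice statistics can separate the two classes.

```python
# Toy sketch: a tiny Naive Bayes classifier over word counts, of the
# general kind used for deceptive-opinion spam detection. The training
# "reviews" below are invented for illustration.
from collections import Counter
import math

train = [
    ("truthful", "the room was small and the floor creaked near the window"),
    ("truthful", "checked in at noon the elevator on the left was slow"),
    ("deceptive", "my husband and i had a wonderful amazing luxury experience"),
    ("deceptive", "i felt like royalty the most fantastic vacation imaginable"),
]

def train_nb(examples):
    """Collect per-class word counts."""
    counts = {"truthful": Counter(), "deceptive": Counter()}
    for label, text in examples:
        counts[label].update(text.split())
    return counts

def classify(counts, text, alpha=1.0):
    """Pick the class with the highest smoothed log-likelihood."""
    vocab = set(w for c in counts.values() for w in c)
    best, best_lp = None, -math.inf
    for label, c in counts.items():
        total = sum(c.values())
        lp = 0.0
        for w in text.split():
            lp += math.log((c[w] + alpha) / (total + alpha * len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = train_nb(train)
print(classify(model, "a wonderful luxury experience for my husband and i"))
```

Truthful reviews tend toward concrete, spatial detail, while fabricated ones lean on superlatives and first-person flourish; even this crude counter picks up on that stylistic difference.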
Choi has always been one to go against the grain. After her undergraduate degree at Seoul National University, she landed a great job as a systems engineer working on high-performance server designs at Microsoft. Still, she decided to pursue a PhD in natural language processing. “I was excited about AI before AI became hot again,” she says with a laugh. “The trendy topics were computer networks and databases. I was told it was still wintertime for AI. But I figured, you only live once. And if AI didn’t work out, I could always come back.” Choi was fascinated with “the great puzzle” around AI’s ability to mimic human intelligence. At Cornell she studied NLP and, in the midst of the global financial crisis, still managed to land a tenure-track spot at SUNY Stony Brook. She was the first NLP practitioner at her university.
As a senior research manager at AI2, Choi’s main focus is on teaching AI common sense, to read between the lines so as to understand what is unsaid and thus to truly communicate. “Common sense seems to be the biggest confounder confronting AI practitioners,” says Choi. “It was a failure back in 1970! But AI has improved dramatically and has broad applications. Without a common-sense understanding of the physical world and society, AI will never be intelligent enough to operate smartly and safely.”
At AI2, Choi is working on Mosaic, a project aiming to teach common-sense reasoning to machines. Along with dozens of datasets and knowledge graphs, Choi’s Mosaic team built two systems: Comet, a common-sense transformer, and Atomic, an “atlas of machine common sense.” Fed to GPT-3, these models achieve astounding results. “The sort of things you can do through our online demo is something people have never seen before,” says Choi, with unusual pride. “It is a remarkable achievement. It’s also the reason why I get invited to so many talks.” Choi has been burning up the speaking circuit lately.
Atomic and Comet are providing training sets that can improve downstream applications, like storytelling and crafting imaginative fantasy games. “Broadly, anything that’s human-centric could benefit from common-sense models. They have a much better understanding of human nature and society than other models,” says Choi.
Looking back on her earlier research, the professor recognizes a pattern: she chooses research paths that most thought were impossible. And she is getting close. Comet has a 77.5% success rate, not far below human performance, and it’s getting good at captioning stills from movies, which harks back to Choi’s 2011 paper at Stony Brook.
“It’s funny,” says Choi. “I was worried about these papers getting accepted back then, let alone getting awards.”