
In the past decade, AI’s success has led to uncurbed enthusiasm and bold claims, even though users frequently experience the errors AI makes. An AI-powered digital assistant can misunderstand someone’s speech in embarrassing ways, a chatbot can hallucinate facts, or, as I experienced, an AI-based navigation tool can guide drivers through a corn field, all without registering the errors.
Over the past 25 years, I have worked on projects ranging from coordinating traffic lights and improving bureaucratic processes to detecting tax evasion. Even though these systems can be highly effective, they are never perfect.
For AI in particular, errors might be an inescapable consequence of how the systems work. My lab’s research suggests that particular properties of the data used to train AI models play a role in producing these errors. This is unlikely to change, regardless of how much time, effort, and funding researchers direct at improving AI models.
Nobody—and nothing, not even AI—is perfect
As Alan Turing, considered the father of computer science, once said: “If a machine is expected to be infallible, it cannot also be intelligent.” This is because learning is an essential part of intelligence, and people usually learn from mistakes. I see this tug-of-war between intelligence and infallibility at play in my research.



