How do you code common sense? I have been thinking about that this week amid a flood of announcements on new agentic AI systems.
The AI industry has for decades fallen short of imparting common-sense knowledge to the models it trains, in part because the technology wasn’t there yet. Today’s generative platforms have learned a great deal from the sheer volume of information they have gobbled up, but tomorrow’s agentic systems will need common sense specific to an individual person’s context, which can’t be learned from large data sets.
To take an example from Amazon Scholar Michael Kearns, who also teaches computer science at the University of Pennsylvania, humans use common sense when deciding whether to close and lock doors, but those choices are also entirely individualistic. I always lock my apartment door, except when I’m just stepping out for a moment. I keep my office door open when I’m casually working but closed when I need to focus. Every choice is reasonable to me, but someone else may make different decisions.
In the digital realm, humans make endless choices in their work and personal lives that agents will somehow need to understand before they can reach the next level of intelligence and take actions on our behalf. It will take time for an agent to monitor a person’s habits and make sense of them, to the point where it can predict them.
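To make the idea concrete, here is a toy sketch (my own illustration, not anything described by Kearns or Amazon) of the simplest possible version of habit learning: an agent tallies which action a person takes in each observed context and predicts the most frequent one, returning nothing for contexts it hasn’t seen yet.

```python
from collections import Counter, defaultdict

class HabitModel:
    """Toy per-person habit learner: counts context -> action observations."""

    def __init__(self):
        # For each context, a tally of the actions the person took in it
        self.counts = defaultdict(Counter)

    def observe(self, context, action):
        self.counts[context][action] += 1

    def predict(self, context):
        if context not in self.counts:
            return None  # no habit learned yet -- the "clunky" early phase
        # Most frequently observed action in this context
        return self.counts[context].most_common(1)[0][0]

# Hypothetical observations, echoing the door-locking example above
model = HabitModel()
model.observe("leaving apartment", "lock door")
model.observe("leaving apartment", "lock door")
model.observe("stepping out briefly", "leave unlocked")
model.observe("working casually", "keep office door open")
model.observe("deep focus", "close office door")

print(model.predict("leaving apartment"))  # -> lock door
print(model.predict("new situation"))      # -> None
```

A real agent would of course need far richer context signals and ways to handle conflicting or shifting habits, but even this sketch shows why such systems start out clunky: with no observations for a context, there is nothing to predict.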
“Agents will probably be a bit clunkier in the beginning because they won’t have this yet,” Kearns told me. But whichever company individualizes agentic AI judgment first could deliver the next buzzy ChatGPT moment and catapult itself to the front of the AI race.