
Computational Minds and Machines Lab

University of Washington


Our mission is to reverse engineer social intelligence: how it works, why it works the way it does, and how it came to be. We aim to better understand how people learn from and cooperate with others, and to use that understanding to build more human-like, human-aligned AI systems that can cooperate with people. We work at the intersection of computational cognitive science and artificial intelligence, combining formal modeling approaches from multi-agent reinforcement learning, language models, game theory, and probabilistic programming with data from behavioral experiments and agentic simulations. These tools allow us to answer fundamental computational questions about how minds and machines work and how they can better work together. Our research spans the following topics:

Cooperation

Humans cooperate with others to accomplish together what no individual could do alone. We share the benefits of cooperation fairly and trust others to do the same. These abilities are unparalleled in other animal species and still lacking in even our most sophisticated artificial intelligence. A young child barely able to walk or talk can already help others, divide the fruits of their labor fairly, and solve social dilemmas using rules, roles, and norms. Such feats are just the prelude to the unprecedented levels of cooperation found across modern social institutions such as government, science, and enterprise. What cognitive processes and representations underlie our social intelligence, and how do these abilities enable the distinct scale and scope of human cooperation? Can we build machines that cooperate with people as flexibly as a friend or colleague?

Social Learning

People readily infer the hidden contents of other people's minds, learning what others know, want, and feel, often from just a few observations. From chipped stone tools to microchips, we build on what others have learned, and those ideas accumulate and compound. This cumulative culture lets us stand on the shoulders of past giants and do more than any individual could accomplish in a lifetime. What cognition underlies social learning, and how does it scale to cumulative culture and innovation? How do individuals contribute to collective intelligence through teaching and communication? Can we build machines with "Theory of Mind" that will accelerate the growth of human knowledge? Or will AI merely reinforce existing ideas and biases?

AI Alignment and Morality

As philosophers have noted: goodness has no image, evil has no taste, and virtue has no smell. How, then, do we learn moral values and raise moral children? Why are we moral, and how do human values adapt across time and culture? Human morality has been shaped by our cultural and evolutionary history. Can we do for AI what nature did for us and create moral machines that are aligned with our values? Today, we are more capable than our AI agents and "parent" them through supervision and reinforcement. When the roles reverse, how should AI take care of us to protect and advance our autonomy?

We are recruiting talented individuals to join the lab. Learn more about joining.