Current Research
Multiagent systems, AI safety, game theory, and computational social science.
The Humans and Autonomous Agents Lab works on several aspects of multiagent systems that interact with humans in complex environments. We conduct both theoretical and experimental studies to understand the nature of human and automated reasoning within these systems. Our focus is interdisciplinary — drawing from computer science, engineering, psychology, and economics.
1 Autonomous Vehicles and Human Interaction
For the foreseeable future, self-driving cars will interact with human-driven vehicles and other road users. Game-theoretic models of human driving behaviour are a promising tool for the testing and verification of AVs. We have developed a suite of models and methods at the intersection of game theory and behaviour planning and modelling for autonomous driving.
1.1 Driving Games
This work established design choices for constructing hierarchical driving games, together with behavioural game-theoretic solution concepts, producing over thirty behaviour models evaluated for model fit and predictive accuracy on naturalistic data. Published in AAAI (2021).
1.2 Revealed Multiobjective Preferences
Driving is a multiobjective task, and humans aggregate safety and progress objectives in context-specific ways. Building on rationalisability, we developed algorithms for estimating how drivers aggregate multiple objectives from their observed driving decisions. Published in AAMAS (2023).
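As an illustrative sketch only (not the paper's algorithm), the idea of recovering a trade-off between objectives from revealed choices can be shown with a linear aggregation and a grid search: the function name, the two-objective scoring, and the data below are all hypothetical.

```python
# Sketch: recover a driver's trade-off weight between safety and progress
# from observed choices. Each observation is (options, chosen_index), where
# each option is a (safety, progress) score pair. Hypothetical data.

def best_fit_weight(observations, grid_steps=101):
    """Grid-search the weight w in [0, 1] on safety (1 - w on progress)
    that rationalises the most observed choices as utility maximisation."""
    best_w, best_hits = 0.0, -1
    for i in range(grid_steps):
        w = i / (grid_steps - 1)
        hits = 0
        for options, chosen in observations:
            utilities = [w * s + (1 - w) * p for s, p in options]
            if utilities[chosen] == max(utilities):
                hits += 1  # this choice is consistent with weight w
        if hits > best_hits:
            best_w, best_hits = w, hits
    return best_w, best_hits

obs = [
    ([(0.9, 0.2), (0.4, 0.8)], 0),  # chose the safer option
    ([(0.6, 0.3), (0.5, 0.9)], 0),  # chose the safer option again
    ([(0.2, 0.9), (0.3, 0.1)], 0),  # progress won only when safety was close
]
w, hits = best_fit_weight(obs)
```

A real revealed-preference method would handle noise, richer aggregation functions, and partial identification of the weight set; this toy only conveys the inverse-optimisation flavour.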
1.3 Cognitive Hierarchy Models
This work addresses model instability and uncertainty in dynamic traffic games by developing a finite-state transducer-based level-0 model within the level-k framework, and a generalised cognitive hierarchy model with nonstrategic, strategic, and robust layers to handle heterogeneous reasoners. Published in AAAI (2022).
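The core level-k recursion can be sketched in a few lines. This toy is not the paper's model (whose level-0 is a finite-state transducer over traffic context); the symmetric merge game and payoff numbers are invented for illustration.

```python
# Toy symmetric merge game: action 0 = yield, action 1 = go.
# U[my_action][opponent_action] is my payoff (hypothetical values).
U = [[2, 2],   # yielding is safe either way
     [3, 0]]   # going pays off only if the opponent yields

def best_response(payoffs, opponent_action):
    # Action maximising my payoff against a fixed opponent action.
    return max(range(len(payoffs)), key=lambda a: payoffs[a][opponent_action])

def level_k_action(payoffs, k, level0_action=1):
    """Level-0 plays a fixed nonstrategic action; a level-k agent best
    responds to a level-(k-1) opponent (symmetric payoffs in this toy)."""
    action = level0_action
    for _ in range(k):
        action = best_response(payoffs, action)
    return action

# An aggressive level-0 goes; level-1 yields to it; level-2 goes again.
actions = [level_k_action(U, k) for k in range(3)]
```

The alternation across levels is exactly the instability a cognitive hierarchy model smooths out by best-responding to a mixture over lower levels rather than a single one.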
1.4 Game-Theoretic Safety Validation
We developed novel safety validation methods for AV planners: a Quantal Best Response model for interpretable lane-change behaviour generation, and a hypergame-based Dynamic Occlusion Risk (DOR) metric for evaluating safety under dynamic occlusion. Published in IROS (2019) and ICRA (2022).
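A quantal (logit) best response replaces the strict argmax with a softmax over utilities, so suboptimal actions keep nonzero probability. The utilities below are hypothetical lane-change payoffs, not values from the paper.

```python
import math

def quantal_best_response(utilities, lam=1.0):
    """Logit choice probabilities with rationality parameter lam:
    lam -> 0 gives uniform play; lam -> inf approaches strict best response."""
    m = max(utilities)                        # shift for numerical stability
    weights = [math.exp(lam * (u - m)) for u in utilities]
    total = sum(weights)
    return [wt / total for wt in weights]

# Hypothetical utilities for {stay, merge now, merge late}.
probs = quantal_best_response([1.0, 2.0, 0.5], lam=2.0)
```

Because every action retains positive mass, sampling from these probabilities yields plausible but imperfect human-like behaviour, which is what makes the model useful for generating challenging validation scenarios.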
1.5 Taxonomy of Strategic Interactions
We developed a taxonomy that maps complex design-specific strategies in a driving game to a simpler, explainable taxonomy of traffic interactions — enabling regulators to reason over simplified strategies while protecting proprietary AV design decisions. Presented at NeurIPS Cooperative AI Workshop (2021).
2 Software and Societal Systems
2.1 Opinion Expression and Silence on Social Platforms
We developed a nested game-theoretic model showing how observed online opinion is shaped by user decisions, viewpoint organisations, and AI-powered recommender systems. The model explains how signals from ideological organisations drive rhetorical intensity and the rational silence of moderate users — producing apparent polarisation without changes in underlying beliefs. Published in JAIR (2026).
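The "rational silence" mechanism can be conveyed with a toy threshold model, which is not the nested game from the paper: a user expresses an opinion only when the benefit outweighs an expected social cost that grows with distance from the visible, recommender-amplified consensus. All names and numbers here are illustrative.

```python
# Toy spiral-of-silence sketch: moderates fall silent, so the visible
# opinion pool looks more extreme than the underlying population.

def expresses(opinion, visible_consensus, benefit=1.0, cost_per_gap=2.0):
    """Express only if the benefit exceeds the expected social cost,
    which scales with distance from the visible consensus."""
    gap = abs(opinion - visible_consensus)
    return benefit > cost_per_gap * gap

population = [-0.9, -0.2, 0.0, 0.3, 0.8]   # hypothetical opinions in [-1, 1]
visible = [o for o in population if expresses(o, visible_consensus=0.8)]
```

With an amplified consensus at 0.8, only the user already near it speaks; the moderates stay silent even though the underlying distribution of beliefs is unchanged, producing apparent polarisation in the observed discourse.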
3 Norms and Values in Multiagent Systems
3.1 Normative Modules in Generative Agents
The Normative Module framework enables generative agents to recognise and adapt to normative infrastructure — learning through peer interactions which institutions a group treats as authoritative. Coordinated sanctioning behaviour shaped by the module leads to higher average welfare and more stable cooperative outcomes. Presented at the Economics and Computation Workshop on Foundation Models and Game Theory (2024).
3.2 Structural Transparency of Societal AI Alignment through Institutional Logics
This paper introduces "structural transparency," a framework for analyzing how organizational and institutional forces shape AI alignment decisions — an area overlooked by existing transparency approaches that focus mainly on technical and informational aspects. It uses Institutional Logics theory to map alignment governance decisions to sociotechnical harms, offering analysts a way to examine the macro-level institutional dynamics behind AI value integration. (With Isam Faik).
3.3 Trust Formation in Human–Generative Agent Interaction
Work in progress.
4 Generative Multiagent Modelling for Socio-Ecological Systems
Designing institutions for social-ecological systems requires models that capture heterogeneity, uncertainty, and strategic interaction. In this work, we compare four LLM-augmented frameworks — procedural ABMs, generative ABMs, LLM-EGTA, and expert-guided LLM-EGTA — evaluated on a real-world case study of irrigation and fishing governance in the Amu Darya basin. Accepted at AAMAS 2026.