Experimental Evidence on Decision-Making: Implications for Governance in an AI-Enabled Future
13 Feb 2026
14:00-15:00, Butler Room, Nuffield College
Dr. Brian Scholl is a leading economist at the intersection of regulation, financial markets, and artificial intelligence. He currently serves as a Staff Regulatory Researcher at Norm Ai, where he leads work on quantifying risk and designing data-driven, AI-enabled compliance programs for financial institutions.
Rapid advances in artificial intelligence have renewed longstanding questions about human decision-making, institutional design, and governance. Is this technological moment fundamentally different? How do new systems interact with human cognitive limitations? And under what conditions do they enhance—or undermine—trust, legitimacy, and social welfare?
This talk draws on a series of large-scale behavioural experiments I conducted in the context of financial regulation to examine how individuals make consequential decisions in environments characterized by complexity, asymmetric information, and institutional mediation. Retail investors routinely face choices—whether to participate in markets, how to allocate assets, how to interpret disclosures, and whether to rely on advice—that exceed the capacities of unaided human cognition. Traditional regulatory tools, particularly disclosure, have struggled to address these challenges and, in important respects, have fallen short of their intended protective role.
I review evidence from multiple randomized experiments that investigate how features of choice architecture shape behaviour, including linguistic complexity and jargon, reference points, performance benchmarks, and reliance on expert advice. Across settings, the findings reveal both the promise and the fragility of behavioural interventions: modest changes in presentation can meaningfully improve decisions, yet the same mechanisms can also be exploited by intermediaries facing competing incentives. In one set of studies, simplifying language and carefully structuring choice environments substantially improve comprehension and decision quality; in others, framing returns against performance benchmarks strongly influences investment choices even when underlying fundamentals are unchanged. A further experiment shows that individuals exhibit limited ability to screen the quality of financial advice, accepting poor guidance nearly as often as good guidance.
Taken together, these results highlight persistent limits to individual self-protection in complex systems and raise broader questions for the governance of AI-enabled decision support. While AI systems hold the potential to augment human capital, triage information, and personalize guidance at scale—benefiting individuals, institutions, and regulators alike—they also risk amplifying manipulation, opacity, and power asymmetries if left unchecked.
The talk concludes by using these experimental findings as a lens to prompt discussion about the design and regulation of AI-enabled institutions. Rather than treating AI as a replacement for human judgment, the discussion emphasizes how evidence on human behaviour can inform regulatory and institutional frameworks that are more trustworthy, legitimate, and aligned with long-run systemic stability—while acknowledging the risks such systems pose and the possibility that fundamentally new governance approaches may be required.
Talking to Machines Seminar Series
Friday, 13th February 2026, 14:00–15:00
Hybrid Format: Butler Room, Nuffield College & Zoom