A Matter of Principle? AI Alignment as the Fair Treatment of Claims

Day: Feb 6, 2025
Time: 4:30–6pm
Session ID: Track 06
Location: CC7
Abstract:

The challenge of AI alignment centers on what goals or values to encode in AI systems to govern their behavior. A number of answers have been proposed, including the notions that AI must be aligned with human intentions and that it should aim to be helpful, honest, and harmless. Both suffer from critical weaknesses. On the one hand, they're incomplete: neither specification provides adequate guidance to AI systems deployed across various domains with multiple parties. On the other hand, the justification for these approaches is questionable and, I shall argue, of the wrong kind. More specifically, neither approach takes seriously the need to justify the operation of AI systems to those affected by their actions—or what this means for pluralistic societies where people have different underlying beliefs about value. To address these limitations, I'll develop an alternative account of AI alignment that focuses on fair processes. This account holds that the principles produced by such processes are the appropriate target for alignment. This new approach meets the necessary standard of public justification, generates a fuller set of principles for AI that are sensitive to variation in context, and has explanatory power insofar as it identifies a set of previously underappreciated ways in which AI systems may cease to be aligned.

Speakers: