Real Ethics for Artificial Agents?

Day: Feb 6, 2025
Time: 4:30–6pm
Session ID: Track 06
Location: CC7
Abstract:

As AI continues to develop, our world will increasingly be shared with highly capable artificial agents operating with a significant degree of autonomy. This raises a host of safety concerns, and while various restrictions and guardrails are being designed for such agents, the openness and complexity of our shared social world requires that AI agents have some capacities of their own for identifying and responding appropriately to the ethical features of situations and actions. While it might seem that such responsiveness would require full moral personality, a great deal of interpersonal ethics—including many of the aspects most important for safety, such as those found in traditional social contract theory—depends upon aspects of agency, planning, self-regulation, shared interests, and mutual restraint that might be within the reach of emerging AI agents. Indeed, recent experimental work with multiagent AI has begun to provide some evidence of this. Current AI systems and agents owe their open-ended capabilities primarily to learning rather than preprogramming. Might individual and social learning processes also enable AI agents to acquire open-ended competence with ethically relevant features, gaining better grounding and becoming potential allies in promoting AI safety?

Speakers: