Aligning the objectives of artificial intelligence agents with those of humans can greatly enhance these systems’ ability to flexibly, safely, and reliably meet humans’ goals across diverse contexts, from space exploration to robotic manufacturing. However, it is often difficult or impossible for humans, both expert and nonexpert, to enumerate their objectives comprehensively, accurately, and in forms that are readily usable for agent planning. Value alignment is an open challenge in AI that aims to address this problem by enabling agents to infer human goals and values through interaction. Providing humans with direct and explicit feedback about this value-learning process through explainable AI (XAI) can enable them to teach agents about their goals more efficiently and effectively. In this talk, I will introduce the transparent value alignment (TVA) paradigm, which captures this two-way communication and inference process, and will discuss foundations for the design and evaluation of XAI within this paradigm, including human-centered metrics for alignment, models of agent transparency, and algorithms for automatic generation of user-tailored explanations.