AI systems are increasingly being used to make decisions about the distribution of resources. This includes the distribution of benefits (e.g., jobs, loans, and educational opportunities) and costs (e.g., vehicle navigation systems, risk assessment for criminal justice, and medical triage). Companies designing these systems have an ethical obligation to ensure that the recommendations and decisions of these systems are fair. But what does it mean for an AI system to be fair? This talk will address this problem from the perspective of a Rawlsian theory of algorithmic justice, where systems must be designed with a hierarchical set of priorities to: (1) provide minimal levels of accuracy for a task, relative to safety impacts in that domain, (2) ensure that decisions are caused by relevant features (especially those under a person’s control) and not by irrelevant features (especially those that have been tools of historical oppression), (3) provide recognition for actual qualifying traits, and (4) realize the potential of individuals who would have enjoyed good outcomes but for historical oppression.
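A minimal sketch of how such a lexically ordered (hierarchical) audit might be encoded is shown below; the class, check functions, thresholds, and data fields are illustrative assumptions for exposition, not an implementation described in the talk.

```python
# Hypothetical sketch of a lexically ordered (Rawlsian-style) fairness audit.
# All names, thresholds, and fields are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class AuditContext:
    """Illustrative inputs an auditor might inspect for one deployed system."""
    task_accuracy: float                 # measured accuracy on the task
    required_accuracy: float             # minimum accuracy given the domain's safety impact
    uses_irrelevant_features: bool       # e.g., proxies for historically oppressed groups
    recognizes_qualifying_traits: bool   # decisions track actual qualifications
    corrects_historical_harm: bool       # supports those who would have succeeded but for oppression


def check_accuracy(ctx: AuditContext) -> bool:
    return ctx.task_accuracy >= ctx.required_accuracy


def check_relevant_causes(ctx: AuditContext) -> bool:
    return not ctx.uses_irrelevant_features


def check_recognition(ctx: AuditContext) -> bool:
    return ctx.recognizes_qualifying_traits


def check_realized_potential(ctx: AuditContext) -> bool:
    return ctx.corrects_historical_harm


# Lexical (hierarchical) ordering: a lower-priority criterion matters
# only once every higher-priority criterion is already satisfied.
PRIORITIES: List[Tuple[str, Callable[[AuditContext], bool]]] = [
    ("1. minimal accuracy relative to safety impact", check_accuracy),
    ("2. decisions caused by relevant, not irrelevant, features", check_relevant_causes),
    ("3. recognition of actual qualifying traits", check_recognition),
    ("4. realized potential despite historical oppression", check_realized_potential),
]


def audit(ctx: AuditContext) -> List[str]:
    """Return the first unmet priority, or a pass if all four are satisfied."""
    for name, check in PRIORITIES:
        if not check(ctx):
            return [f"FAILS {name}"]  # lower priorities are moot until this is fixed
    return ["passes all four priorities"]


if __name__ == "__main__":
    example = AuditContext(
        task_accuracy=0.93,
        required_accuracy=0.95,
        uses_irrelevant_features=False,
        recognizes_qualifying_traits=True,
        corrects_historical_harm=False,
    )
    print(audit(example))  # -> ['FAILS 1. minimal accuracy relative to safety impact']
```

The point of the sketch is the control flow: because the priorities are ordered lexically, a system that fails a higher criterion is not redeemed by satisfying a lower one.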