Safety cases—clear, assessable arguments for the safety of a system in a given context—are a technique widely used across industries to show decision-makers (e.g., boards, customers, third parties) that a system is safe. In this talk, we cover how and why frontier AI developers might also want to use safety cases. We then argue that writing and reviewing safety cases would substantially assist developers in fulfilling many of the Frontier AI Safety Commitments. Finally, we outline open research questions on the methodology, implementation, and technical details of safety cases.