Mandatory and detailed reporting of serious incidents is an essential part of AI governance, one that has recently been codified in the EU's AI Act. While such reporting mechanisms have proven effective in sectors like aviation and healthcare, general-purpose AI (GPAI) presents unique challenges that require innovative approaches to serious incident investigation. This presentation will explore established practices in safety-critical industries, where incident reporting has successfully informed regulatory oversight and prevented recurrence of adverse events, whilst also addressing the novel characteristics of GPAI that merit extra attention. These include difficulties with causality analysis, which is inherently challenging in a paradigm of ever-larger GPAI models with limited interpretability. The framework proposed in this presentation, designed to feed into the ongoing drafting process for the EU AI Act's Code of Practice, introduces a tiered reporting process for GPAI model providers to keep authorities appropriately informed, a two-stage causality analysis approach, and a final serious incident report template. The presentation will conclude with a mechanism by which serious incidents, and their reports, could be used to update AI policy, such as the Code of Practice, as risks begin to materialise.