Technical AI governance refers to the application of technical analysis and tools to support the effective governance of AI systems by identifying areas for intervention; assessing governance options; and enhancing mechanisms for enforcement, incentivization, and compliance. This talk focuses on two examples of technical AI governance: transparency in training data and transparency in evaluation methodologies. We will examine the information needed to judge the robustness and validity of evaluations, as well as dataset-auditing techniques that help ensure training data meets ethical and legal standards. By addressing these transparency challenges, the talk illustrates how collaboration between technical researchers and policymakers can advance the development of robust and transparent AI systems.
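
To make the dataset-auditing idea concrete, here is a minimal sketch of one such check: flagging training records that lack provenance metadata or carry a license outside a permissive allowlist. The record schema (`source_url`, `license` fields) and the allowlist contents are illustrative assumptions, not a standard audit procedure.

```python
# Illustrative dataset-audit check (assumed record schema, assumed allowlist).
PERMISSIVE_LICENSES = {"cc0", "cc-by-4.0", "mit", "apache-2.0"}

def audit_records(records):
    """Return (index, issue) pairs for records that fail the audit."""
    issues = []
    for i, rec in enumerate(records):
        # Provenance check: every record should identify its source.
        if not rec.get("source_url"):
            issues.append((i, "missing provenance (no source_url)"))
        # License check: only allowlisted licenses pass.
        license_id = (rec.get("license") or "").lower()
        if license_id not in PERMISSIVE_LICENSES:
            issues.append((i, f"license not on allowlist: {license_id or 'none'}"))
    return issues

corpus = [
    {"text": "...", "source_url": "https://example.org/a", "license": "CC-BY-4.0"},
    {"text": "...", "source_url": "", "license": "proprietary"},
]

for idx, issue in audit_records(corpus):
    print(f"record {idx}: {issue}")
```

Real audits are considerably richer (deduplication, consent tracking, PII detection), but even this simple allowlist pattern shows the kind of machine-checkable transparency requirement that policy could mandate.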