AGI promises significant benefits, but it also poses risks. While there is growing agreement on the risks of misuse and on paths to assess and mitigate them, two unhealthy debates continue to plague the AI community. The first is whether misalignment is a real threat or just sci-fi: do we need to work proactively to avoid it, or will we figure things out along the way? The second is whether this is all just a distraction from present-day harms. In this talk, I'll address these two debates head-on, outlining my proposal for a path to productive solutions across the spectrum of risks.