Agentic refusal, the capacity of AI agents to refuse instructions, is a key instrument for AI risk mitigation. Yet it raises many unanswered questions of urgent regulatory concern. How does refusal interact with the rights and responsibilities of users, and of the owners of the systems and platforms with which agents interact? What standards should govern refusal architectures? Where refusal is legally mandated, how is compliance verified? And under what conditions should users be able to override a refusal?

Recent research also suggests that reliable refusal is not a technical fait accompli. Refusal architectures designed for chatbots may not generalize to agents, and the interaction between agents and other software, including other agents, may introduce novel challenges. This calls for a fuller mapping of refusal failure modes and their implications. When might a refusal system become a critical single point of failure? In what ways might refusal workflows be harder to implement in multi-agent collaboration? And how should responsibility for refusal failures be apportioned?

Working through these questions in detail, the talk concludes by outlining a framework of technical, legal, and ethical inquiry by which to resolve them.
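To make the generalization gap concrete, consider a minimal sketch of an agentic refusal gate. All names, tools, and policies here are hypothetical illustrations, not any deployed system's architecture. The point it illustrates is that a per-action check (the analogue of a chatbot refusing a single message) can pass every step of a plan whose overall trajectory still warrants refusal:

```python
from dataclasses import dataclass

@dataclass
class Action:
    tool: str
    args: dict

# Hypothetical per-action denylist (assumption, for illustration only).
BLOCKED_TOOLS = {"send_funds"}

def refuse_action(action: Action) -> bool:
    """Per-action check: the analogue of chatbot-style, single-message refusal."""
    return action.tool in BLOCKED_TOOLS

def refuse_trajectory(executed: list[Action], proposed: Action) -> bool:
    """Trajectory-level check: each step may look benign in isolation, but a
    sequence (here, reading contacts and then sending email) can cross a line
    that no individual action does. The rule itself is a toy assumption."""
    prior_tools = {a.tool for a in executed}
    return "read_contacts" in prior_tools and proposed.tool == "send_email"

def run_agent(plan: list[Action]) -> list[str]:
    """Execute a plan step by step, routing each step through both checks."""
    executed: list[Action] = []
    log: list[str] = []
    for action in plan:
        if refuse_action(action):
            log.append(f"refused (action): {action.tool}")
        elif refuse_trajectory(executed, action):
            log.append(f"refused (trajectory): {action.tool}")
        else:
            executed.append(action)
            log.append(f"executed: {action.tool}")
    return log
```

Running a plan such as `[read_contacts, send_email, send_funds]` shows the two failure surfaces diverge: the per-action gate catches only `send_funds`, while `send_email` is refused solely because of the trajectory that preceded it. This also hints at the single-point-of-failure question: if `refuse_trajectory` is the only component with visibility into the full history, its failure silently re-opens every multi-step path.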