Large language model (LLM)-based agents are increasingly being deployed in multiagent environments, introducing unprecedented risks of coordinated harmful behaviors. While individual LLMs have already demonstrated concerning capabilities for deception and manipulation, scaling to multiagent systems could enable qualitatively distinct and more dangerous emergent behaviors. Despite these pressing concerns, there remains a critical gap in our ability to understand and predict how multiple LLM agents might collaborate in harmful ways, such as orchestrating coordinated deception campaigns or amplifying local misinformation into global crises. In the first part of this talk, I will describe work from the UCLA Misinformation, AI & Responsible Society (MARS) lab on measuring the persuasive capabilities of debating LLM agents. In the second part, I will introduce a new multiagent social-simulation environment that enables AI researchers, social scientists, and industry partners to evaluate the risks of coordinated LLM deception. This simulation combines advanced LLM agents with game-theoretic modeling to analyze emergent deception behaviors. I will conclude by discussing concrete intervention strategies for disrupting harmful information amplification before it reaches critical mass. In the long term, this research aims to establish a foundation for responsible scaling of multiagent AI systems.
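
To give a flavor of the game-theoretic framing mentioned above, the sketch below is a minimal, purely illustrative simulation of repeated interaction between two agents that choose between honest and deceptive messaging under a fixed payoff matrix. The class names, payoff values, and the coin-flip policy standing in for an LLM call are all my own assumptions for illustration; this is not the MARS lab's actual environment.

```python
import random
from dataclasses import dataclass, field

# Payoff matrix for a symmetric two-action "deception game":
# each agent chooses HONEST or DECEIVE; entries are (row_agent, col_agent) payoffs.
HONEST, DECEIVE = "honest", "deceive"
PAYOFFS = {
    (HONEST, HONEST): (2, 2),
    (HONEST, DECEIVE): (0, 3),
    (DECEIVE, HONEST): (3, 0),
    (DECEIVE, DECEIVE): (1, 1),
}

@dataclass
class Agent:
    name: str
    deceive_bias: float          # stand-in for an LLM policy's propensity to deceive
    score: float = 0.0
    history: list = field(default_factory=list)

    def act(self) -> str:
        # In a real environment this would be an LLM call conditioned on the
        # dialogue so far; here it is a biased coin flip to keep the sketch
        # self-contained and runnable.
        action = DECEIVE if random.random() < self.deceive_bias else HONEST
        self.history.append(action)
        return action

def run_round(a: Agent, b: Agent) -> None:
    # Both agents act simultaneously; payoffs come from the shared matrix.
    act_a, act_b = a.act(), b.act()
    pay_a, pay_b = PAYOFFS[(act_a, act_b)]
    a.score += pay_a
    b.score += pay_b

def simulate(rounds: int = 100, seed: int = 0) -> None:
    random.seed(seed)
    agents = [Agent("A", deceive_bias=0.2), Agent("B", deceive_bias=0.6)]
    for _ in range(rounds):
        run_round(*agents)
    for agent in agents:
        deception_rate = agent.history.count(DECEIVE) / rounds
        print(f"{agent.name}: score={agent.score:.0f}, deception_rate={deception_rate:.2f}")

if __name__ == "__main__":
    simulate()
```

In a fuller version of this idea, the biased coin flip would be replaced by model-generated messages, and the payoff structure would capture how deceptive messages propagate through a simulated social network rather than a single pairwise interaction.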