An agentic AI would be an autonomous entity that can plan over the long term to achieve complex goals; these abilities would be extremely valuable commercially because they could automate many human tasks. Disadvantages include the potential to create short-term economic chaos (due to labor-market disruption), to spell the end of humanity (if human control is lost), or to enable massive disruption in the hands of terrorists or malicious state actors. Right now, we do not know how to build superintelligent AI agents that would remain controlled and safe, and we already see clear signs of deceptive behavior in current frontier AIs. Another danger lies in plans to use potentially unreliable and scheming agentic AIs to help us design future AIs. Instead, it is worth asking whether and how we could build nonagentic AIs with broad knowledge but no self, no persistent state, no situational awareness, and no goal or reward function of their own. Such AIs could still do what many humans want AI to do (e.g., massively accelerate scientific advances in medicine or climate science) without posing the catastrophic risks named above. This talk will outline a research plan for doing so, one that avoids the pitfall of imitating humans (as in large language model pretraining) and instead learns to probabilistically explain data with interpretable latent causes, so that the probability of such latent events can also be queried. Such nonagentic AIs could fuel scientific discovery across the whole cycle of hypothesis generation and experimental design, and could serve as powerful yet trustworthy guardrails on top of agentic AIs. Finally, they could eventually help us answer the question of how to design guaranteed-safe AGI agents, if that is at all possible.
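To make the notion of querying latent causes concrete, here is a minimal, purely illustrative sketch (in Python, with made-up variable names and probabilities; it is not the method proposed in the talk) of a model that explains observed data through an interpretable latent cause and can then be queried for the posterior probability of that latent event.

```python
# Illustrative sketch only: a tiny discrete Bayesian model in which a latent
# cause C in {"disease", "healthy"} explains an observed symptom
# S in {"fever", "no_fever"}. All numbers are assumptions for illustration.

prior = {"disease": 0.01, "healthy": 0.99}            # P(C), assumed
likelihood = {                                        # P(S | C), assumed
    "disease": {"fever": 0.90, "no_fever": 0.10},
    "healthy": {"fever": 0.05, "no_fever": 0.95},
}

def posterior(observation: str) -> dict:
    """Return P(C | S = observation) by exact Bayesian inference."""
    unnormalized = {c: prior[c] * likelihood[c][observation] for c in prior}
    z = sum(unnormalized.values())                    # P(S = observation)
    return {c: p / z for c, p in unnormalized.items()}

# Query the probability of a latent event given observed data:
print(posterior("fever"))   # {'disease': 0.153..., 'healthy': 0.846...}
```

The point of the sketch is only the direction of inference: rather than imitating human outputs, the model assigns probabilities to interpretable latent explanations of the data, and those probabilities can be queried directly.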