CEOs and researchers alike expect AI to exceed human capabilities within our lifetimes.¹ Half of surveyed AI researchers think there is at least a 5% chance of "extremely bad outcomes, e.g. human extinction" from AI.² Aligning AI systems with human values is one of the most important problems of our time.
Through fellowships and events, we're building a community of future AI safety researchers, policymakers, and communicators at Northwestern.
LEARN MORE →
SPRING 2025 INTRO FELLOWSHIP →