The Alignment of Complex Systems Research Group (ACS) is an interdisciplinary group studying questions arising in multi-agent systems composed of humans and advanced AI systems.
Our goal is to develop an understanding of advanced AI systems and the complex ways in which they are embedded in, and interact with, humans and human institutions. Building on this understanding, we seek to help make such systems safe and aligned with shared human values such as freedom, kindness, and justice.
Our research draws on insights and methods from a wide range of fields, including machine learning, information theory, network theory, active inference, philosophy of science, and ecology. Our team includes philosophers and ML engineers, physicists and cognitive scientists.