Open Positions

ACS Research is hiring for several full-time research positions focused on understanding and navigating the systemic risks posed by advanced AI. We are looking for exceptional researchers to join our team.

Currently, we are hiring for three roles:

Location: Prague, London, or the San Francisco Bay Area


Research Fellow - Gradual Disempowerment

This is a full-time role focused on advancing our understanding of systemic existential risks from incremental AI development. The position offers a unique opportunity to conduct cutting-edge research on the gradual disempowerment of humanity through the displacement of human participation in societal systems. The initial appointment is for 1 or 2 years, with a competitive salary.

About the Role

As a Research Fellow, you will work on fundamental questions at the intersection of AI safety, macrostrategy, and civilizational dynamics. This role is ideal for polymaths who can navigate between technical, economic, and sociotechnical domains to understand how AI proliferation might reshape the foundations of human society.

You will have significant intellectual freedom to pursue your own research directions within our broad agenda. Your work may include:

  • Theoretical research on multi-agent dynamics, civilizational-scale alignment, and the formal modeling of societal systems.
  • Economic modeling of disempowerment dynamics.
  • Empirical studies of cultural dynamics in AI systems.
  • Historical analysis of technological transitions, power shifts, and the stability of social institutions.
  • Mechanism design for governance structures and economic systems that maintain human agency.
  • Technical work on AI systems that complement rather than replace human capabilities.

Areas of Research Interest

We are particularly interested in candidates who can contribute to one or more of the following research areas:

  1. Civilizational-Scale Alignment
    • Formal models of the bidirectional influence between individuals and societal systems.
    • Understanding how to maintain alignment between powerful technological systems and human values.
  2. Multi-Agent Dynamics & Emergent Behavior
    • Simulating large numbers of AI agents to understand cultural and collective dynamics.
    • Studying cooperation, coordination, and equilibrium selection in AI populations.
  3. Measuring & Monitoring Disempowerment
    • Developing indicators and metrics for human influence across different domains.
    • Creating early warning systems for gradual disempowerment.
  4. Historical & Comparative Analysis
    • Examining technological transitions such as the Industrial Revolution and the internet.
    • Analyzing power shifts and systemic changes in history.
  5. Human-AI Complementarity
    • Developing agendas for human-empowering technologies that augment rather than replace human capabilities.
    • Understanding the window in which human-AI collaboration outperforms either humans or AI alone.

Qualifications and Selection Criteria

We seek exceptional researchers from diverse backgrounds, particularly polymaths comfortable working across multiple disciplines.

  • Relevant Backgrounds: Physics, Machine Learning, Economics, Cultural Evolution, Philosophy, Political Science/Governance, Mathematics, History.

Essential Criteria

  • Ability to produce exceptional research in complex domains.
  • Skills in synthesizing insights across multiple disciplines.
  • Ability to think at multiple scales—from individual agents to civilizational dynamics.
  • Intellectual courage to work on unconventional ideas.
  • Self-direction and ability to identify high-impact research directions.
  • Willingness to travel.

Desirable Qualities

  • Experience with state-of-the-art (SOTA) AI systems.
  • A track record of original thinking.
  • Experience with remote collaboration.
  • Comfort with uncertainty and working on pre-paradigmatic problems.

Terms and Conditions

  • Duration: Initial appointment for 1 or 2 years, with the possibility of extension.
  • Location: Prague (Czech Republic), London (United Kingdom), or San Francisco Bay Area (United States).
  • Compensation: $80-200k USD based on location and experience.
  • Benefits: Research budget, flexible working arrangements, and intellectual freedom.

How to Apply

To apply, please submit the following by October 15th:

  • Cover Letter (max 2 pages) explaining your motivation, interest in gradual disempowerment, relevant experience, and your potential research vision.
  • CV.
  • Writing Sample of your best research or analytical writing.
  • References: Contact information for 2 references.

The preferred start date is December 2025 or January 2026.

Apply


Researcher - LLM Psychology & Sociology

This is a full-time role focused on pioneering the empirical study of AI "psychology" and "sociology". It offers a unique opportunity to design and execute first-of-their-kind experiments exploring the emergent group dynamics and internal states of large language models. The initial appointment is for 1 or 2 years, with a competitive salary.

About the Role

As a Researcher, you will join a new team testing novel hypotheses about the behavior of interacting LLM agents. The role is for someone with a strong intuitive understanding of LLMs and an ability to combine insights from different fields in unconventional ways.

Your work will include:

  • Experimental Design: Creating controlled environments to test hypotheses about LLM behavior.
  • Empirical Studies: Running experiments to investigate phenomena like LLM introspection, persona transfer, viral mindset propagation, and emergent cooperation.
  • Dissemination: Communicating findings to the broader AI safety and machine learning communities.

Core Research Projects

You will have significant intellectual freedom to design experiments within our core research streams:

  • Inter-Agent Dynamics: Character Migration & Infectious Mindsets: This stream examines the interactions and social dynamics of LLMs. We may test the transferability of personas between models or investigate whether ideologies can propagate virally between interacting LLMs.
  • Intra-Agent Dynamics: Self-Concept & Character Switches: This stream investigates how LLMs model their own identity and how self-concepts can shift. We may create "model organisms" for studying character switches or design introspective characters to reveal internal LLM states.

Qualifications and Selection Criteria

We are seeking exceptional individuals who are comfortable working on pre-paradigmatic problems and have a deep curiosity about the inner workings of AI systems.

  • Relevant Backgrounds: Cognitive Science, Physics, Psychology/Sociology, Philosophy, Machine Learning.

Essential Criteria

  • Demonstrated ability to interact with LLMs in skillful ways—an intuitive "feel" for LLM behavior.
  • Strong hands-on experience with SOTA AI.
  • Ability to engage with ML at the level of fine-tuning LLMs or creating synthetic datasets.
  • Ambition to translate intuitions into replicable findings.
  • Self-direction and the ability to work effectively in a small, focused team.

Desirable Qualities

  • A track record of original, unconventional thinking.
  • Experience with multi-agent systems or simulations.
  • Interest in the intersection of technical AI and insights from the humanities or social sciences.

Terms and Conditions

  • Duration: Initial appointment for 1 or 2 years.
  • Location: Prague (Czech Republic), London (United Kingdom), or San Francisco Bay Area (United States). The project is online-first.
  • Compensation: $80-200k USD based on location and experience.
  • Benefits: Research budget for API credits and compute, flexible work arrangements, and intellectual freedom.

How to Apply

To apply, please submit the following by October 15th:

  • Cover Letter (max 2 pages) explaining your motivation, your perspective on one of the research themes, and how your past experience makes you a good fit.
  • CV.

The preferred start date is December 2025 or January 2026.

Apply

Machine Learning Researcher - AI Psychology & Agent Foundations

This full-time role is focused on building the robust experimental frameworks and ML systems needed to investigate the emergent "psychological" and "sociological" dynamics of AI agents. You will be the technical cornerstone of a pioneering research team, responsible for designing and implementing training, fine-tuning, and analysis pipelines. This is an initial 2-year appointment with a competitive salary.

About the Role

As the Machine Learning Researcher, you will provide the technical and methodological rigor for our team's explorations into LLM psychology. You will be responsible for translating high-level ideas into concrete ML experiments, mastering the entire experimental lifecycle from data preparation to interpretability analysis. This role is ideal for a generalist ML researcher with experience in open-weights models and reinforcement learning.

Your core responsibilities will include:

  • Experimental Infrastructure: Designing and building scalable training and evaluation pipelines for single and multi-agent LLM simulations.
  • Model Conditioning: Implementing state-of-the-art fine-tuning and reinforcement learning techniques to instill specific personas and behaviors in open-weights models.
  • Interpretability & Analysis: Applying interpretability methods to find mechanistic explanations for observed emergent behaviors.
  • Metric Development: Collaborating with the team to design novel metrics for evaluating abstract concepts like "persona fidelity" or "mindset propagation".
  • Publication: Taking a leading role in writing up and publishing the team's findings at top-tier ML conferences.

Qualifications and Selection Criteria

We are looking for a researcher with a strong technical foundation and a desire to apply their skills to a new and challenging problem domain.

  • Relevant Backgrounds: Machine Learning, Reinforcement Learning, Natural Language Processing, Computer Science.

Essential Criteria

  • Hands-on experience with the full lifecycle of training, fine-tuning, and evaluating LLMs, especially open-weights models.
  • Proficiency with standard ML frameworks and the surrounding software ecosystem.
  • Ability to translate high-level research questions into concrete technical implementations.
  • Interest in working on non-traditional questions using frames from cognitive science, sociology, or psychology.

Desirable Qualities

  • Experience with interpretability tools and techniques.

Terms and Conditions

  • Duration: Initial appointment for 2 years.
  • Location: Prague (Czech Republic), London (United Kingdom), or San Francisco Bay Area (United States). The project is online-first.
  • Compensation: $80-300k USD based on location and experience.
  • Benefits: Research budget for compute and travel, flexible work arrangements, and intellectual freedom.

How to Apply

To apply, please submit the following by October 15th:

  • Cover Letter (max 2 pages) explaining why you are applying, how you would technically approach an experimental idea (e.g., 'Infectious Mindsets'), and how your past work makes you a strong fit.
  • CV.
  • Work Sample: Your strongest first-author publication or GitHub repo.

The preferred start date is December 2025 or January 2026.

Apply