Preparing for a new ERA of AI safety research.

The Cambridge ERA:AI Fellowship provides aspiring AI safety and governance researchers with a paid, in-person, eight-week summer research fellowship at the University of Cambridge. Applications for Summer 2024 have now closed.

Addressing the array of risks posed by advanced AI requires both technical and governance approaches. We host researchers from both domains, and we are especially excited about projects that unite technical and policy research.

01

Technical

New technologies present new opportunities for good, but they often carry novel risks as well. Advanced agents more intelligent than ourselves pose a distinctive challenge: how can we maintain control over the trajectory of the future as we hand over ever more decision-making to these systems? Technical AI safety research aims to ensure that advanced AI systems are rigorously designed and built with appropriate safeguards to avoid harmful or unpredictable behaviour.

02

Governance

As awareness of risks from unaligned artificial intelligence grows among decision-makers and the wider public, we see windows of opportunity for impactful research on AI governance and policy. Possible projects range from foundational philosophical work on the ethics of AI governance to applied research on concrete policy questions, such as analysing existing or proposed laws and regulations and exploring practical strategies for implementing effective governance.

The Cambridge ERA:AI Fellowship

  • Dates & Location

    The Cambridge ERA:AI Fellowship is held in Cambridge, England from July 1 to August 23.

  • Support

Fellows receive a competitive stipend, and we also cover food, transport, visas, and lodging for the duration of the Fellowship.

  • Our Fellows

    We welcome early-career researchers, including undergraduates, from around the world who are interested in AI safety and governance research.

  • The Programme

    Fellows work on a research project with mentorship provided by our network of experienced researchers and influential policymakers.

01

Help mitigate risks posed by advanced AI

AI safety centres on both creating technical solutions and implementing governance measures to mitigate the risks posed by advanced AI systems. We aim to address the complex challenges that arise from increasingly capable AI, including averting the uncontrolled emergence of artificial general intelligence, maintaining human authority and autonomy, developing techniques for AI value alignment, and establishing policy safeguards on the research, development, and use of advanced AI.

02

Build your research portfolio

Research a topic relevant to understanding and mitigating risks from advances in AI systems, guided by weekly mentorship from a full-time researcher in your field and by daily conversations with other fellows and the AI safety community at the University of Cambridge.

03

Develop lasting connections

Spend a summer living and working alongside other fellows, full-time researchers, and the AI safety community in Cambridge, cultivating deep and lasting connections.

Our research partners

  • Centre for the Study of Existential Risk

  • Leverhulme Centre for the Future of Intelligence

  • Krueger AI Safety Lab

  • University of Cambridge