The ERA Team

  • Nandini Shiralkar

    FOUNDER

  • Robert Harling

    ASSOCIATE DIRECTOR

  • Olivia Benoit

    PROGRAMME MANAGER

  • Stephen Robcraft

    OPERATIONS MANAGER

  • Richard Moulange

    TECHNICAL GOVERNANCE RESEARCH MANAGER
    Richard is an AI–Biosecurity Fellow at the Centre for Long-Term Resilience and a PhD candidate in biomedical machine learning at the University of Cambridge. He was recently a Summer Research Fellow at the Centre for the Governance of AI, where he co-authored two papers: one on risk-benefit analysis for open-source AI and the other on responsible governance of biological design tools. His academic research focuses on out-of-distribution robustness for biomedical machine learning models. He earned his Bachelor's and Master's degrees from the University of Cambridge.

  • Morgan Simpson

    AI GOVERNANCE RESEARCH MANAGER

    Prior to joining ERA, Morgan held research fellowships with the Centre for the Governance of AI and the Stanford Existential Risk Initiative. He is currently writing two white papers for the Oxford Martin School AI Governance Initiative. He holds an M.A. in Science and International Security from King's College London and a B.A. in Politics, Philosophy, and Economics from the University of York.

  • Yulu Niki Pi

    AI GOVERNANCE RESEARCH MANAGER

    Yulu Pi is a PhD researcher at the Centre for Interdisciplinary Methodologies, University of Warwick. She is also working on the IN-DEPTH EU AI TOOLKIT project for the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. Prior to her PhD, she gained experience in governmental and international organizations, including UNICEF and the World Meteorological Organization.

  • Rudolf Laine

    AI SAFETY RESEARCH MANAGER

    Rudolf recently ran an LLM evaluation project in Owain Evans's lab. Before that, he completed the MATS research internship and earned his master's and undergraduate degrees in computer science at the University of Cambridge.

  • Fazl Barez

    AI SAFETY RESEARCH MANAGER

    Fazl is a Research Fellow at the Torr Vision Group, University of Oxford, where he works on AI safety and interpretability. He also holds affiliations with the Centre for the Study of Existential Risk and the Krueger AI Safety Lab, both at the University of Cambridge, as well as the Future of Life Institute.