Accelerate your AI safety and governance research in Cambridge
Apply by January 18th to join our spring cohort!
The Meridian Visiting Researcher Programme brings talented researchers to Cambridge for 3-12 month residencies focused on AI safety and governance. We provide workspace, community, and research management for researchers working on technical alignment, governance, interpretability, forecasting, and other research ensuring AI benefits humanity.
Accelerate your research
Meridian provides an energetic, fast-paced environment in which to pursue your research. Visitors report accomplishing more in a few months here than in a year of working independently.
Plug into Cambridge's AI safety scene
Cambridge has become a hotspot for AI safety work in Europe. Being here will give you the opportunity to engage with a large and welcoming community of researchers with a wide range of expertise.
Structure with flexibility
With regular research management meetings, opportunities to present your work, and support towards publication, the programme provides enough structure to create momentum while leaving plenty of space for deep independent research.
Find collaborators
Discover colleagues with complementary skills. Connections made at Meridian often result in long-term projects.
Receive advice and feedback from experienced researchers within our community, and dedicated research support to ensure your project remains on track.
Who should apply
* Researchers looking to pivot into AI safety, security, or governance
* Existing AI safety, security, or governance researchers looking to build their network and portfolio
* Graduates of programmes such as MATS, MARS, ARENA, ML4G, SPAR, and ERA with AI safety knowledge looking to transition to full-time research
* PhD candidates, recent graduates, or postdoctoral researchers exploring AI safety directions
* PIs interested in incorporating AI safety into their research agenda
* People based in Cambridge, or willing and excited to work there and help strengthen Cambridge as a hub for AI safety research
Research Areas
Meridian's Visiting Researcher Programme welcomes researchers working across a broad range of AI safety, security, and governance topics. Our priority research areas include:
Technical Safety Research
* Evaluating AI capabilities: Developing rigorous methods to assess the capabilities of advanced AI systems
* AI interpretability and transparency: Making AI systems more understandable to humans
* Model organisms of misaligned AI: Creating controlled examples of misalignment to study safety properties
* Information security for safety-critical AI systems: Securing AI systems against threats and vulnerabilities
* AI control and control evaluation: Designing and testing mechanisms for maintaining human control over AI
* Making AI systems adversarially robust: Ensuring AI systems remain reliable under adversarial conditions
* Scalable oversight: Developing methods to effectively supervise increasingly capable AI systems
* Understanding cooperation between AI systems: Studying multi-agent dynamics and cooperation mechanisms
Forecasting and Modeling
* Economic modeling of AI impact: Analyzing how AI development will affect economic systems
* Forecasting AI capabilities and impacts: Predicting the trajectory and consequences of AI development
* Identifying concrete paths to AI takeover: Mapping potential failure modes and their mitigations
Governance and Policy
* AI lab governance: Developing responsible practices for AI research organizations
* UK/US AI policy: Creating national frameworks for AI development and deployment
* International AI governance: Building coordination mechanisms across national boundaries
* Legal frameworks for AI: Addressing liability, regulation, and rights issues
* Developing technology for AI governance: Building tools to support effective AI governance
Ethics and Values
* AI welfare: Considering the moral status of artificial systems and ethical obligations toward them
* Value alignment: Ensuring AI systems act in accordance with human values and intentions
* Societal impacts of transformative AI: Analyzing broader implications for society and human welfare
This list is not exhaustive, and we welcome applications from researchers working on related areas not explicitly listed above.
“The VRP helped me to connect with the AI safety community, grow my network, broaden my expertise in AI safety and socialize with people just like me.” — Igor Ivanov, Visiting Researcher, Autumn 2025 Cohort
“I highly recommend Meridian's Visiting Researcher Programme, especially for independent researchers. Beyond providing a research space, Meridian involved us in their frequent AI safety events, talks, and symposiums, and high-profile external researchers often drop by. Meridian also provided useful individualized support with grant writing, finding collaborators, and publication.” — Dan Wilhelm, Visiting Researcher, Autumn 2025 Cohort