We are seeking a full-time Senior Researcher in AI Safety, Interpretability and Governance to join Michael Osborne's research group at the Department of Engineering Science (Central Oxford). The post is funded by the Oxford Martin AI Governance Initiative and is fixed-term to November 2029.
The post holder will be a member of a major research team developing technical analysis and tools for supporting the effective governance of AI. You will lead the Technical AI Governance programme and develop research questions within AI safety, interpretability, and technical governance. You will also contribute to the strategic vision and long-term planning of Oxford's research in technical AI governance.
You should hold a PhD/DPhil in machine learning, computer science, engineering, or a closely related field, together with significant post-qualification research experience. You should have specialist knowledge in mechanistic interpretability, model editing, unlearning, or other methods relevant to AI safety. The ability to lead and motivate a team of research staff is essential.
Informal enquiries may be addressed to Nikki Sun.
For more information about working at the Department, please see the Department of Engineering Science website.
Only online applications received before midday on 23 October 2025 can be considered. As part of your online application, you will be required to upload a covering letter/supporting statement (including a brief statement of research interests describing how your past experience and future plans fit the advertised position), a CV, and the details of two referees.
The Department holds an Athena Swan Bronze award, highlighting its commitment to promoting women in Science, Engineering and Technology.