We are looking for Research Engineers to build "gold standard" evaluations for catastrophic risks, in order to determine what AI Safety Level (ASL) to assign to models. Research leads on this team collaborate with engineers in one of our focus areas: CBRN, Cyber, and Autonomy (this list may expand over time). This work has major implications for the way we train, deploy, and secure our models, as detailed in our Responsible Scaling Policy (RSP). The policy defines a series of capability thresholds, called AI Safety Levels (ASLs), that represent increasing risks; crossing an ASL threshold would trigger a commitment to more stringent safety, security, and operational measures intended to handle the increased level of risk.

Please note: We are currently hiring only for the Autonomous Replication and Adaptation (Autonomy) threats workstream. We will also prioritize candidates who can start as soon as possible and can be based in either our San Francisco or London office.

Responsibilities

Research Engineers will design and run the evaluations needed to measure dangerous capabilities in models and to determine when we cross an ASL threshold. You'll lead projects with world-class experts in fields like biosecurity, autonomous replication, cybersecurity, and national security, and experiment with new evals to measure how risky AI systems are. Done well, this work will inform decisions at the highest levels of the company.

You may be a good fit if you:

- Have an ML-focused background with engineering and research skills (e.g. experience in Python)
- Have experience managing research programs involving dozens of technical and non-technical experts
- Are driven to find solutions to ambiguously scoped problems
- Design and run experiments and iterate quickly to solve machine learning problems
- Thrive in a collaborative environment (we love pair programming)
- Have experience training, working with, and prompting models

For all workstreams, experience designing and building evaluations would be valuable, but is definitely not essential. For National Security threats workstreams, we will particularly value experience working on confidential or sensitive projects and demonstrated integrity, responsibility, and trustworthiness. We will also value domain-specific knowledge, although it is not necessary. For ARA threats workstreams, we would value experience with language model agents, although this is not essential.

Sample Projects

- ARA risks: building infrastructure and tooling to test for these capabilities, and iterating with external ARA experts to scope possible tasks. This will involve building custom "testing environments" and new infrastructure.
- CBRN risks (not currently hiring for): working with external experts in the field of biosecurity to design clear and repeatable CBRN evaluations, based on a summary of dangerous biological capabilities, and using our post-training infrastructure to prepare new generations of models for routine evaluations.
- Cyber risks (not currently hiring for): working with external cyber experts to co-design a set of clear and repeatable cyber evaluations. This is likely to involve building custom environments or additions to existing tooling and infrastructure, or locating specialized datasets.

Deadline to apply: None. Applications will be reviewed on a rolling basis.