Staff/Principal Security Engineer, Trust

London
AI Security Institute
Security engineer
Posted: 5 January
Offer description

About the AI Security Institute


The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We're in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally.

We're here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.

About the Team:

Security Engineering at the AI Security Institute (AISI) exists to help our researchers move fast, safely. We are founding the Security Engineering team in a largely greenfield cloud environment, and we treat security as a measurable, researcher-centric product.

We build secure-by-design platforms, automated governance, and intelligence-led detection that protect our people, partners, models, and data. We work shoulder to shoulder with research units and core technology teams, and we optimise for enablement over gatekeeping, proportionate controls, low ego, and high ownership.

What you might work on:

* Build a continuous assurance platform that validates controls programmatically and generates evidence automatically
* Design policy-as-code pipelines that translate regulatory requirements (GovAssure, CAF) into testable assertions (see the sketch after this list)
* Create integration points that pull evidence from infrastructure, CI/CD, and ML pipelines without manual intervention
* Automate evidence collection for AI-specific artefacts: model documentation, evaluation results, release gates, weights handling
* Build dashboards and reporting that give real-time visibility into control status
* Develop tooling that makes compliance invisible to researchers while giving assurance teams what they need
* Work with platform and security engineers to embed controls at the infrastructure layer
* Threat model evaluation pipelines and fix compliance gaps at the platform layer rather than through manual process
* Implement controls at the application layer when that's the right place to solve the problem
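
For illustration only, here is a minimal sketch of what a "testable assertion" might look like: a small Python check that runs against an exported resource inventory and emits a structured evidence record. The control ID, field names and inventory format are hypothetical assumptions, not AISI's actual tooling.

    import json
    from datetime import datetime, timezone

    def check_storage_encryption(resource: dict) -> dict:
        # Testable assertion for a hypothetical CAF-style control:
        # object storage must enforce encryption at rest.
        compliant = resource.get("encryption", {}).get("enabled", False)
        return {
            "control_id": "CAF-B3-storage-encryption",  # illustrative identifier only
            "resource_id": resource.get("id", "unknown"),
            "compliant": compliant,
            "checked_at": datetime.now(timezone.utc).isoformat(),
        }

    if __name__ == "__main__":
        # The inventory would normally be parsed from a Terraform plan or cloud asset export.
        inventory = [
            {"id": "bucket-research-data", "encryption": {"enabled": True}},
            {"id": "bucket-scratch", "encryption": {"enabled": False}},
        ]
        evidence = [check_storage_encryption(r) for r in inventory]
        print(json.dumps(evidence, indent=2))  # evidence records would feed an assurance store

Checks of this shape can run on a schedule or in CI, with the emitted records collected automatically as audit evidence rather than assembled by hand.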

Role Summary

Build and own AISI's risk platform: automated systems that validate security controls, measure risk posture, and generate assurance evidence. We operate in a regulated government environment, but the goal is real security with auditable outputs, not checkbox compliance.

This is a hands-on engineering role. If you're looking for a traditional GRC analyst position, this isn't it.

Responsibilities:

* Design and build continuous control validation and evidence pipelines
* Translate regulatory frameworks into programmatic controls and machine-checkable artefacts
* Automate evidence collection across cloud infrastructure, CI/CD, and ML workflows
* Integrate AI safety artefacts (model documentation, evaluations, red-team results, release gates) into compliance pipelines (see the release-gate sketch after this list)
* Implement controls for model handling, compute governance, and third-party model/API usage
* Work with platform, security, and research teams to embed controls at the infrastructure layer
* Engage governance and assurance teams, understand what they actually need, then build it
* Support readiness for AI governance standards (NIST AI RMF, ISO/IEC) through automation
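
As an illustration of the release-gate idea, here is a sketch under assumed file formats (not AISI's actual pipeline): an evaluation run writes a JSON summary with a scores mapping, a policy file owned by the assurance team sets minimum thresholds, and a small gate script compares the two and fails the CI/CD stage when a threshold is missed.

    import json
    import sys
    from pathlib import Path

    def evaluate_gate(eval_summary: dict, policy: dict) -> dict:
        # Compare evaluation scores against policy thresholds and record the outcome.
        failures = [
            name
            for name, threshold in policy["minimum_scores"].items()
            if eval_summary["scores"].get(name, 0.0) < threshold
        ]
        return {
            "gate": policy.get("gate_id", "release-gate"),
            "model": eval_summary.get("model_id", "unknown"),
            "passed": not failures,
            "failed_checks": failures,
        }

    if __name__ == "__main__":
        # Usage: python release_gate.py eval_summary.json gate_policy.json
        summary = json.loads(Path(sys.argv[1]).read_text())
        policy = json.loads(Path(sys.argv[2]).read_text())
        result = evaluate_gate(summary, policy)
        print(json.dumps(result, indent=2))  # the gate result itself becomes an evidence artefact
        sys.exit(0 if result["passed"] else 1)  # a non-zero exit blocks the release stage in CI/CD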

Profile requirements:

Must have:

* Staff-level engineering experience (platform, infrastructure, application, or security)
* Strong programming skills (Python, Go, or similar)
* Comfortable with IaC, CI/CD pipelines, and cloud platforms (AWS preferred)
* Experience working in a regulated or security-conscious environment
* Interest in treating compliance as an engineering problem

Useful but teachable:

* Direct experience with compliance frameworks (GovAssure, CAF, SOC 2, ISO 27001, FedRAMP; experience with any of these transfers)
* Familiarity with policy-as-code tools (Open Policy Agent, Conftest, Checkov)
* Exposure to MLOps tooling (MLflow, Weights & Biases, model registries)
* Understanding of AI/ML workflows and artefacts

We'll teach you the specific frameworks, government stakeholder dynamics, and the particulars of frontier AI risk.

Key Competencies

* Building automated evidence and control validation systems
* Translating policy requirements into code and machine-checkable artefacts
* Designing pipelines that integrate with cloud infrastructure, CI/CD, and ML tooling
* Working across engineering and governance teams
* Taking ambiguous requirements and turning them into working systems

If you want to build security that accelerates frontier-scale AI safety research and see your work land in production quickly, this is the place to do it.

What We Offer

Impact you couldn't have anywhere else

* Incredibly talented, mission-driven and supportive colleagues.
* Direct influence on how frontier AI is governed and deployed globally.
* Work with the Prime Minister's AI Advisor and leading AI companies.
* Opportunity to shape the first & best-resourced public-interest research team focused on AI security.

Resources & access

* Pre-release access to multiple frontier models and ample compute.
* Extensive operational support so you can focus on research and ship quickly.
* Work with experts across national security, policy, AI research and adjacent sciences.

Growth & autonomy

* If you're talented and driven, you'll own important problems early.
* 5 days off for learning and development, annual stipends for learning and development, and funding for conferences and external collaborations.
* Freedom to pursue research bets without product pressure.
* Opportunities to publish and collaborate externally.

Life & family

* Modern central London office (cafes, food court, gym) or option to work in similar government offices in Birmingham, Cardiff, Darlington, Edinburgh, Salford or Bristol.
* Hybrid working, flexibility for occasional remote work abroad and stipends for work-from-home equipment.
* At least 25 days' annual leave, 8 public holidays, extra team-wide breaks and 3 days off for volunteering.
* Generous paid parental leave (36 weeks of UK statutory leave shared between parents + 3 extra paid weeks + option for additional unpaid time).
* On top of your salary, we contribute 28.97% of your base salary to your pension.
* Discounts and benefits covering cycling to work, charitable donations, and retail/gyms.

*These benefits apply to direct employees. Benefits may differ for individuals joining through other employment arrangements such as secondments.

Salary

Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000 (base plus technical allowance), with 28.97% employer pension and other benefits on top.

This role sits outside the DDaT pay framework because its scope requires in-depth technical expertise in frontier AI safety, robustness and advanced AI architectures.

The full salary ranges are as follows:

* Level 3: £65,000–£75,000 (Base £35,720 + Technical Allowance £29,280–£39,280)
* Level 4: £85,000–£95,000 (Base £42,495 + Technical Allowance £42,505–£52,505)
* Level 5: £105,000–£115,000 (Base £55,805 + Technical Allowance £49,195–£59,195)
* Level 6: £125,000–£135,000 (Base £68,770 + Technical Allowance £56,230–£66,230)
* Level 7: £145,000 (Base £68,770 + Technical Allowance £76,230)


Additional Information




Internal Fraud Database


The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations; in such cases, civil servants are banned for 5 years from further employment in the civil service. The Cabinet Office then processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is enforced and the repetition of internal fraud is prevented. For more information, please see the Internal Fraud Register.


Security


Successful candidates must undergo a criminal record check and get baseline personnel security standard (BPSS) clearance before they can be appointed. Additionally, there is a strong preference for eligibility for counter-terrorist check (CTC) clearance. Some roles may require higher levels of clearance, and we will state this by exception in the job advertisement. See our vetting charter here.





Nationality requirements


We may be able to offer roles to applicants of any nationality or background, so we encourage you to apply even if you do not meet the standard nationality requirements.




Working for the Civil Service



The Civil Service Code sets out the standards of behaviour expected of civil servants. We recruit by merit on the basis of fair and open competition, as outlined in the Civil Service Commission's recruitment principles. The Civil Service embraces diversity and promotes equal opportunities. As such, we run a Disability Confident Scheme (DCS) for candidates with disabilities who meet the minimum selection criteria. The Civil Service also offers a Redeployment Interview Scheme to civil servants who are at risk of redundancy and who meet the minimum requirements for the advertised vacancy.



Diversity and Inclusion


The Civil Service is committed to attracting, retaining and investing in talent wherever it is found. To learn more, please see the Civil Service People Plan and the Civil Service Diversity and Inclusion Strategy.
