Superb opportunity to join a leading financial services client with offices in Sheffield. This contract is inside IR35, and we need someone willing to be in the office up to three days per week.
The Role
We’re looking for an experienced AI Security Subject Matter Expert (SME) to shape and lead our approach to securing AI/ML models, GenAI applications, and AI-driven agents.
This is a high-impact role sitting at the intersection of Security Engineering, DevSecOps, and AI/ML. You’ll work closely with engineering, platform, risk, and product teams to ensure AI systems are designed, deployed, and operated securely — from development through to runtime.
Responsibilities:
* Embed AI security controls into DevSecOps pipelines (model build, training, deployment, runtime)
* Define secure design patterns for AI/ML models, GenAI apps, and autonomous agents
* Evaluate and integrate AI/ML security tooling (model scanning, vulnerability detection, runtime guardrails)
* Identify and manage AI-specific risks such as:
  * Model poisoning, tampering, and theft
  * Prompt injection and jailbreaks
  * Unsafe model formats and supply-chain risks
  * Data leakage, hallucinations, and misuse
* Translate emerging threats, CVEs, and research into practical enterprise controls
* Provide guidance, standards, and best practices for secure AI development
* Influence architecture decisions and act as a trusted advisor across multiple AI initiatives
What We’re Looking For:
* 5+ years in cybersecurity (cloud, application, or platform security)
* Strong experience working in DevSecOps environments
* Solid understanding of AI/ML fundamentals and model lifecycles
* Practical knowledge of GenAI and LLMs
* Hands-on exposure to AI-enabled or ML-based applications
* Familiarity with:
  * Model formats (ONNX, Pickle, Torch, etc.)
  * CI/CD and pipeline-based security controls
  * Secure software development practices
* Ability to clearly communicate complex AI security concepts to technical and non-technical audiences
* Confident engaging senior stakeholders and influencing without direct authority
Desirable:
* Experience in regulated environments
* Exposure to AI governance or model risk management
* Background in security architecture
* Familiarity with frameworks such as the OWASP Top 10 for LLM Applications or the NIST AI Risk Management Framework
More details available on successful application.