We are looking to onboard two AI Validation Specialists to support the Model Validation / AI Risk & Oversight team at a large UK bank on a contract basis. The roles are primarily aligned to London or Edinburgh with hybrid working; however, we can consider strong candidates in other UK locations.
This is an initial 6-month contract with potential for extension. It is an immediate requirement targeting a January start, so we are looking for individuals who can start at relatively short notice.
Background and Role
The bank is scaling the use of AI-enabled solutions across operational efficiency and customer-facing journeys. This spans a growing pipeline of model validations, including early agentic AI validations, alongside use cases such as complaints automation and enhancements to customer-facing chatbots.
To alleviate near-term pressure on the validation function and accelerate delivery through H1 2026, we are seeking individuals with strong Python capability and proven experience working in an AWS environment, combined with a sound, pragmatic approach to AI model risk management.
The focus is not on hiring AI research gurus, but on bringing in practical validators who can assess what can go wrong, apply proportionate risk management, and work effectively with stakeholders to ensure appropriate guardrails, testing, and ongoing monitoring are in place (including use of existing internal tooling for areas such as toxicity and bias testing).
The roles will sit within AI Risk & Oversight within the wider Model Validation team.
Key Responsibilities
Deliver assigned AI/ML model validations end-to-end, producing clear conclusions and evidence aligned to internal governance expectations.
Assess and challenge model design, data, controls, implementation approach, and monitoring plans proportionately to the use case risk.
Evaluate and evidence testing/controls for key AI risks, including (as relevant):
bias/fairness considerations
toxicity / harmful content controls
hallucination and reliability risks
cyber / data leakage and data loss risks
Review and strengthen ongoing monitoring plans (metrics, thresholds, alerting, exception handling, governance cadence).
Work closely with developers, product owners, and risk stakeholders to agree remediation actions and ensure validation outcomes are actionable.
Contribute to repeatable validation approaches and artefacts, leveraging and improving existing internal tooling where appropriate.
Key Experience Required
Python proficiency, with the ability to demonstrate it in a technical interview and/or practical test.
Proven experience working in an AWS environment (hands-on, not just awareness).
2-5+ years of relevant experience (flexible based on strength of skillset).
Solid understanding of model risk management fundamentals and how to apply them pragmatically to AI use cases.
Awareness of common AI failure modes and risks (e.g., hallucinations, bias, toxicity, privacy/cyber/data leakage).
Strong stakeholder management skills: ability to challenge constructively, influence outcomes, and translate technical findings into clear risk-based conclusions.
Desirable
Familiarity with AI governance expectations and emerging regulation (e.g., EU AI Act awareness).
Experience validating or assuring customer-facing AI use cases (e.g., chatbots, automated decision support, content generation).
Exposure to validation/assurance of agentic or workflow-embedded AI solutions.