AI Researcher

London
Lorien Resourcing
Posted: 2 February
Offer description

AI Researcher (Guardrails & Responsible AI)

Hybrid Working - Edinburgh OR London - 2 days a week on site.

Financial Services

Lorien's leading banking client is looking for an AI Researcher: a curious, high-calibre thinker (ideal for a recent Master's or PhD graduate) who is passionate about responsible AI, agentic systems, and the science behind guardrail effectiveness. The role sits at the intersection of research, model development, and deep validation, contributing to safety frameworks that directly shape the bank's AI strategy.

The client is advancing its next generation of AI capabilities and is committed to making them safe, explainable, and trusted. The team is building cutting-edge guardrail technologies to ensure AI behaves reliably across text, voice, and emerging multimodal systems.

This role is based in Edinburgh or London, working in a hybrid model of two days a week on site. The engagement will be via an umbrella company.

What You'll Do

As an AI Research Engineer, you will focus on designing and building AI and generative AI guardrails that support the safe development and deployment of cutting-edge multimodal AI systems, and on helping bring these technologies to production.

  • Investigate cutting-edge methods in AI safety, guardrails, alignment, agentic behaviour, and safe model interaction patterns.
  • Conduct research into:
      • Unintended behaviours and emergent risks
      • Multimodal model vulnerabilities
      • Robustness, uncertainty, and adversarial resilience
      • Interpretability and explanation techniques
  • Explore state-of-the-art methods across LLMs, vision-language models, speech models, and emerging agent systems.
  • Monitor research trends, benchmarks, and global developments in AI governance, AI risk, and safety engineering.
  • Develop prototype safety mechanisms, guardrails, and evaluation tools across text, audio, and video modalities.
  • Build and test (see the sketch after this list):
      • Prompt-level guardrails
      • Safety classifiers
      • Behaviour-shaping or reward-modelling components
      • LLM and multimodal fine-tunes
      • Adversarial robustness defences
  • Use Python and modern ML frameworks (e.g. PyTorch, TensorFlow, JAX, HuggingFace).
  • Contribute to the creation of synthetic datasets, adversarial evaluation corpora, and scenario-based test sets.
  • Help transition research outputs into scalable controls for engineering teams to integrate.
  • Design, run, and document in-depth validation experiments to measure guardrail effectiveness.
  • Conduct multimodal red-teaming, stress testing, and failure-mode exploration.
  • Build automated testing and model evaluation pipelines covering:
      • Safety benchmarks (toxicity, bias, hallucination, jailbreak susceptibility)
      • Multimodal evaluation (vision consistency, audio hallucination, cross-modal attacks)
      • Scoring and calibration analysis
  • Support development of model risk metrics and safety dashboards.
  • Apply frameworks such as HELM (Holistic Evaluation of Language Models) or bespoke NatWest-specific evaluation patterns.
  • Help validate controls that ensure AI systems meet NatWest's responsible AI standards.
  • Work closely with engineers, safety SMEs, and governance teams.
  • Produce high-quality research insights to guide product and platform direction.
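
For a concrete picture of the "build and test" items above, here is a minimal sketch of what a prompt-level guardrail backed by a safety classifier could look like: screening user input with an off-the-shelf classifier before it reaches the main model. The model choice (unitary/toxic-bert) and the blocking threshold are illustrative assumptions, not a description of the client's actual stack.

# A minimal prompt-level guardrail sketch: screen user input with an
# off-the-shelf safety classifier before it reaches the main model.
# The model and threshold here are illustrative assumptions only.
from transformers import pipeline

toxicity = pipeline("text-classification", model="unitary/toxic-bert")

def guardrail_allows(prompt: str, threshold: float = 0.5) -> bool:
    """Return True if the prompt is safe to forward to the LLM."""
    result = toxicity(prompt, truncation=True)[0]
    # toxic-bert's top label is "toxic" for harmful text; block high scores.
    return not (result["label"] == "toxic" and result["score"] >= threshold)

if guardrail_allows("How do I reset my online banking password?"):
    print("forward to model")
else:
    print("blocked by guardrail")

In practice a production guardrail would combine several such checks (toxicity, jailbreak detection, PII screening) and log every decision for audit, but the shape of the component is the same.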

Key Skills and Experience

  • Strong Python programming skills and foundations in machine learning, LLMs, or multimodal AI.
  • Understanding of ML concepts such as training, fine-tuning, optimisation, evaluation, and model drift.
  • Experience building or adapting ML models (open-source or proprietary).
  • Ability to design structured experiments and interpret model behaviour through metrics and analysis.
  • Curiosity about emerging topics in AI alignment, agent behaviour, safety engineering, and interpretability.
  • Good grasp of core Responsible AI concepts:
      • Bias and fairness
      • Explainability
      • Privacy-preserving ML
      • Robustness and uncertainty

Nice to have:

  • Experience with ML frameworks: PyTorch, TensorFlow, Flax/JAX, HuggingFace.
  • Exposure to multimodal models (CLIP, Whisper, LLaVA, video transformers).
  • Familiarity with safety benchmarks, adversarial testing, red teaming, or uncertainty estimation.
  • Knowledge of AI governance, risk frameworks, or industry standards (e.g. NIST AI RMF, ISO/IEC 42001).
  • Experience with synthetic data generation or test corpus construction.
  • Familiarity with experiment tracking tools (Comet/Opik, MLflow, SageMaker Experiments); see the sketch below.
  • Interest in governance, risk, or AI assurance.
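
As a hedged illustration of the experiment tracking mentioned above, the snippet below logs a guardrail evaluation run with MLflow, one of the tools named in the list. The metric names and values are placeholders, not real benchmark results.

# A minimal MLflow sketch: record a guardrail evaluation run so that
# parameters and results stay reproducible. Values are placeholders.
import mlflow

with mlflow.start_run(run_name="guardrail-eval-demo"):
    mlflow.log_param("guardrail", "toxicity-classifier")
    mlflow.log_param("threshold", 0.5)
    # In a real pipeline these metrics would come from the evaluation harness.
    mlflow.log_metric("jailbreak_block_rate", 0.93)
    mlflow.log_metric("false_positive_rate", 0.04)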

Guidant, Carbon60, Lorien & SRG - The Impellam Group Portfolio are acting as an Employment Business in relation to this vacancy.
