We are looking for an AI security engineer to help our clients secure LLM systems, agents, and AI-powered products, from threat modelling and red teaming to designing practical guardrails and controls. You will help organisations move faster with AI without taking unnecessary risks, combining a strong security mindset with a pragmatic, product-aware approach.

Role overview

You will work with product, security, and engineering teams to understand how AI is used in their organisation and design controls that keep systems safe, reliable, and compliant. Engagements may range from focused assessments of new LLM integrations to ongoing work shaping a client’s AI security strategy, standards, and review processes.

You will also bridge security concerns and product realities: helping teams determine where strong controls are essential, where lightweight mitigations are enough, and how to embed AI security into workflows without slowing progress.

What you might work on

- Threat modelling LLM and agent workflows, including abuse cases and data leakage risks.
- Designing and testing guardrails against prompt injection, data exfiltration, and unsafe actions.
- Running AI-focused security reviews, red teaming exercises, and tabletop simulations.
- Working with engineers to implement mitigations in code, infrastructure, and processes.
- Helping teams establish guidelines and checklists for building AI features securely.

About you

- Background in application, cloud, or product security, with an interest in AI systems.
- Hands-on familiarity with LLMs, agents, or related tooling; comfortable experimenting and reading docs or code.
- Practical mindset: you enjoy designing controls that teams can actually adopt.
- Clear communicator, able to work with both engineers and non-technical stakeholders and explain risk without hype.
- Up to date with the evolving AI security landscape, including new attacks, mitigations, and best practices.