We are looking for an experienced AI engineer to help our clients design, build, and operate production-grade AI systems, from agents and automation to the infrastructure and security they depend on.

You will be part of a small, senior team that moves between strategy and implementation. On a typical week you might pair with a client team on an LLM-powered feature, shape an AI architecture or evaluation approach, and drop into Python, TypeScript, or SQL to ship, test, debug, and productionise it. You will often be the person who figures things out when something unusual happens in production or when an AI idea needs to move from prototype to something reliable. You will also influence what we work on, how projects are structured, and how our internal tooling evolves over time.

What you might work on

- Designing and implementing AI agents and autonomous workflows.
- Building secure LLM integrations, tooling, and internal APIs.
- Developing and hardening RAG pipelines and data access controls.
- Improving observability, evaluation, and safety guardrails for AI systems.
- Modernising infrastructure, CI/CD, and deployment workflows.
- Helping debug complex production issues across model behaviour, data, and infrastructure.

About you

- Strong software engineering background with production Python or similar.
- Hands-on experience with LLMs, vector stores, orchestration, or agents.
- Comfortable across infrastructure, APIs, cloud, containers, and CI/CD.
- Security-minded and able to consider failure modes and guardrails.
- Clear communicator, confident working directly with clients.
- Enjoys autonomy and working in a small, senior team.

Technologies we often touch

- Python, TypeScript
- OpenAI, Anthropic, open-source LLMs
- RAG pipelines, pgvector, Pinecone
- AWS, GCP, Kubernetes, CI/CD
- Monitoring, logging, and security tooling