£90,000 - £110,000 + bonus & exceptional benefits
About the Company
Build production-grade LLM and GenAI solutions inside one of the UK's fastest-growing enterprise AI teams. This is a hands-on engineering role with a multi-award-winning UK consultancy whose AI practice has scaled rapidly through strategic acquisition and a strong delivery pipeline across commercial, central government and defence clients.
About the Role
You'll design, build and ship enterprise GenAI systems in Python.
Responsibilities
* Designing and building production LLM and GenAI solutions in Python — from prototype through architecture to deployment.
* Implementing advanced RAG pipelines — hybrid search, re-ranking, query rewriting, context compression, evaluation and grounding.
* Working with vector stores such as Azure AI Search, Pinecone, Qdrant, Weaviate and Chroma, and tuning embedding models for enterprise data.
* Building agentic systems using frameworks like LangGraph, Semantic Kernel, LangChain, LlamaIndex and AutoGen — tool use, function calling, multi-step reasoning and orchestration.
* Integrating foundation models via Azure OpenAI and AWS Bedrock (GPT and Claude families), as well as open-weight model endpoints such as Llama and Mistral.
* Applying prompt engineering, guardrails, evaluation and observability — using tooling such as LangSmith, Langfuse, Ragas, promptfoo and Azure AI Content Safety.
* Productionising solutions with FastAPI, async Python, Pydantic, Docker and Kubernetes, deployed into Azure-native infrastructure.
* Partnering with data engineers, solution architects and advisory consultants on end-to-end AI programmes.
* Engaging directly with senior client stakeholders to shape solution direction.
Required Skills
* Proven Python engineering background — clean, tested, production-ready code.
* Hands-on experience building and deploying LLM / GenAI solutions in production — not just POCs or notebooks.
* Strong working knowledge of RAG patterns, agentic frameworks, vector databases and embedding models.
* Familiarity with prompt engineering, LLM evaluation and guardrail design (hallucination mitigation, prompt injection defence, content safety).
* Experience with cloud-native deployment — Azure preferred, with Docker, Kubernetes and CI/CD.
* Solid grasp of LLMOps — versioning, tracing, monitoring, cost management and responsible AI.
* Consultative, client-ready communication — comfortable translating ambiguous business problems into AI solutions.
Pay range and compensation package
* Competitive compensation package with a strong base salary and bonus opportunities.
* Generous and growing annual leave, including loyalty days and birthday leave (up to 29 days total).
* Comprehensive health support, including private medical insurance and 24/7 wellbeing assistance.
* Family-first policies with enhanced maternity, adoption and paternity leave.
* Flexible, trust-based working environment designed to support work–life balance.
* Ongoing investment in people through training, coaching, wellbeing support and lifestyle perks.
Equal Opportunity Statement
Bright Purple is an equal opportunities employer: we are proud to work with clients who share our commitment to diversity and inclusion in our industry.