Applicants must be eligible to work in the specified location.
* Day Rate: £400 per day (Outside IR35)
* Rate basis: daily, paid via approved umbrella or limited company
* Location: Cambridge
* Contract: 6-month rolling
About the Role:
We are seeking an experienced AI Technical Manager to lead the design, development, and deployment of next-generation AI solutions. You will manage a team of engineers and data scientists, driving initiatives in LLMs, Generative AI, and cloud-native AI platforms. This role combines technical expertise with leadership skills to deliver innovative, scalable, and secure AI systems.
Required Skills & Experience:
13+ years of experience in software engineering, with strong technical and managerial expertise.
Proven experience as a Technical Manager leading AI/ML engineers and data scientists, with a strong track record of progression into management roles.
Strong programming expertise in Python (TensorFlow, PyTorch, scikit-learn, etc.).
Hands-on experience with LLMs (GPT, LLaMA, Claude, etc.) and Generative AI frameworks.
Proficiency in deploying cloud-based AI systems on AWS, Azure, or GCP.
Solid understanding of ML algorithms, model optimization, and MLOps practices.
Strong leadership, communication, and stakeholder management skills.
Bachelor's or Master's in Computer Science, Data Science, AI/ML, or a related field.
Key Responsibilities:
Lead and mentor a team of AI/ML engineers and data scientists.
Architect and implement AI solutions leveraging LLMs and Generative AI.
Drive end-to-end development of AI applications using Python and modern frameworks.
Design and deploy scalable solutions on AWS, Azure, or GCP.
Collaborate with cross-functional teams (engineering, product, business) to align AI solutions with business goals.
Ensure best practices in security, compliance, and performance for AI systems.
Stay ahead of emerging AI/ML trends and integrate them into product strategy.
Preferred Qualifications:
Experience with vector databases, retrieval-augmented generation (RAG), or prompt engineering.
Familiarity with containerization (Docker, Kubernetes) and CI/CD pipelines for ML.
Background in scaling AI services in production environments.