Location: Hybrid (Birmingham or London)
Permanent
We are TXP. We help businesses and organisations move forward, at pace and at scale. We believe in the transformative power of combining technology and people. By providing consulting expertise, development services and resourcing, we work closely with organisations to solve their most complex business problems.
Our work transforms organisations – and we take that responsibility seriously. We focus on success, pursue excellence and take ownership of everything we do.
But achieving that level of performance requires an inclusive and supportive working environment. We believe in the power of technology and people, and we help everyone here to succeed. At TXP, you can multiply your potential.
Role Purpose
The Machine Learning Engineer is a client-facing delivery role responsible for taking machine learning models from prototype to production. Operating within a consulting model, this role bridges data science and platform engineering, ensuring that models developed during engagements are deployed as reliable, scalable and maintainable services. The role owns the full productionisation lifecycle, from environment configuration and pipeline orchestration through to performance monitoring and scaling. Close collaboration with data scientists is fundamental: the role translates their analytical work into robust systems that deliver sustained business value.
Key Responsibilities
Model Productionisation
Take machine learning models developed by data scientists and re-engineer them for production deployment.
Refactor prototype code into clean, modular, tested Python packages with clear separation of concerns.
Implement inference pipelines that handle data validation, preprocessing, prediction and post-processing as a single deployable unit.
Ensure models meet non-functional requirements including latency, throughput, reliability and resource efficiency before release.
Manage the transition from notebook-based experimentation to production-grade services with minimal loss of analytical intent.
Environment Tuning and Scaling
Configure and optimise compute environments on Azure AI Foundry, including managed endpoints, compute clusters and containerised deployments.
Right-size infrastructure for model serving workloads, balancing cost against performance and availability requirements.
Implement auto-scaling strategies for inference endpoints to handle variable demand patterns.
Tune runtime configurations including batch sizes, concurrency settings, memory allocation and GPU utilisation where applicable.
Conduct load testing and performance benchmarking to validate deployment readiness under expected and peak conditions.
Code and Pipeline Management
Design and maintain CI/CD pipelines for model training, validation and deployment using Azure DevOps or GitHub Actions.
Implement automated model retraining pipelines triggered by schedule, data drift or performance degradation.
Manage model versioning, artefact storage and promotion workflows through the Azure AI Foundry model registry.
Enforce code quality standards through automated linting, unit testing, integration testing and code review processes.
Maintain infrastructure-as-code definitions for deployment environments using Terraform, Bicep or equivalent tooling.
Model Monitoring and Operations
Implement monitoring for deployed models covering prediction drift, data drift, feature distribution shifts and performance degradation.
Build alerting and escalation workflows that trigger investigation or automated retraining when thresholds are breached.
Maintain logging and observability across inference pipelines to support debugging, audit and compliance requirements.
Produce operational runbooks and incident response procedures for model services.
Track model performance against business metrics, not just statistical metrics, to ensure continued value delivery.
Collaboration with Data Scientists
Work alongside data scientists from early in the model development lifecycle to ensure production readiness is designed in, not retrofitted.
Provide guidance on coding standards, testing practices and architectural patterns that accelerate the path from prototype to deployment.
Review data science code for production suitability, identifying scalability risks, dependency issues and maintainability concerns.
Jointly define model interfaces, input/output schemas and contract testing approaches to decouple model development from serving infrastructure.
Facilitate structured handover processes that capture model assumptions, training procedures, known limitations and retraining requirements.
Client Delivery
Operate within consulting delivery frameworks, managing scope, timelines and stakeholder expectations.
Contribute to estimation, solution architecture and proposal development for MLOps and model deployment workstreams.
Present deployment architectures, operational plans and trade-off analyses to client technical and business stakeholders.
Conduct knowledge transfer sessions and produce handover documentation for client engineering teams.
Identify opportunities to improve client ML maturity through better tooling, processes and automation.
Required Skills and Experience
Strong software engineering skills in Python, with experience writing production-grade code including packaging, testing and documentation.
Demonstrable experience productionising machine learning models, taking them from research or prototype stage into live, monitored services.
Proficiency with CI/CD tooling for ML pipelines, including Azure DevOps, GitHub Actions or equivalent.
Working knowledge of Microsoft Azure AI Foundry, including managed endpoints, model registry, compute management and experiment tracking.
Experience with containerisation (Docker) and container orchestration for model serving.
Understanding of ML monitoring practices including data drift detection, prediction drift and model performance tracking.
Familiarity with infrastructure-as-code tools such as Terraform or Bicep for Azure resource provisioning.
Experience with performance tuning, load testing and capacity planning for inference workloads.
Strong collaboration skills, with proven ability to work effectively alongside data scientists to bridge the gap between experimentation and production.
Experience working in a consulting, professional services or client-facing delivery environment.
Desirable Skills
Experience with Microsoft Fabric for upstream data pipeline integration and feature store patterns.
Familiarity with Kubernetes (AKS) for model serving at scale.
Exposure to GPU-accelerated inference and optimisation techniques such as ONNX, TensorRT or model quantisation.
Experience with feature stores, experiment tracking platforms or ML metadata management tools.
Knowledge of A/B testing, canary deployments or shadow mode strategies for safe model rollout.
Relevant certifications such as DP-100 (Azure Data Scientist Associate), AI-102 (Azure AI Engineer Associate) or equivalent.
Benefits:
25 days annual leave (plus bank holidays).
An additional day of paid leave for your birthday (or Christmas Eve).
Salary-sacrifice pension with matched employer contributions (4%).
Life assurance (3x salary).
Access to an Employee Assistance Programme (EAP).
Private medical insurance through our partner Aviva.
Cycle to work scheme.
Corporate eye-care vouchers.
Access to an independent financial advisor.
Two social value days per year to give back to local communities.
Grow with us:
Work on exciting new projects.
If you want to avoid getting stuck with the mundane, you're in the right place. We work in many sectors with fantastic clients, so you'll always be working on something exciting and challenging.
Career growth – we've got you
We recognise that you might have a career path planned out, and that you might need some support to move forward. We're here to help you make the most of your time with us through challenging work and opportunities to grow, learn and develop.
Be part of the TXP growth journey.
We are a high-growth, fast-paced environment. We currently have 200+ employees and work with clients across the UK. Joining TXP means you'll be part of that journey.