We are looking for an infrastructure engineer to design, build, and operate the platforms that production AI workloads depend on, from cloud infrastructure and networking to CI/CD and observability. You will help clients create stable, secure foundations for AI systems, so that engineers can iterate quickly without compromising reliability or cost.

Role overview

You will support our clients in building reliable, secure infrastructure for AI products and internal tools, working across cloud platforms, deployment pipelines, and runtime environments. Projects might involve designing a new environment for an AI product, simplifying an overly complex platform, or improving visibility and incident response for existing services. You will often join when a team knows their platform is "a bit of a mess" or hard to evolve, and you will help them move towards a simpler, more standardised setup that is easier to reason about, operate, and secure.

What you might work on

- Designing and operating infrastructure on AWS, GCP, or Azure for AI and data-heavy workloads.
- Improving CI/CD pipelines, release processes, and deployment strategies.
- Setting up monitoring, logging, and alerting for services that integrate LLMs and other AI components.
- Collaborating with application engineers to make systems more reliable, secure, and observable.
- Helping teams standardise patterns for new services so they are secure, observable, and easy to operate from day one.

About you

- Experience with modern cloud platforms (AWS, GCP, or Azure) and infrastructure-as-code.
- Comfortable with containers (Docker, Kubernetes, or similar) and CI/CD tooling.
- Enjoys diagnosing tricky production issues across infrastructure and application layers.
- Security-conscious approach to configuration, access, and data handling, with a bias towards least privilege and defence in depth.
- Happy collaborating with both platform and application engineers to land pragmatic solutions.