We’re partnering with a high-growth cloud-native software company that has recently raised a $200M Series C and is expanding its AI-focused platform. This is an opportunity to join a well-funded, fast-scaling organisation building technology at the core of modern AI infrastructure.
Our client develops an enterprise platform that simplifies how organisations deploy, manage, and operate AI/ML workloads across Kubernetes environments. The product is built for performance-critical AI use cases, enabling customers to run GPU-accelerated workloads reliably and efficiently at scale.
About the Role
As the AI Technical Support Engineer, you'll support customers running complex AI workloads in production. You'll act as a technical expert across AI infrastructure, working closely with customers and internal engineering teams to troubleshoot issues spanning GPU performance, networking, and distributed systems.
Key focus areas include:
* AI infrastructure on Kubernetes
* NVIDIA GPUs and GPU scheduling (see the short sketch after this list)
* GPU interconnects and high-performance networking
* Why high-bandwidth, low-latency interconnects (e.g. InfiniBand) are critical for scalable AI workloads
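To give a concrete flavour of the GPU-scheduling focus area, here is a minimal sketch only, assuming a cluster with the NVIDIA device plugin installed, the official `kubernetes` Python client, and a placeholder image and namespace (none of these are specified by the client): it schedules a single pod that requests one GPU through the `nvidia.com/gpu` extended resource, which is how GPU scheduling is typically surfaced on Kubernetes.

```python
# Minimal sketch, not the client's product: schedule a pod that requests one
# NVIDIA GPU via the nvidia.com/gpu extended resource (exposed by the NVIDIA
# device plugin) and run nvidia-smi to confirm the GPU is visible.
from kubernetes import client, config

config.load_kube_config()  # authenticate using the local kubeconfig

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda-check",
                image="nvidia/cuda:12.4.1-base-ubuntu22.04",  # example image tag (assumption)
                command=["nvidia-smi"],  # lists visible GPUs if scheduling succeeded
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # ask the scheduler for one GPU
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

In multi-node training, the same scheduling path is combined with high-bandwidth, low-latency interconnects such as InfiniBand, which is why the networking items above sit alongside Kubernetes experience.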
Requirements
* Experience supporting AI, HPC, or GPU-accelerated environments
* Strong knowledge of NVIDIA GPUs
* Understanding of GPU interconnects and networking in multi-GPU or multi-node setups
* Kubernetes experience in production environments
* Comfortable working in a customer-facing technical role
Salary
£120,000 – £150,000 base salary + benefits