Job Description
About the role:

Join a specialist machine learning team working at the intersection of deep learning, model optimisation, and efficient deployment. You will help build and deploy advanced ML models for low-latency speech recognition and foundation LLMs, focusing on reducing power consumption while maximising performance.

Your work will include:

Training state-of-the-art models on production-scale datasets.
Compressing and optimising models for accelerated inference on modern hardware.
Researching and implementing innovative ML techniques tailored for efficient deployment.
Deploying and maintaining customer-facing training libraries.

Your initial focus will be on speech recognition models, where you will:

Optimise training workflows for multi-GPU environments.
Manage and execute large-scale training runs.
Tune hyperparameters to improve both inference quality and performance.

What you'll be working on:

This is an end-to-end optimisation role, from algorithms through to deployment on modern silicon, with a mission to enable high-performance, low-power AI in production environments.
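To give a flavour of the compression work described above, here is a minimal sketch of symmetric int8 weight quantisation, one of the standard techniques for shrinking models ahead of low-latency inference. It is written in plain Python purely for illustration; the function names are invented for this sketch, and real workflows would use the quantisation tooling in frameworks such as PyTorch or TensorFlow.

```python
def quantise_int8(weights):
    """Map float weights to int8 values plus a shared scale factor.

    Symmetric quantisation: the largest absolute weight is mapped to 127,
    and every other weight is scaled proportionally.
    """
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale


def dequantise(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [qi * scale for qi in q]


# Illustrative weights only, not real model parameters.
weights = [0.52, -1.27, 0.003, 0.89]
q, scale = quantise_int8(weights)
approx = dequantise(q, scale)
# Each recovered weight differs from the original by at most half a
# quantisation step (scale / 2), the price paid for 4x smaller storage.
```

The trade-off sketched here — a bounded per-weight error in exchange for smaller, faster integer arithmetic — is the basic currency of the compression-versus-quality tuning the role involves.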
You will work on deep technical challenges alongside engineers and researchers who care about efficiency, precision, and impact.

What they're looking for:

Strong practical experience in training deep learning models at scale.
Knowledge of optimising ML workflows for multi-GPU environments.
Experience with model compression, quantisation, and deployment for low-latency applications.
Familiarity with frameworks such as PyTorch, TensorFlow, or similar.
Ability to tune models for real-world performance constraints.
A collaborative mindset, able to contribute ideas and adapt to feedback in a small, high-trust team environment.

Why join?

Work on meaningful projects that contribute to reducing the energy footprint of global AI workloads.
Collaborate in a friendly, multi-disciplinary team that values technical excellence, innovation, and open discussion.
Develop your skills by working on cutting-edge optimisation challenges with a clear path from research to deployment.
Enjoy a collaborative on-site culture with shared meals, games, and a supportive team environment, while retaining flexibility for hybrid working.