Senior Inference Engineer, AGI Inference, Cambridge
Client: Amazon UK Services Ltd.
Location: Cambridge, United Kingdom
Job Category: Other
EU work permit required: Yes
Job Reference: 3d1cac9cf6a0
Posted: 22.06.2025
Expiry Date: 06.08.2025
Job Description:
The Inference team at AGI is a group of innovative developers working on ground-breaking multi-modal inference solutions that revolutionize how AI systems perceive and interact with the world. We push the limits of inference performance to provide the best possible experience for our users across a wide range of applications and devices. We are looking for talented, passionate, and dedicated Inference Engineers to join our team and build innovative, mission-critical, high-volume production systems that will shape the future of AI. You will have an enormous opportunity to make an impact on the design, architecture, and implementation of cutting-edge technologies used every day, potentially by people you know.
Key job responsibilities
• Drive the technical strategy and roadmap for inference optimizations across AGI
• Develop high-performance inference software for a diverse set of neural models, typically in C/C++
• Optimize inference performance across various platforms (on-device, cloud-based CPU, GPU, proprietary ASICs)
• Collaborate closely with research scientists to bring next-generation neural models to life
• Partner with internal and external hardware teams to maximize platform utilization
• Work in an Agile environment to deliver high-quality software against tight schedules
• Mentor and grow technical talent
BASIC QUALIFICATIONS
- 5+ years of non-internship professional software development experience
- 5+ years of experience leading the design or architecture (design patterns, reliability, and scaling) of new and existing systems
- 5+ years of programming experience in at least one software programming language
- Experience as a mentor, tech lead or leading an engineering team
- Bachelor's degree in Computer Science, Computer Engineering, or related field
- 2+ years of experience optimizing neural models
- Deep expertise in C/C++ and low-level system optimization
- Proven track record of leading large-scale technical initiatives
- Solid understanding of deep learning architectures (CNNs, RNNs, Transformers, etc.)
- Experience with inference frameworks (PyTorch, TensorFlow, ONNX Runtime, TensorRT, llama.cpp, etc.)
- Strong communication skills and ability to work in a collaborative environment
PREFERRED QUALIFICATIONS
- Proficiency in kernel programming for accelerated hardware
- Experience with latency-sensitive optimizations and real-time inference
- Understanding of resource constraints on mobile/edge hardware
- Knowledge of model compression techniques (quantization, pruning, distillation, etc.)
- Experience with LLM efficiency techniques like speculative decoding and long context