AI Compiler Optimization Engineer (PAYE) at Huawei Technologies Research & Development (UK) Ltd
About Huawei Research and Development UK Limited
Huawei is a leading global provider of information and communications technology (ICT) infrastructure and smart devices. We have 207,000 employees and operate in over 170 countries and regions, serving more than three billion people around the world.
Our vision and mission are to bring digital to every person, home, and organization for a fully connected, intelligent world. To this end, we will drive ubiquitous connectivity and promote equal access to networks; bring cloud and artificial intelligence to all four corners of the earth, providing superior computing power where and when it is needed; build digital platforms to help all industries and organizations become more agile, efficient, and dynamic; and redefine user experience with AI, making it more personalized for people in all aspects of their lives, whether at home, in the office, or on the go.
This spirit of innovation has led Huawei to work in close partnership with leading academic institutions in the UK to develop and refine the latest technologies, pursuing common goals through a shared commitment to innovation and progress.
Huawei Research and Development UK Limited Overview
Huawei’s vision is a fully connected, intelligent world. To achieve this, we work to inspire passion for basic research around the world. Our combined passion drives development across the global innovation value chain.
Job Summary
We are seeking a skilled AI Compiler Optimization Engineer to optimize AI model inference performance through advanced compiler technologies. You will focus on performance tuning for CPU or hybrid CPU/XPU heterogeneous architectures, profiling AI frameworks to discover new optimization opportunities, and delivering cutting-edge insights from industry research.
Key Responsibilities:
* Compiler-Based Performance Optimization:
o Implement compiler techniques to enhance inference performance on CPU and CPU/XPU hybrid systems.
o Optimize JIT-level compute graphs with operator fusion, memory allocation, and related techniques for latency/throughput improvements.
o Preferred: Experience with LLVM/MLIR development.
* AI Model Profiling & Framework Optimization:
o Profile end-to-end inference workflows on frameworks like TensorFlow, PyTorch, ONNX, and llama.cpp to identify hotspots and bottlenecks.
o Propose and implement optimization strategies to address them.
o Preferred: Experience optimizing models on multiple AI frameworks.
* Research & Insight Development:
o Track and analyze the latest advancements in AI & compiler research.
o Produce actionable insight reports summarizing trends, benchmarks, and potential optimizations.
o Preferred: Strong technical writing skills with prior publications or reports.
Person Specification:
* Required:
o Proficiency in C/C++ and compiler infrastructure.
o Deep understanding of AI model architectures and inference workflows.
o Experience with performance profiling tools.
o Familiarity with CPU/XPU hardware architectures and optimization techniques.
o Strong analytical and problem-solving skills.
* Desired:
o BSc/MSc/MSci in Computer Science.
o Contributions to open-source compiler projects.
o Experience with heterogeneous computing.
o Published work or technical blogs on AI/compiler optimization topics.
o Strong self-learner, eager to explore new things, with solid practical skills.
o Good teamwork and communication skills in both Mandarin and English.
What We Offer:
* Mentorship from an assigned industry expert
* Fixed-term employment contract of up to two years
* Flexible working
* 33 days of annual leave entitlement
* Group Personal Pension
* Corporate retail discounts
* Employee Assistance Programme
* Life insurance
* Corporate social events