Led by a world-class faculty of scientists, technologists, policy makers, economists and entrepreneurs, the Ellison Institute of Technology (EIT) aims to develop and deploy commercially sustainable solutions to some of humanity's most enduring challenges. Our work is guided by four Humane Endeavours: Health, Medical Science & Generative Biology; Food Security & Sustainable Agriculture; Climate Change & Managing Atmospheric CO2; and Artificial Intelligence & Robotics. Set for completion in 2027, the EIT Campus in Littlemore will include more than 300,000 sq ft of research laboratories and educational and gathering spaces. Fuelled by growing ambition and the strength of Oxford's science ecosystem, EIT is now expanding its footprint to a 2 million sq ft campus across the western part of The Oxford Science Park. Designed by Foster + Partners, led by Lord Norman Foster, this will become a transformative workplace for up to 7,000 people, with autonomous and purpose-built laboratories, including a plant sciences building, and dynamic spaces to spark interdisciplinary collaboration.

The Generative Biology Institute (GBI) at EIT aims to overcome two major challenges in making biology engineerable: 1) the ability to precisely synthesize entire genomes, and 2) understanding which DNA sequences will create biological systems that perform desired functions. Solving these challenges will unlock the potential of biology for transformative solutions in health, sustainability, agriculture, and more. GBI will house 30 groups and over 300 researchers, supported by cutting-edge facilities and sustained funding to address global challenges and advance the engineering of biology. EIT fosters a culture of collaboration, innovation, and resilience, valuing diverse expertise to drive sustainable solutions to humanity's enduring challenges.

The High-Performance Computing (HPC) Engineer within GBI will play a pivotal role in designing, building, and maintaining advanced computational infrastructure to accelerate biological and biomedical discovery and translational research. Working within the Scientific Computing Facility, the HPC Engineer will design, deploy, and optimise systems that enable large-scale data processing, AI-driven analytics, and simulation workloads across GBI, for example by deploying Kubernetes and Slurm to support real-time data analysis from instruments, MLOps, and scientific workflow managers (a brief sketch of this pattern appears at the end of this description).

We will be hiring at either the regular or senior level, depending on the applicant's experience.

At the regular level: This position requires technical expertise in HPC system architecture, coupled with the ability to collaborate closely with data scientists, bioinformaticians, and software engineers to ensure seamless, high-performance access to computing resources that support GBI's research mission.

At the senior level: This position requires deep technical expertise in HPC system architecture, coupled with a proven track record of collaborating closely with data scientists, bioinformaticians, and software engineers to ensure seamless, high-performance access to computing resources that support GBI's research mission.

Key Responsibilities:
- Design, implement, and maintain scalable HPC infrastructure (cloud and on-prem) to support GBI's computational research workloads.
- Evaluate and integrate advanced technologies, including GPU/TPU acceleration, high-speed interconnects, and parallel file systems.
- Manage HPC environments, including Linux-based clusters, schedulers (e.g., Slurm), and high-performance storage systems (e.g., Lustre, BeeGFS, GPFS).
- Implement robust monitoring, fault tolerance, and capacity management for high availability and reliability.
- Develop automation scripts and tools (Python, Bash, Ansible, Terraform, Go, Helm, etc.) for provisioning, configuring, and scaling HPC resources.
- Support reproducible research through containerization (Singularity, Docker, etc.), workflow orchestration (Nextflow, Kubernetes, OpenHPC, etc.), and MLOps.
- Collaborate with researchers to address common bottlenecks in their scientific computing workflows.
- Provide technical support and guidance for job scheduling, workflow optimization, and performance tuning.
- Collaborate with information security teams to manage user access and protect sensitive research data.

Additional responsibilities at the senior level:
- Work with the Head of Scientific Compute on long-term strategy and architecture for GBI's computing platforms.
- Collaborate with researchers to understand present and future computational needs and translate them into cloud and HPC requirements and operational policy.
- Work with HPC and cloud vendors to ensure that computational resources at GBI meet the needs of its researchers.

Requirements

Essential Knowledge, Skills and Experience:
- Bachelor's or Master's degree in Computer Science, Computational Biology, Engineering, or a related discipline (PhD desirable).
- 3 years (5 years at the senior level) of relevant experience managing HPC systems in a research, biological and biomedical, or academic environment.
- Ability to work collaboratively with multidisciplinary research teams and translate computational needs into technical solutions.
- Excellent communication and documentation skills for both technical and non-technical audiences.

Technical Expertise

At the regular level:
- Extensive experience using HPC clusters (or cloud computing) in scientific or research settings.
- Proficiency in Linux system administration, networking, and parallel computing (MPI, OpenMP, CUDA, or ROCm).
- Experience using HPC job schedulers (Slurm preferred) and parallel file systems (Lustre, BeeGFS, GPFS).

At the senior level:
- Extensive experience designing, deploying, and managing HPC clusters (or cloud computing) in scientific or research settings.
- Strong proficiency in Linux system administration, networking, and parallel computing (MPI, OpenMP, CUDA, or ROCm).
- Extensive expertise administering HPC job schedulers (Slurm preferred) and parallel file systems (Lustre, BeeGFS, GPFS).

At all levels:
- Familiarity with containerization, workflow automation, and orchestration tools used in bioinformatics and AI/ML.
- Skilled in scripting and automation using Python, Bash, and configuration management tools (Ansible, Terraform).
- Demonstrated experience profiling and optimizing scientific or machine learning workloads on large-scale clusters.
- Understanding of distributed computing frameworks and GPU-based acceleration techniques.

Benefits

We offer the following benefits:
- Enhanced holiday pay
- Pension
- Life Assurance
- Income Protection
- Private Medical Insurance
- Hospital Cash Plan
- Therapy Services
- Perkbox
- Electric Car Scheme

Why work for EIT:
At the Ellison Institute, we believe a collaborative, inclusive team is key to our success. We are building a supportive environment where creative risks are encouraged and everyone feels heard.
Valuing emotional intelligence, empathy, respect, and resilience, we encourage people to be curious and to share a commitment to excellence. Join us and make an impact!

Terms of Appointment:
Applicants must have the right to work in the United Kingdom. Due to the highly specialised technical nature of the role, exceptional international applicants may be considered for sponsorship where appropriate. You must be based in, or within easy commuting distance of, Oxford. During peak periods, some longer hours may be required, along with some working across multiple time zones due to the global nature of the programme.
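For illustration only: the role overview above mentions deploying Kubernetes and Slurm to support real-time analysis of instrument data. Below is a minimal sketch, in Python, of one such pattern: a small watcher that submits a Slurm batch job whenever a completed instrument run appears in a shared directory. It assumes a Slurm cluster with sbatch available on the PATH; the watch directory, partition name, sentinel file, and analyse_run.sh script are hypothetical placeholders rather than part of the role description.

```python
"""Minimal sketch: watch an instrument landing directory and submit each
completed run to Slurm for analysis. Assumes `sbatch` is on the PATH; the
directory, partition, sentinel file, and analysis script are illustrative."""

import subprocess
import time
from pathlib import Path

WATCH_DIR = Path("/data/instrument/incoming")  # hypothetical landing directory
ANALYSIS_SCRIPT = "analyse_run.sh"             # hypothetical Slurm batch script
PARTITION = "gpu"                              # illustrative partition name


def submit(run_dir: Path) -> None:
    """Submit one analysis job for a completed instrument run."""
    cmd = [
        "sbatch",
        "--partition", PARTITION,
        "--job-name", f"analyse-{run_dir.name}",
        ANALYSIS_SCRIPT,
        str(run_dir),
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    # sbatch reports the job ID on stdout, e.g. "Submitted batch job 12345".
    print(result.stdout.strip())


def main(poll_seconds: int = 60) -> None:
    seen: set[Path] = set()
    while True:
        for run_dir in WATCH_DIR.iterdir():
            # A sentinel file marks a run as complete before it is submitted.
            if (
                run_dir.is_dir()
                and run_dir not in seen
                and (run_dir / "RUN_COMPLETE").exists()
            ):
                submit(run_dir)
                seen.add(run_dir)
        time.sleep(poll_seconds)


if __name__ == "__main__":
    main()
```

In practice, orchestration of this kind would more likely live inside a workflow manager such as Nextflow or an event-driven service on Kubernetes; the sketch only illustrates the basic submission pattern referred to in the responsibilities above.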