High-Performance Computing Engineer (All Levels) – Generative Biology Institute
Led by a world‑class faculty of scientists, technologists, policy makers, economists and entrepreneurs, the Ellison Institute of Technology (EIT) aims to develop and deploy commercially sustainable solutions to solve some of humanity’s most enduring challenges. Guided by four humane endeavours – Health, Medical Science & Generative Biology; Food Security & Sustainable Agriculture; Climate Change & Managing Atmospheric CO₂; and Artificial Intelligence & Robotics – EIT’s campus in Littlemore will span more than 300,000 sq ft of research laboratories and educational spaces, designed by Foster + Partners.
The Generative Biology Institute (GBI) at EIT seeks to overcome two major challenges in making biology engineerable: (1) the ability to precisely synthesize entire genomes, and (2) understanding which DNA sequences create biological systems that perform desired functions. GBI will house 30 groups and over 300 researchers, supported by cutting‑edge facilities and sustained funding.
As an HPC Engineer within GBI, you will design, build and maintain advanced computational infrastructure to accelerate biological and biomedical discovery and translational research. Working within the Scientific Computing Facility, you will deploy and optimise systems that support large‑scale data processing, AI‑driven analytics and simulation workloads, using technologies such as Kubernetes, Slurm, MLOps tooling and scientific workflow managers.
Key Responsibilities
* Design, implement and maintain scalable HPC infrastructure (cloud and on‑prem) to support GBI’s computational research workloads
* Evaluate and integrate advanced technologies, including GPU/TPU acceleration, high‑speed interconnects and parallel file systems
* Manage HPC environments, including Linux‑based clusters, schedulers (e.g., Slurm) and high‑performance storage systems (Lustre, BeeGFS, GPFS)
* Implement robust monitoring, fault‑tolerance and capacity management for high availability and reliability
* Develop automation scripts and tools (Python, Bash, Ansible, Terraform, Go, Helm) for provisioning, configuration and scaling HPC resources
* Support reproducible research through containerisation (Singularity, Docker), workflow orchestration (Nextflow, Kubernetes, OpenHPC) and MLOps
* Collaborate with researchers to address bottlenecks in scientific computing workflows
* Provide technical support and guidance for job scheduling, workflow optimisation and performance tuning
* Work with information security teams to manage user access and protect sensitive research data
* Senior‑level additional responsibilities: work with the Head of Scientific Compute on long‑term strategy and architecture; translate future computational needs into cloud and HPC requirements and operational policy; collaborate with HPC and cloud vendors to meet researcher needs
Essential Knowledge, Skills and Experience
* Bachelor’s or Master’s degree in Computer Science, Computational Biology, Engineering or related discipline (PhD desirable)
* 3+ years (5+ years for senior level) of HPC system experience in research, biomedical or academic environments
* Ability to work collaboratively with multidisciplinary research teams and translate computational needs into technical solutions
* Excellent communication and documentation skills for both technical and non‑technical audiences
Technical Expertise – Regular Level
* Extensive experience using HPC clusters (or cloud computing) in scientific or research settings
* Proficiency in Linux system administration, networking and parallel computing (MPI, OpenMP, CUDA or ROCm)
* Experience with HPC job schedulers (Slurm preferred) and parallel file systems (Lustre, BeeGFS, GPFS)
Technical Expertise – Senior Level
* Extensive experience designing, deploying and managing HPC clusters (or cloud computing) in scientific or research settings
* Strong proficiency in Linux system administration, networking and parallel computing (MPI, OpenMP, CUDA or ROCm)
* Extensive expertise with administering HPC job schedulers (Slurm preferred) and parallel file systems (Lustre, BeeGFS, GPFS)
Technical Expertise – All Levels
* Familiarity with containerisation, workflow automation and orchestration tools used in bioinformatics and AI/ML
* Skilled in scripting and automation using Python, Bash and configuration management tools (Ansible, Terraform)
* Demonstrated experience profiling and optimising scientific or machine‑learning workloads on large‑scale clusters
* Understanding of distributed computing frameworks and GPU‑based acceleration techniques
Benefits
* Enhanced holiday pay
* Pension
* Life Assurance
* Income Protection
* Private Medical Insurance
* Hospital Cash Plan
* Therapy Services
* Perkbox
* Electric Car Scheme
Terms of Appointment
* Applicants must have the right to work in the United Kingdom. Exceptional international applicants may be considered for sponsorship where appropriate
* Must be based in, or within easy commuting distance of, Oxford
* During peak periods, some longer hours and working across multiple time zones may be required due to the global nature of the programme
Further Information
Working for EIT means joining a collaborative, inclusive team that values curiosity, emotional intelligence and a shared commitment to excellence. If you are motivated to make an impact through high‑performance computing in biological research, we encourage you to apply.