Salary: £50,000 - £90,000 per year

Requirements:
- Bachelor's or Master's degree in Computer Science, Software Engineering, Data Science, or a related field, or equivalent practical experience.
- Experience as a Site Reliability Engineer or in a similar role with a strong focus on data infrastructure management.
- Understanding of Site Reliability Engineering (SRE) practices.
- Proficiency in data technologies such as relational databases, data warehousing, big data platforms (e.g., Hadoop), data streaming (e.g., Kafka), and cloud services (e.g., AWS, GCP, Azure).
- Programming skills in languages like Python (NumPy, pandas, PySpark), Java (core Spark with Java, functional interfaces, collections), or Scala, with experience in automation and scripting.
- Experience with containerization and orchestration tools like Docker and Kubernetes is a plus.
- Experience with data governance, data security, and compliance best practices on GCP.
- Understanding of software development methodologies and best practices, including version control (e.g., Git) and CI/CD pipelines.
- Any experience in cloud computing and data-intensive applications and services, ideally on Google Cloud Platform (GCP), would be highly beneficial.
- Experience with data quality assurance and testing on GCP.
- Proficiency with GCP data services (BigQuery, Dataflow, Data Fusion, Dataproc, Cloud Composer, Pub/Sub, Google Cloud Storage).
- Understanding of logging and monitoring using tools such as Cloud Logging, ELK Stack, AppDynamics, New Relic, and Splunk.
- Knowledge of AI and ML tools is a plus.
- Google Associate Cloud Engineer or Data Engineer certification is a plus.
- Experience in data engineering or data science.

Responsibilities:
- Automate data tasks on Google Cloud Platform (GCP).
- Work with data domain owners, data scientists, and other stakeholders to ensure data is consumed effectively on GCP.
- Design, build, secure, and maintain data infrastructure, including data pipelines, databases, data warehouses, and data processing platforms on GCP.
- Measure and monitor the quality of data on GCP data platforms.
- Implement robust monitoring and alerting systems to proactively identify and resolve issues in data systems. Respond to incidents promptly to minimize downtime and data loss.
- Develop automation scripts and tools to streamline data operations and accommodate growing data volumes and user traffic.
- Optimize data systems to ensure efficient data processing, reduce latency, and improve overall system performance.
- Collaborate with data and infrastructure teams to forecast data growth and plan for future capacity requirements.
- Ensure data security and compliance with data protection regulations. Implement best practices for data access controls and encryption.
- Continuously assess and improve data infrastructure and data processes to enhance reliability, efficiency, and performance.
- Maintain clear and up-to-date documentation related to data systems, configurations, and standard operating procedures.

Technologies: AI, AWS, Azure, Big Data, BigQuery, CI/CD, Cloud Composer, Docker, ELK, GCP, Git, Hadoop, Java, Kafka, Kubernetes, NumPy, pandas, PySpark, Python, Scala, Security, Spark, Splunk, Support

More: At CME Group, we are the world's leading derivatives marketplace, and we provide a unique opportunity for our employees to impact markets worldwide, transform industries, and shape tomorrow's career landscape. We invest in your success while fostering a collaborative atmosphere among a team of leading experts. We offer a comprehensive benefits package which includes a bonus programme, equity programme, employee stock purchase plan, private medical and dental coverage, mental health benefits, group pension plan, and ongoing employee development. We value diversity and are committed to ensuring that every employee's unique experiences and skills are recognized.
Come join us in our hybrid working environment at CME Group, where futures are made.

Last updated: week 18 of 2026