Salary: £120,000 - £144,000 per year

Requirements:
- Strong hands-on experience designing and delivering enterprise-scale data pipelines using AWS Glue and PySpark
- Experience building and optimizing ETL processes
- Proficiency in working with raw and curated datasets
- Knowledge of best practices for data modeling, data quality, and automation
- Solid background in Spark-based engineering, particularly with PySpark
- Familiarity with Glue jobs, the Glue Catalog, S3, and other AWS-native services
- Experience working within a modern cloud data stack
- Understanding of how to structure data for analytics, reporting, and downstream consumption

Responsibilities:
- Develop and optimize scalable, production-grade data workflows
- Integrate data from multiple systems
- Ensure data is processed efficiently and to a high standard
- Build and improve ETL processes
- Collaborate with team members to maintain data quality and integrity
- Apply best practices in data modeling and automation

Technologies:
- AWS
- AWS Glue
- Cloud
- ETL
- PySpark
- Spark

More: We are supporting a university with a major data platform transformation project as they implement AWS across their environment. We are seeking a Data Engineer for a remote position, ideally with a solid background in Spark-based engineering. The role is a 3-month contract and offers a competitive rate of £500-£600 per day. Join us and be part of a transformative journey in a modern cloud data stack.

Last updated: week 10 of 2026