Salary: £60,000 to £66,000 per year

Requirements:
- Strong experience developing ETL/ELT pipelines using PySpark and Python
- Hands-on experience with the Microsoft Fabric lakehouse or similar cloud data platforms (Azure Synapse Analytics, Databricks)
- Proficiency with Jupyter/Fabric Notebooks for data engineering workflows
- Solid understanding of data lakehouse architecture patterns and the medallion architecture
- API integration experience
- Experience with Delta Lake or similar lakehouse storage formats
- Strong SQL skills for data manipulation, transformation, and quality validation
- Previous experience in manufacturing environments is highly desirable

Responsibilities:
- Support the Material Spend Project by extracting, transforming, and analyzing large data sets
- Drive actionable insights and support strategic decision-making based on data analysis
- Develop and maintain ETL/ELT pipelines using PySpark and Python
- Collaborate with teams on projects involving cloud data platforms
- Use Jupyter/Fabric Notebooks for effective data engineering workflows
- Ensure data quality and integrity through SQL data manipulation and validation
- Integrate with APIs and work with lakehouse storage formats as required
- Be onsite in Dudley, West Midlands, 2-3 days per month

Technologies: API, Azure, Cloud, Databricks, ETL, Fabric, Support, Jupyter, Python, PySpark, SQL, ServiceNow

More: We are a forward-thinking company seeking a highly skilled Data Engineer to join our team on an initial 6-month contract. The project focuses on material spend, requiring extensive data extraction and analysis from diverse sources. The day rate for this position ranges from £400 to £440 per day, and we are located in Dudley, West Midlands, where you'll need to be onsite a couple of days each month. We value flexibility, collaboration, and the drive to turn data into actionable insights.

Last updated: week 9 of 2026