Responsibilities:
1. Design and develop end-to-end data pipelines (batch and streaming) using Azure Databricks, Spark, and Delta Lake.
2. Implement the Medallion Architecture and ensure consistency across raw, enriched, and curated data layers.
3. Build and optimise ETL/ELT processes using Azure Data Factory and PySpark.
4. Enforce data governance through Azure Purview and Unity Catalog.
5. Apply DevOps and CI/CD practices using Git and Azure DevOps.
6. Partner with analysts and business stakeholders to ensure data quality and usability.
7. Contribute to performance optimisation and cost efficiency across data solutions.
Required Skills & Experience:
1. Proven hands-on experience with Azure Databricks, Data Factory, Delta Lake, and Synapse.
2. Strong proficiency in Python, PySpark, and advanced SQL.
3. Understanding of Lakehouse architecture and medallion data patterns.
4. Familiarity with data governance, lineage, and access control tools.
5. Experience in Agile environments, with solid CI/CD and Git knowledge.
6. Power BI experience desirable; exposure to IoT data pipelines a plus.
7. Ideally, strong experience in logistics, oil & gas, shipping or transport, and consulting.
This contract offers an exciting chance to help shape a modern data platform within a business that values innovation and collaboration. If you're a proactive and technically strong data engineer looking to make an immediate impact, we'd like to hear from you.
McGregor Boyall is an equal opportunity employer and does not discriminate on any grounds.
Job ID BBBH167857