The role
Position: Data Engineer
Contract type: Full-time/Permanent
Reporting to: Head of Data
Location: London
Overview of role
Zodiac Maritime is undergoing an exciting data transformation, and we’re looking for a talented Data Engineer to join our growing data team. In this role, you’ll be instrumental in building and deploying modern Azure Databricks-based data solutions, enabling the business to make faster, data-driven decisions.
You’ll work hands-on with Azure Databricks, Azure Data Factory, Delta Lake, and Power BI to design scalable data pipelines, implement efficient data models, and ensure high-quality data delivery. This is a fantastic opportunity to shape the future of data at Zodiac Maritime while working with cutting-edge cloud technologies.
Key responsibilities and primary deliverables
* Design, develop, and optimize end-to-end data pipelines (batch & streaming) using Azure Databricks, Spark, and Delta Lake.
* Implement Medallion Architecture to structure raw, enriched, and curated data layers efficiently.
* Build scalable ETL/ELT processes with Azure Data Factory and PySpark.
* Work with the Data Architecture team to enforce data governance, using Azure Purview and Unity Catalog for metadata management, lineage, and access control.
* Ensure data consistency, accuracy, and reliability across pipelines.
* Collaborate with analysts to validate and refine datasets for reporting.
* Apply DevOps & CI/CD best practices (Git, Azure DevOps) for automated testing and deployment.
* Optimize Spark jobs, Delta Lake tables, and SQL queries for performance and cost efficiency.
* Troubleshoot and resolve data pipeline issues proactively.
* Partner with Data Architects, Analysts, and Business Teams to deliver end-to-end solutions.
* Stay ahead of emerging data technologies (e.g., streaming with Kafka/Event Hubs, Knowledge Graphs).
* Advocate for best practices in data engineering across the organization.
Skills profile
Relevant experience & education
* Hands-on experience with Azure Databricks, Delta Lake, Data Factory, and Synapse.
* Strong understanding of Lakehouse architecture and medallion design patterns.
* Proficient in Python, PySpark, and SQL (advanced query optimization).
* Experience building scalable ETL pipelines and data transformations.
* Knowledge of data quality frameworks and monitoring.
* Experience with Git, CI/CD pipelines, and Agile methodologies.
* Ability to write clean, maintainable, and well-documented code.
* Experience with Power BI or other visualization tools.
* Knowledge of IoT data pipelines.
Due to the high volume of applications, we regret that only shortlisted candidates will be contacted.