Role Overview
We are seeking an experienced Databricks Data Engineer to design and deliver cloud-based data integration and analytics solutions within our Insurance portfolio. This is a hands-on role focused on building scalable data pipelines, optimising performance, and applying best practices in data quality and governance within enterprise-scale environments.
You will collaborate with senior data professionals on high-impact transformation programmes, translating business requirements into robust, production-ready data solutions. Insurance domain experience is beneficial but not essential.
Key Responsibilities
* Design and deliver scalable data pipelines using Azure Databricks, Azure Data Factory, and Azure SQL
* Build and maintain ETL/ELT processes and data quality frameworks
* Develop consistent data models and analytics-ready datasets
* Partner with stakeholders to deliver effective data solutions
* Produce technical documentation and architecture artefacts
* Stay current with Azure and data engineering best practices
Essential Experience
* 3+ years’ experience in data engineering within enterprise environments
* Strong expertise in Azure Databricks, Azure Data Factory, Azure Synapse, and Azure Data Lake
* Hands-on experience with Spark (PySpark / Spark SQL) and pipeline orchestration
* Strong data modelling and data warehousing knowledge
* Proficiency in SQL, Git, CI/CD, and Agile/DevOps ways of working
Desirable Experience
* Insurance domain experience (policy, claims, regulatory data)
* Knowledge of enterprise data management, governance, metadata, and lineage
To be considered for this role, you must already have the right to work in the United Kingdom; sponsorship is not available.