Data Engineer (Azure | Databricks | ETL Pipelines)
Contract | Remote | UK Public Sector Project

Ceox Services is seeking a Data Engineer to support delivery of key data components within our evolving data ecosystem. The role works closely with architects, analysts, engineers, and wider delivery teams to implement pipeline builds, data models, and integration patterns aligned to the organisation's new Target Operating Model for Data. You will contribute to the development of scalable dataflows, optimise ingestion and transformation activities, and ensure solutions meet technical standards, performance expectations, and security controls. This is a hands-on engineering role, ideal for someone who enjoys building, shaping, and improving modern data platforms.

Who We're Looking For

A highly capable Data Engineer with hands-on experience building ETL pipelines, implementing Delta Lake architectures, and working across Azure platform services. You will need strong problem-solving ability, a solid engineering mindset, and the ability to work collaboratively within a multi-disciplinary team. If you are confident writing production-grade code, applying design patterns, and building scalable, high-performing data products, this role offers technical autonomy, ownership, and challenge.

Key Responsibilities

- Build and maintain physical data models, ETL pipelines, and code in cloud data platforms.
- Support ingestion activity and the onboarding of new data sources.
- Assist in the design, development, and deployment of Azure platform services (Fabric, Synapse, ADLS).
- Work with Databricks, Delta Lake, Unity Catalog, and Delta Sharing for dataflows and collaboration.
- Construct raw, refined, and curated data layers; catalogue assets appropriately.
- Validate solutions against functional and non-functional requirements.
- Deliver datasets, transformations, and performance-optimised data products.
- Improve processes, engineering patterns, and reusable tooling.
- Monitor and measure pipeline performance; support incident resolution.
- Ensure documentation meets acceptance standards and is approved centrally.
- Actively engage in Agile ceremonies and governance forums.

Mandatory Requirements

- Strong experience with Python, PySpark, and SQL for data engineering.
- Hands-on experience with Azure Databricks.
- Strong knowledge of Fabric, Synapse, ADF, and ADLS for ETL pipelines.
- Experience with Delta Lake, Parquet file formats, Unity Catalog, and Microsoft Purview.
- Familiarity with event-driven data ingestion (Event Grid / pub-sub).
- Understanding of SOLID principles, asynchronous programming, and Mediator/Factory patterns.
- Experience delivering unit and integration testing in Databricks.
- Knowledge of secure ETL design with Entra ID / SCIM integration.
- Understanding of Azure best practice, APIM, and platform governance.
- Ability to build and serve Power BI models via Databricks data sources.

Desirable

- Prior experience working within UK public sector environments.

Soft Skills

- Strong stakeholder communication and cross-team collaboration.
- Analytical, solution-focused mindset.
- Able to work independently, take ownership, and drive progress.
- Commitment to clean, scalable, well-documented engineering.
- Adaptable, proactive, and comfortable working in dynamic delivery environments.

Contract Details

Contract Type: Freelance / Contract
Location: Remote (UK-based candidates preferred)
Start Date: ASAP
Clearance: Candidates must be eligible to work with UK Government departments (BPSS).