Job Description

We are currently looking for a Mid-Senior Data Engineer to join our growing Data Engineering team and help develop, maintain, support, and integrate our expanding portfolio of data systems. You will be instrumental in designing, building, and maintaining robust, scalable data pipelines that power our operational systems and analytical platforms. You will work with a diverse range of data sources, integrating both real-time streams and micro-batches and connecting to varied endpoints to move data at speed and at scale.

The right candidate will have broad data engineering knowledge with a strong focus on Databricks, and will be keen to expand that knowledge, learning new technologies along the way while supporting both new and legacy technologies and processes. You will be coding, testing, and documenting new or modified data systems, creating scalable, repeatable, and secure pipelines and applications for both operational data and analytics, serving consumers inside and outside the business. You will grow our capabilities, solving new data problems and challenges every day.

Key Responsibilities:

- Design, Build, and Optimise Real-Time Data Pipelines: Develop and maintain robust and scalable stream and micro-batch data pipelines using Databricks, Spark (PySpark/SQL), and Delta Live Tables (see the first sketch after this list).
- Implement Change Data Capture (CDC): Implement efficient CDC mechanisms to capture and process data changes from various source systems in near real-time (second sketch below).
- Master Delta Lake: Leverage the full capabilities of Delta Lake, including ACID transactions, time travel, and schema evolution, to ensure data quality and reliability (third sketch below).
- Champion Data Governance with Unity Catalog: Implement and manage data governance policies, data lineage, and fine-grained access control using Databricks Unity Catalog (fourth sketch below).
- Enable Secure Data Sharing with Delta Sharing: Design and implement secure, governed data sharing solutions that distribute data to both internal and external consumers without data replication (fifth sketch below).
- Integrate with Web Services and APIs: Develop and manage integrations that push operational data to key external services as well as internal APIs.
- Azure Data Ecosystem: Work extensively with core Azure data services, including Azure Data Lake Storage (ADLS) Gen2, Azure Functions, and Azure Event Hubs, along with CI/CD tooling.
- Data Modelling and Warehousing: Apply strong data modelling principles to design and implement logical and physical data models for our analytical and operational data stores.
- Monitoring and Performance Tuning: Proactively monitor data pipeline performance, identify bottlenecks, and implement optimisations to ensure low latency and high throughput.
- Collaboration and Mentorship: Collaborate with cross-functional teams, including software engineers, data scientists, and product managers, and mentor junior data engineers.
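To give a flavour of the pipeline work described above, here is a minimal Delta Live Tables sketch. It is illustrative only: the landing path, table names, and event schema are hypothetical, and `import dlt` resolves only when the code runs inside a Databricks DLT pipeline, not as a standalone script.

```python
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw events ingested incrementally from cloud storage.")
def raw_events():
    # Auto Loader discovers new files as they land (micro-batch ingestion).
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("abfss://landing@example.dfs.core.windows.net/events/")  # hypothetical path
    )

@dlt.table(comment="Cleaned events with a typed timestamp, ready for consumers.")
@dlt.expect_or_drop("valid_event_id", "event_id IS NOT NULL")
def clean_events():
    # Reads the upstream table as a stream, so changes flow through continuously.
    return (
        dlt.read_stream("raw_events")
        .withColumn("event_ts", F.to_timestamp("event_ts"))
        .select("event_id", "event_ts", "payload")
    )
```

The expectation decorator illustrates how DLT enforces data quality declaratively rather than through hand-rolled validation code.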
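CDC processing typically lands in a Delta Lake MERGE. A minimal sketch, assuming a Databricks session where `spark` is already defined; the table, columns, and change batch are all hypothetical:

```python
from delta.tables import DeltaTable

# A hypothetical micro-batch of change records, each tagged with the
# operation that produced it in the source system.
updates_df = spark.createDataFrame(
    [(1, "Alice", "UPDATE"), (2, "Bob", "INSERT"), (3, None, "DELETE")],
    "customer_id INT, name STRING, op STRING",
)

target = DeltaTable.forName(spark, "main.ops.customers")  # hypothetical target table

# One atomic MERGE applies the inserts, updates, and deletes from the change feed.
(
    target.alias("t")
    .merge(updates_df.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedDelete(condition="s.op = 'DELETE'")
    .whenMatchedUpdate(condition="s.op = 'UPDATE'", set={"name": "s.name"})
    .whenNotMatchedInsert(
        condition="s.op = 'INSERT'",
        values={"customer_id": "s.customer_id", "name": "s.name"},
    )
    .execute()
)
```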
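Time travel and schema evolution are everyday Delta Lake tools in this role. A short sketch, again with hypothetical table, column, and version values:

```python
# Read the table as it existed at an earlier version, e.g. to audit a change
# or recover from a bad load.
df_v5 = spark.read.format("delta").option("versionAsOf", 5).table("main.ops.customers")

# Append a batch that adds a new column, letting Delta evolve the table schema
# instead of failing the write.
new_batch_df = spark.createDataFrame(
    [(4, "Dana", "gold")], "customer_id INT, name STRING, tier STRING"
)
(
    new_batch_df.write.format("delta")
    .mode("append")
    .option("mergeSchema", "true")
    .saveAsTable("main.ops.customers")
)
```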
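Governance in Unity Catalog is largely declarative SQL. A minimal sketch of fine-grained grants and an audit query, with hypothetical catalog, schema, table, and group names:

```python
# To query a table, a principal needs USE CATALOG, USE SCHEMA, and SELECT.
spark.sql("GRANT USE CATALOG ON CATALOG main TO `data_analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA main.reporting TO `data_analysts`")
spark.sql("GRANT SELECT ON TABLE main.reporting.daily_orders TO `data_analysts`")

# Table history is a common starting point for lineage and audit questions.
spark.sql("DESCRIBE HISTORY main.reporting.daily_orders").show(truncate=False)
```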
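On the consumer side, Delta Sharing lets recipients read shared tables without a copy landing in their own storage. A minimal sketch using the open-source client (pip install delta-sharing); the profile path and table coordinates are hypothetical:

```python
import delta_sharing

# A .share profile file holds the endpoint URL and bearer token that the
# data provider issues to the recipient.
table_url = "/path/to/config.share#sales_share.reporting.daily_orders"

# Read the shared table straight into pandas; nothing is replicated locally.
df = delta_sharing.load_as_pandas(table_url)
print(df.head())
```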