Job Description
We are currently looking for a Mid-Senior Data Engineer to join our growing Data Engineering team. The role involves developing, maintaining, supporting, and integrating our data systems. You will design, build, and maintain scalable data pipelines that support our operational and analytical platforms, working with diverse data sources, including real-time streams and micro-batches.
The ideal candidate will have extensive Databricks experience, with a strong focus on building robust data pipelines using Spark and Delta Lake. You will code, test, and document data systems, creating secure, scalable pipelines for operational data and analytics, supporting both current and legacy systems.
Key Responsibilities:
1. Design, build, and optimize real-time data pipelines using Databricks, Spark, and Delta Live Tables.
2. Implement Change Data Capture (CDC) mechanisms for near real-time data processing.
3. Leverage Delta Lake features such as ACID transactions, time travel, and schema evolution.
4. Manage data governance policies with Unity Catalog, including data lineage and access controls.
5. Develop secure data sharing solutions with Delta Sharing.
6. Integrate data pipelines with web services and APIs.
7. Work extensively with Azure data services like ADLS Gen2, Azure Functions, and Event Hubs.
8. Apply data modelling principles to design data warehouses and analytical data models.
9. Monitor pipeline performance and optimize for low latency and high throughput.
10. Collaborate with cross-functional teams and mentor junior engineers.
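For candidates less familiar with the term, the Change Data Capture work in responsibility 2 boils down to applying ordered change events (inserts, updates, deletes) to a current-state table; on Databricks this is typically expressed as a Delta Lake MERGE. A minimal pure-Python sketch of the pattern follows — the function name, record fields, and data are all hypothetical, chosen only to illustrate the upsert logic:

```python
# Illustrative sketch of the CDC upsert pattern (not Databricks code).
# On Databricks this logic would normally be a Delta Lake MERGE INTO;
# here plain dicts stand in for the target table so the example is
# self-contained.

def apply_cdc_events(current, events):
    """Merge a batch of change events into the current-state table.

    current: dict mapping primary key -> row (a dict of column values)
    events:  list of dicts carrying an 'op' field: 'insert', 'update',
             or 'delete', plus the row's columns
    """
    for event in events:
        key = event["id"]
        if event["op"] == "delete":
            current.pop(key, None)   # tolerate deletes for unseen keys
        else:                        # 'insert' and 'update' both upsert
            current[key] = {k: v for k, v in event.items() if k != "op"}
    return current

state = {1: {"id": 1, "name": "alpha"}}
events = [
    {"op": "update", "id": 1, "name": "alpha-v2"},
    {"op": "insert", "id": 2, "name": "beta"},
    {"op": "delete", "id": 1},
]
state = apply_cdc_events(state, events)
# state is now {2: {"id": 2, "name": "beta"}}
```

In production, the same idea is applied incrementally to streaming change feeds, with ordering and late-arrival handling provided by the platform rather than hand-written loops.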
Qualifications:
* Proven expertise with Databricks Lakehouse Platform.
* Strong Spark, PySpark, and SQL skills.
* Experience with real-time data processing and Delta Lake.
* Knowledge of data governance, CDC, and secure data sharing.
* Experience integrating with web services and APIs.
* Solid Azure data services experience.
* Understanding of data modelling and warehousing.
* Experience with CI/CD pipelines.
* Excellent problem-solving and communication skills.
Desirable: Experience with GCP and BigQuery.
At Frasers Group, we are committed to innovation and excellence in retail, offering a dynamic and rewarding environment for our employees. We provide various benefits and opportunities for growth, including recognition schemes and performance bonuses.