Build Something From the Ground Up

We're embarking on a digital transformation where data sits at the core of how we operate, make decisions, and scale. This isn't a role where you inherit a mature platform: you'll be helping design and build it from scratch.

As our Data Engineer, you'll lay the technical foundations of a modern, cloud-native data platform built on Microsoft Fabric. You'll design, develop, and optimise the pipelines, architecture, and data structures that power analytics, AI-driven insights, and business reporting.

The foundation you build won't just support today's reporting; it will enable the machine learning, intelligent automation, and AI capabilities that define how we operate tomorrow. Your work will directly shape how the entire organisation accesses, trusts, and acts on data.

What You'll Be Doing

Working closely with the Digital Transformation Director, Group IT Manager, and business stakeholders, your day-to-day will include:

• Designing and building scalable, reliable data pipelines within the Microsoft Fabric ecosystem
• Developing and maintaining Lakehouse and Data Warehouse solutions
• Ingesting data from REST APIs, databases, and third-party systems into a centralised platform
• Transforming, modelling, and structuring data to support analytics and Power BI reporting
• Ensuring data quality, integrity, and consistency across all data assets
• Monitoring, troubleshooting, and continuously optimising pipeline performance
• Collaborating with Power BI developers to deliver clean, analytics-ready datasets
• Preparing and structuring high-quality datasets to support AI and machine learning workloads
• Implementing and maintaining data architecture standards, patterns, and best practices
• Supporting the adoption of Microsoft Fabric's AI and Copilot capabilities as they mature

What We're Looking For

We don't expect every candidate to have every skill below.
If you're strong in most areas and genuinely excited about building something meaningful, we want to hear from you.

Microsoft Fabric

• Hands-on experience with Lakehouse, Data Warehouse, and pipeline components
• Understanding of modern, cloud-native data architecture principles
• Awareness of Fabric's AI and Copilot features and how they integrate into data workflows (these are emerging capabilities, not a day-one requirement)

Data Engineering Fundamentals

• Proven experience designing and managing ETL/ELT pipelines
• Comfortable working with large, complex datasets in production environments
• Strong understanding of data modelling principles, including star schema and dimensional modelling

SQL & Data Transformation

• Advanced SQL query writing and performance optimisation
• Experience transforming data across relational and non-relational sources

API & Integration

• Experience working with REST APIs, authentication methods, and data ingestion pipelines
• Ability to operationalise external and third-party data sources reliably

AI & ML Awareness

• Understanding of how AI and ML pipelines consume and depend on structured data
• Familiarity with preparing datasets for model training, inference, and AI-powered reporting
• Curiosity about how tools like Microsoft Fabric Copilot can accelerate data workflows

Governance & Quality

• Awareness of data governance frameworks, performance tuning, and optimisation
• Commitment to long-term data quality, maintainability, and reliability

What You'll Bring

• A mindset focused on building data solutions that are scalable, reliable, and future-proof
• Strong problem-solving instincts and confidence tackling complex data challenges
• The ability to communicate clearly with both technical and business audiences
• A proactive approach: you look for ways to improve without waiting to be asked
• An ownership mentality: you care about what happens after you ship it