ETEL Group is undertaking an exciting Technology Transformation Programme for Stapleton’s Tyre Services.
Over the course of the transformation programme, Stapleton’s are migrating their current IT systems to best-of-breed, cloud-based solutions. A central part of this transformation is the recently developed Data Platform.
The Data Platform Manager (DPM) owns the ETEL data platform and is tasked with maintaining and improving its performance. This is a hands-on product management role: you will shape and run the platform day-to-day, lead and coach a small data engineering team, and partner with the wider business to keep the platform reliable, well-governed and genuinely useful.
We are looking for someone who is technically hands-on. You should be comfortable working alongside your team — building and debugging PySpark pipelines, designing data models, troubleshooting in Fabric/Synapse — as well as managing the product roadmap. You will own data engineering delivery end-to-end: ensuring the right data is provisioned to our lake and warehouse environments, that it’s accessible to downstream applications and analytics teams, and that appropriate governance is applied across every data product throughout its lifecycle.
The DPM acts as a subject matter expert across the organisation, liaising with business product managers, technical staff and data consumers to ensure business goals are met. A key aspect of this role is the ability to forge strong working relationships with key departments and personnel to get things done and ensure that the data in the data platform remains trusted and useful.
The ETEL IT department is adopting hybrid working, where each team member works both remotely and in the office to suit the requirements of their role. There is an occasional requirement to travel to our central offices in Letchworth, as well as to other UK office locations.
The Day to Day
Product Management
* Product manage the ETEL data platform (Microsoft Fabric / Azure Synapse).
* Work closely with business product owners and data scientists to ensure the data platform meets business needs.
* Own the data product lifecycle end-to-end: from intake and design through build, release, ongoing support, versioning and eventual retirement.
* Ensure that solutions are delivered in line with agreed architectural strategies and roadmaps.
* Ensure data is secure, accessible, well-structured and of high quality, with continuous improvement in ingestion and processing.
* Lead datalake migration activities where legacy or acquired data estates need to be consolidated onto the platform — including medallion structuring, schema evolution, backfill and cutover planning.
* Provide hands-on data engineering support and expertise across the business, including building and optimising PySpark pipelines.
* Promote and identify opportunities for data re-use, minimising copies of data across the organisation.
* Maintain and communicate data management related technical principles, policies, standards, and guidelines.
* Maintain data catalogues, data path and integration documentation.
* Lead the development and support of data loading (ETL), data platforms and business intelligence architecture.
* Liaise with the ETEL Data Protection Officer to ensure ETEL remains compliant with data protection legislation and guidelines.
Team Management
* Lead and develop a small technology data product team to build and support the ETEL Data Platform.
* Complete monthly documented one-to-one reviews with your team members, focusing on wellbeing, HR matters, delivery of assigned activities, and business and personal development needs.
* Build and promote a culture of continuous improvement across your team, including improvements to methods used, delivery speed, quality, and team skills and learning.
* Build and maintain strong working relationships with stakeholders across the business, including operations, reporting and data analytics teams.
Who we’re looking for
* Passion for data and how it supports the business.
* Demonstrable hands-on data engineering experience, including building and running production pipelines in PySpark.
* Experience with Microsoft Fabric and/or Azure Synapse; equivalent cloud data platform experience (Databricks, AWS Lake Formation) also welcome.
* Strong ability to design, build and manage data pipelines — transformation, data models, schemas, metadata and workload management.
* Solid working experience with relational databases and SQL.
* Understanding of datalake migration concepts and patterns — medallion architecture, schema evolution, incremental loads, backfill and cutover strategies.
* Strong grasp of data governance — cataloguing, lineage, access management, quality, classification — and how these apply across the data product lifecycle.
* Experience with enterprise BI and analytics tools such as Power BI and Tableau.
* Experience with DevOps practices — version control, automated builds, testing and release management using Azure DevOps and Git.
* Knowledge of data integration tools and APIs.
* Understanding of relevant data protection legislation and its impact on data handling.
* Good numerical and analytical skills.
* Strong problem-solving in a data platform environment.
* Ability to spot industry trends and connect them to organisational needs.
* Strong communication and interpersonal skills; respected as a manager and team player; able to work under pressure.
* Practical experience with project management tools and SDLC methodologies.
Professional Experience
* More than three years’ experience in data or technology roles.
* At least two years of experience in data engineering.
* Comprehensive experience with data modelling tools.
* Practical experience with DevOps tooling.
* Strong knowledge of Python.
* Knowledge of Power BI or Tableau.
Required Qualifications, Certificates or Licences
* Educated to degree level, or equivalent experience.
* Extensive industry experience in data management roles.
* Other technical qualifications will be considered.