Data Engineer Azure Data Platform

Warrington
Synextra
Data engineer
Posted: 18h ago

Job Description

About Synextra

Synextra is a Microsoft-specialist Managed Service Provider headquartered in Warrington, operating as a premium partner to regulated mid-market organisations including law firms, financial services firms, and mortgage lenders. We're deliberately small - around 35 people - because we believe the best outcomes come from technical depth, not headcount. Our AI Services Division is growing fast, and we're building out a serious data and engineering capability to match. This is a chance to get in early and shape how that function operates.

The Role

We're looking for a technically driven Azure Data Engineer to join our data platform team. You'll design, build, and maintain production-grade data pipelines on Microsoft Azure, transforming complex, diverse datasets into analytics-ready formats that power business intelligence and AI initiatives for our clients and internally.

The ideal candidate treats pipelines and infrastructure as code, with a genuine passion for software engineering in a data context. You'll work across the modern Azure data stack - ADF, ADLS Gen2, PySpark, Delta Lake - with increasing exposure to Microsoft Fabric as the platform matures. You'll collaborate closely with customers and internal teams to ensure data is structured and governed for reliable downstream consumption.

This is a hands-on engineering role with room to grow into leadership: you'll champion DevOps best practices, contribute to architectural decisions, and help mentor junior engineers as the team scales.

Responsibilities

* Architect and write production-grade ELT/ETL data pipelines using PySpark and Python within the Azure ecosystem
* Build custom, reusable data processing frameworks and libraries in Python/Scala to streamline ingestion and transformation tasks across the engineering team
* Programmatically ingest large volumes of structured and unstructured data from REST APIs, streaming platforms (e.g. Event Hubs, Kafka), and legacy databases into ADLS Gen2 and OneLake
* Develop structured data models aligned to Lakehouse, Medallion Architecture, and Delta Lake patterns
* Continuously profile, debug, and optimise Spark jobs, SQL queries, and Python scripts for maximum performance and cost-efficiency at scale
* Champion DevOps best practices: implement infrastructure-as-code (Terraform), automated testing, and CI/CD deployment pipelines via Git and Azure DevOps
* Identify patterns in recurring issues and engineer permanent solutions
* Write comprehensive unit and integration tests for all data pipelines to ensure data integrity; enforce data governance protocols, RBAC, and encryption standards across all environments
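To give a flavour of the pipelines-as-code approach the responsibilities above describe, here is a minimal, hypothetical sketch of a reusable transformation framework with a medallion-style bronze-to-silver cleansing step. All names are illustrative, and a production version at this level would operate on PySpark DataFrames and Delta tables rather than plain Python records:

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative only: a batch is a list of record dicts, and a Step is a pure
# function from one batch to another. A Pipeline chains Steps in order.
Record = dict
Step = Callable[[list], list]

@dataclass
class Pipeline:
    steps: list

    def run(self, batch: list) -> list:
        for step in self.steps:
            batch = step(batch)
        return batch

def drop_nulls(key: str) -> Step:
    """Bronze-to-silver style cleansing: discard records missing a field."""
    return lambda batch: [r for r in batch if r.get(key) is not None]

def rename(old: str, new: str) -> Step:
    """Standardise field names for reliable downstream consumption."""
    return lambda batch: [
        {new if k == old else k: v for k, v in r.items()} for r in batch
    ]

bronze = [{"Cust_ID": 1, "amount": 100}, {"Cust_ID": None, "amount": 50}]
silver = Pipeline([drop_nulls("Cust_ID"), rename("Cust_ID", "customer_id")]).run(bronze)
print(silver)  # only the cleansed, renamed record survives
```

Because each step is a small pure function, steps can be unit-tested in isolation and shared across pipelines, which is the point of building reusable libraries rather than one-off scripts.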

Requirements

Essential Technical Skills

* Advanced proficiency in Python and PySpark, writing clean, modular, object-oriented code for data transformations
* Strong command of SQL (T-SQL, Spark SQL) for data exploration, validation, and final-stage modelling
* Deep hands-on experience with Microsoft Fabric and related Azure data tooling such as Azure Data Factory (ADF) and Azure Data Lake Storage (ADLS Gen2)
* Practical experience with Git, branching strategies, automated testing (e.g. pytest), and CI/CD orchestration via Azure DevOps
* Proven commercial track record of deploying complex data solutions on the Microsoft Azure platform
* Experience collaborating with a range of stakeholders to structure data for downstream consumption (e.g. MLflow, Power BI semantic models)
* Infrastructure-as-code experience with Terraform for Azure resource provisioning
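As a hedged illustration of the automated-testing expectation above, here is the kind of pytest-style unit test a candidate might write; the function under test is hypothetical, and in a real pipeline it would typically wrap PySpark or SQL transformation logic:

```python
# Illustrative only: a small, deterministic transformation with
# pytest-discoverable tests asserting its data-integrity guarantees.

def normalise_postcode(raw: str) -> str:
    """Uppercase and trim a UK postcode, collapsing internal whitespace."""
    return " ".join(raw.upper().split())

def test_normalise_postcode_trims_and_uppercases():
    assert normalise_postcode("  wa1  1aa ") == "WA1 1AA"

def test_normalise_postcode_is_idempotent():
    once = normalise_postcode("wa1 1aa")
    assert normalise_postcode(once) == once

# pytest would collect these automatically; call them here for illustration.
test_normalise_postcode_trims_and_uppercases()
test_normalise_postcode_is_idempotent()
print("all tests passed")
```

Tests like these run in CI on every commit (the Git plus Azure DevOps workflow named above), so broken transformations are caught before a pipeline deploys.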

Desirable Technical Skills

* Familiarity with streaming data architectures (Spark Structured Streaming)
* Knowledge of complementary modern data stack tools such as dbt for SQL-based transformations
* Experience integrating Large Language Models (LLMs) or operationalising AI/ML models

Personal Qualities

* Exceptional problem-solving abilities and a persistent, detail-oriented approach to debugging complex code
* Strong communication skills to effectively translate business requirements into technical architectures
* A proactive mindset focused on continuous learning and staying ahead of the rapidly evolving data landscape
* Willingness to review code submissions, enforce coding standards, and mentor junior engineers on the team

Preferred Background

* 3–5+ years in software engineering, data engineering, or Big Data environments with a code-first approach
* Proven commercial experience deploying and maintaining complex data solutions on Microsoft Azure
* Experience working in cross-functional teams

