Data Engineer – Newcastle (Hybrid | 3 days onsite)
We are supporting the recruitment of Data Engineers to join a global consulting firm's high-performing Advanced Technology Centre, which delivers large-scale data and technology solutions for both public and private sector clients.
This is a hands‑on engineering role focused on building modern, scalable data platforms that enable analytics, AI, and real‑time decision‑making. You’ll work in a collaborative, cloud‑first environment with a strong emphasis on engineering best practice, continuous learning, and career development within a global delivery network.
Please note: this role requires BPSS and SC security clearance, meaning candidates must have 5 years’ continuous UK residency and be a British/EU passport holder or hold Indefinite Leave to Remain.
Responsibilities
* Design, build, and maintain scalable data pipelines supporting batch and real‑time processing
* Develop streaming and event‑driven solutions using technologies such as Kafka, Flink, or Spark
* Build and optimise data pipelines primarily using Java, with exposure to Python and modern data tooling
* Integrate data from multiple sources using AWS services (e.g. Kinesis, MSK, Lambda, Glue)
* Apply data engineering best practices including data modelling, lineage, governance, and quality controls
* Contribute to cloud‑based data architecture using modern patterns such as medallion architecture
* Support CI/CD pipelines and deployment processes using tools such as Jenkins, Azure DevOps, or GitHub Actions
* Implement Infrastructure as Code using Terraform or CloudFormation
* Work with containerised environments using Docker and Kubernetes (EKS)
* Collaborate with analytics, data science, and product teams to deliver high‑quality datasets
* Participate in code reviews and support knowledge sharing across engineering teams
* Provide mentorship and guidance to junior engineers where required
Required Skills & Experience
* 3+ years’ experience in data engineering or large‑scale data platform development
* Strong programming skills in Java (preferred) or Python
* Experience with Kafka, Flink, or Spark (streaming experience highly desirable)
* Strong understanding of streaming concepts (event time, state management, backpressure)
* Experience building ETL/ELT or real‑time data pipelines
* Strong knowledge of CI/CD practices and tools (Jenkins, Azure DevOps, GitHub Actions, etc.)
* Experience with cloud platforms, ideally AWS (Azure or GCP also considered)
* Hands‑on experience with Terraform and containerisation (Docker, Kubernetes/EKS)
* Exposure to distributed systems and large‑scale data architectures
* Understanding of data governance, security, and data quality principles
* Experience working in Agile delivery environments
* Strong communication skills with ability to engage technical and non‑technical stakeholders
Desirable Skills & Experience
* Exposure to Databricks, Snowflake, or BigQuery
* Cloud or data engineering certifications (AWS / Azure / GCP)
* Experience in consulting or client‑facing environments
* Experience mentoring or leading junior engineers
* Background in low‑latency or real‑time data systems
Summary
This is an excellent opportunity for a Data Engineer looking to work on modern cloud‑native data platforms within a fast‑paced, collaborative engineering environment. The role offers strong technical variety across streaming, cloud, and DevOps practices, along with clear opportunities for progression within a global technology organisation.