Requirements:
Must have:
- Proven experience building production-ready solutions on Google Cloud
- Expertise with batch and streaming frameworks such as Apache Spark or Apache Beam
- Strong understanding of data storage, pipeline patterns, and event-driven architectures
- Experience with CI/CD, version control, automated testing, and Agile delivery
- Ability to communicate clearly with both technical and non-technical stakeholders
- Mentoring or coaching experience
- Bonus skills: Kafka, enterprise data platform migrations, RDBMS experience (Postgres, MySQL, Oracle, SQL Server), and exposure to ML pipelines
- Eligibility for UK Security Clearance (SC or DV) if required
Responsibilities:
- Lead the design, development, and deployment of scalable data pipelines using BigQuery, Dataflow, Dataproc, and Pub/Sub
- Automate ETL/ELT workflows and orchestrate pipelines with tools such as Cloud Composer
- Contribute to architecture and end-to-end solution design for complex data platforms
- Set engineering standards and ensure high-quality code, deployment, and documentation practices
- Collaborate with clients and internal teams, translating business requirements into practical solutions
- Mentor and coach junior engineers to grow their skills and adopt best practices
Company:
We are partnering with a leading technology consultancy that helps organizations harness their data to modernize platforms and drive business outcomes. As a Data Engineer, you will be at the forefront of designing and delivering cloud-native solutions on Google Cloud, turning complex datasets into actionable insights. This role offers the chance to work on high-impact projects in a supportive environment where mentoring and learning are highly valued, and your work will directly contribute to the success of major data programs.