What you’ll be doing – your accountabilities
1. Lead the design and implementation of robust, scalable, and secure data solutions using AWS services such as S3, Glue, Lambda, Redshift, EMR, and Kinesis, covering data pipelines, warehousing, and lakehouse architectures.
2. Drive the migration of legacy data workflows to lakehouse architectures, leveraging Apache Iceberg to enable unified analytics and scalable data management.
3. Operate as a subject matter expert across multiple data projects, providing strategic guidance on best practices in design, development, and implementation.
4. Build and optimise data pipelines for ingestion, transformation, and loading from diverse sources, ensuring high standards of data quality, reliability, and performance.
5. Own the development of automation and monitoring frameworks that capture operational KPIs and pipeline health metrics, enabling proactive performance management.
6. Identify and resolve performance bottlenecks in data workflows, ensuring optimal resource utilisation and cost-efficiency.
7. Collaborate closely with architects, Product Owners, and development teams to decompose solutions into Epics, leading the design and planning of technical components.
8. Mentor and coach engineering professionals, fostering a culture of continuous learning, innovation, and technical excellence.
9. Champion an inclusive and open team culture, leading complex projects autonomously and facilitating high-impact technical discussions.
10. Define and manage service level agreements (SLAs) for data products and production processes, ensuring reliability and accountability.
11. Develop and optimise data science procedures, including storage strategies using distributed data structures, databases, and other scalable technologies.
12. Lead the implementation of continuous improvement initiatives, enhancing team processes and delivery capabilities.
13. Serve as a trusted advisor to internal stakeholders, including data science and product teams, translating complex technical concepts into actionable solutions.
Skills required
1. Possess deep technical expertise in data engineering, with a strong command of modern practices and methodologies.
2. Recognised as an expert in AWS cloud services, particularly in designing and implementing scalable data engineering solutions.
3. Bring extensive experience in software architecture and solution design, ensuring robust and future-proof systems.
4. Hold expert-level proficiency in Python and Apache Spark, enabling efficient processing of large-scale data workloads.
5. Demonstrate the ability to set technical direction, uphold high standards for code quality, and optimise performance in data-intensive environments.
6. Adept at using automation tools and CI/CD pipelines to streamline development, testing, and deployment processes.
7. An exceptional communicator, capable of translating complex technical concepts for diverse audiences, including engineers, product managers, and senior leadership.
8. Provide thought leadership within engineering teams, fostering a culture of quality, efficiency, and collaboration.
9. Experienced in mentoring engineers, guiding them in advanced coding practices, architectural thinking, and strategic problem-solving to elevate team capabilities.
Experience you’d be expected to have
1. Experience as a Principal Engineer, with a proven track record of leading teams in best practices across design, development, and implementation. Known for mentoring engineers and cultivating a culture of continuous learning and innovation.
2. Extensive background in software architecture and solution design, with deep expertise in microservices, distributed systems, and cloud-native architectures.
3. Advanced proficiency in Python and Apache Spark, with a strong focus on ETL processing and scalable data engineering workflows.
4. In-depth technical knowledge of AWS data services, with hands-on experience implementing data pipelines using tools such as EMR, AWS Glue, AWS Lambda, Step Functions, API Gateway, and Athena.
5. Proven experience in designing and delivering lakehouse architectures, enabling unified analytics across structured and unstructured data.
A FEW POINTS TO NOTE:
Although these roles are listed as full-time, if you're part of a job share partnership, work reduced hours, or work flexibly in any other way, please still get in touch.
We will also offer reasonable adjustments for the selection process if required, so please do not hesitate to let us know.
DON'T MEET EVERY SINGLE REQUIREMENT?
Studies have shown that women and people who are disabled, LGBTQ+, neurodiverse, or from ethnic minority backgrounds are less likely to apply for jobs unless they meet every single qualification and criterion. We're committed to building a diverse, inclusive, and authentic workplace where everyone can be their best, so if you're excited about this role but your past experience doesn't align perfectly with every requirement on the Job Description, please apply anyway. You may just be the right candidate for this or other roles in our wider team.