Data Engineer | Bristol (Hybrid) | £40,000 - £50,000 P.A | Permanent
Peaple Talent have partnered with an existing client in Bristol looking to recruit a Data Engineer.
The successful candidate will collaborate to build, enhance, and support both data-driven solutions and our client's analytics infrastructure, from initial design through to testing and ongoing development.
Key Responsibilities:
* Design, develop, and sustain high-performance data pipelines and infrastructure that are both scalable and aligned with company goals and technical frameworks.
* Participate in Agile team practices and contribute to essential technical workflows, including peer code reviews.
* Partner with other data engineers to exchange expertise and collaboratively deliver impactful data solutions.
* Stay current with emerging technologies and contribute to innovation through hands-on prototypes and exploratory projects.
* Actively support a culture of ongoing enhancement, identifying areas for process automation and operational efficiency.
* Translate complex engineering concepts into accessible insight for audiences without technical backgrounds.
* Diagnose and resolve intricate data challenges while fine-tuning existing systems to improve speed, reliability, and performance.
* Work closely with multidisciplinary teams to gather requirements and ensure solutions are tailored to evolving business priorities.
* Establish and maintain data quality controls, validation mechanisms, and monitoring tools to uphold data integrity and comply with governance protocols.
Key Experience Required:
* Hands-on expertise with SQL, Python, and PySpark, used to build, test, and maintain robust data pipelines for processing both structured and semi-structured datasets.
* Familiar with Agile delivery frameworks and with tools such as Azure DevOps for planning and tracking work.
* Solid grasp of data modelling principles and commonly used patterns in pipeline architecture and design.
* Confident using Git-based version control systems, including Azure DevOps or similar.
* Skilled in managing and scheduling data workflows using orchestration platforms like Apache Airflow.
* Experienced in building and optimising data warehouses on modern analytics platforms like Snowflake, Redshift, or Databricks.
* Familiar with visual or low-code data integration tools, including platforms such as SnapLogic or Talend.
* Hands-on experience working with APIs for data ingestion, as well as interacting with cloud infrastructure using APIs, SDKs, or command-line tools.
* Proficient in designing cloud-native data pipelines using services on platforms like AWS (including Lambda, S3, Glue, Redshift, Athena, Secrets Manager) or comparable cloud tools.