You will be responsible for:
Designing, implementing, and maintaining scalable data pipelines and ETL processes
Developing and optimising databases and data storage solutions for structured and unstructured data
Collaborating with data scientists and analysts to deliver reliable, high-quality data for analytics and reporting
Ensuring data quality, integrity, and compliance with security and governance standards
Supporting the adoption of best practices for data engineering and contributing to technical decision-making
Candidates should demonstrate:
A BEng/BSc or Master's degree in Computer Science, Data Engineering, Mathematics, or a related discipline
Strong programming skills in languages such as Python, SQL, and Java or Scala
Experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra)
Expertise in building and maintaining ETL pipelines and data workflows
Familiarity with cloud data platforms (AWS, Azure, GCP) and data pipeline orchestration tools (Airflow, Prefect, etc.)
Understanding of data modelling, schema design, and performance optimisation
Experience with agile development methodologies, including Scrum and Kanban
Familiarity with version control tools such as Git
Reasonable Adjustments:
Respect and equality are core values for us. We are proud of the diverse and inclusive community we have built, and we welcome applications from people of all backgrounds and perspectives. Our success is driven by our people, united by a spirit of partnership to deliver the best resourcing solutions for our clients.
If you need any help or adjustments during the recruitment process for any reason, please let us know when you apply or talk to the recruiters directly so we can support you.