Your new role
As a Data Engineer, you’ll design, build, and maintain the systems that collect, store, process, and analyse large datasets across the organisation. You’ll develop robust data pipelines, engineer data warehouses and data lakes, and ensure data is accurate, reliable, and secure. You’ll work across distributed systems, performance optimisation, and cloud‑based data solutions to support current and future analytical needs.
You will be responsible for:
1. Building and maintaining data pipelines that deliver durable, consistent, and high‑quality data
2. Designing and implementing data warehouses and data lakes suited to large volumes and high‑velocity workloads
3. Ensuring data platforms meet security, accessibility, and performance standards
4. Supporting production systems and optimising performance across distributed environments
What you'll need to succeed
1. Strong hands‑on experience with Python, PySpark, and SQL
2. Solid proficiency in Core Java (Collections, Concurrency, Memory Management)
3. Skilled in version control tools such as Git, GitLab, or Bitbucket
4. Strong background in performance tuning, profiling, and resolving issues in distributed systems
5. Experience working in Agile teams and collaborative engineering environments
6. Experience with AWS is highly advantageous
What you'll get in return
This is a 6-month interim opportunity paying up to £395 per day, with hybrid working of 2 days per week on-site in Glasgow.