Your new company
Working for a renowned financial services organisation
Your new role
We are seeking a Data Engineer to support the replacement of a legacy ETL tool with a modern Apache Spark based data platform. This is a hands-on engineering role focused on building and supporting Spark jobs, with an emphasis on performance, reliability, and scalability.
The role is focused on building high-performance Apache Spark jobs, with a strong emphasis on performance optimisation. Working in containerised environments using Kubernetes is a key element, as is experience across Python, Scala, and Java.
The role sits within a small Agile delivery team of four engineers (two onshore and two in Shenzhen), working closely with a Senior Data Engineer. You will be responsible for development work, sprint delivery, demos, documentation, and stakeholder engagement. This position suits a mid-to-senior-level engineer with strong Spark development experience; it does not carry design, infrastructure, or management responsibilities.
What you'll need to succeed
1. Strong hands-on experience with Apache Spark, including writing and tuning Spark jobs; PySpark development experience.
2. Experience with Airflow and SQL.
3. Strong experience working with containerised environments using Kubernetes.
4. Experience programming in Python or Scala.
5. Experien...