Your new company
Working for a renowned financial services organisation
Your new role
Seeking a Data Engineer to help design and maintain scalable batch and near-real-time ingestion pipelines, modernise legacy ETL/ELT processes into Azure and Snowflake, and implement best-practice patterns such as CDC, incremental loading, schema evolution, and automated ingestion frameworks. You'll build cloud-native solutions using Azure Data Factory/Synapse, Databricks/Spark, ADLS Gen2, and Snowflake capabilities including stages, file formats, COPY INTO, and Streams/Tasks to support raw-to-curated data modelling.
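To give a flavour of the patterns involved, here is a minimal sketch of a raw-to-curated ingestion step using the snowflake-connector-python package. Every object name (RAW_ORDERS, ORDERS_STREAM, CURATE_ORDERS, INGEST_WH) and all credentials are hypothetical placeholders, not details of the organisation's actual environment.

```python
import snowflake.connector

# All database, schema, stage, and table names below are illustrative only.
INGEST_STEPS = [
    # 1. Bulk-load newly arrived files from a stage into a raw landing table.
    """
    COPY INTO RAW_DB.LANDING.RAW_ORDERS
    FROM @RAW_DB.LANDING.ORDERS_STAGE
    FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
    ON_ERROR = 'ABORT_STATEMENT'
    """,
    # 2. A stream records changes on the raw table, enabling incremental loads.
    """
    CREATE STREAM IF NOT EXISTS RAW_DB.LANDING.ORDERS_STREAM
        ON TABLE RAW_DB.LANDING.RAW_ORDERS
    """,
    # 3. A task merges stream deltas into the curated layer on a schedule.
    #    (New tasks are created suspended; ALTER TASK ... RESUME enables them.)
    """
    CREATE TASK IF NOT EXISTS RAW_DB.LANDING.CURATE_ORDERS
        WAREHOUSE = INGEST_WH
        SCHEDULE = '15 MINUTE'
        WHEN SYSTEM$STREAM_HAS_DATA('RAW_DB.LANDING.ORDERS_STREAM')
    AS
        INSERT INTO CURATED_DB.CORE.ORDERS
        SELECT ORDER_ID, AMOUNT, LOADED_AT FROM RAW_DB.LANDING.ORDERS_STREAM
    """,
]

def run_ingest() -> None:
    """Run each step in order; connection details are placeholders."""
    conn = snowflake.connector.connect(
        account="my_account",   # hypothetical account identifier
        user="ingest_user",     # hypothetical service account
        password="***",         # source from a secrets manager in practice
        warehouse="INGEST_WH",
    )
    cur = conn.cursor()
    try:
        for sql in INGEST_STEPS:
            cur.execute(sql)
    finally:
        cur.close()
        conn.close()

if __name__ == "__main__":
    run_ingest()
```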
The role involves creating reusable components and Python libraries to accelerate delivery across teams, enforcing data quality through validation, observability, and robust pipeline design, and ensuring strong security, governance, and documentation standards. Collaboration within agile workflows, including CI/CD, code reviews, and iterative planning, is also key to delivering consistent, reliable, and secure data solutions.
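The reusable Python utilities mentioned above might look something like the following sketch of a simple data-quality validator; the Check abstraction and the rule names are invented for illustration, not taken from the team's actual libraries.

```python
from dataclasses import dataclass
from typing import Any, Callable, Iterable, Mapping

Row = Mapping[str, Any]

@dataclass
class Check:
    """A named data-quality rule applied to each row."""
    name: str
    predicate: Callable[[Row], bool]

def validate(rows: Iterable[Row], checks: list[Check]) -> dict[str, int]:
    """Count failures per check so a pipeline can fail fast or emit metrics."""
    failures = {check.name: 0 for check in checks}
    for row in rows:
        for check in checks:
            if not check.predicate(row):
                failures[check.name] += 1
    return failures

# Example usage with two hypothetical rules:
checks = [
    Check("order_id_present", lambda r: r.get("order_id") is not None),
    Check("amount_non_negative", lambda r: (r.get("amount") or 0) >= 0),
]
sample = [{"order_id": 1, "amount": 10.0}, {"order_id": None, "amount": -5.0}]
print(validate(sample, checks))
# {'order_id_present': 1, 'amount_non_negative': 1}
```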
What you'll need to succeed
1. Strong hands-on data engineering experience, with a particular focus on data ingestion
2. Experience building production pipelines using Azure Data Factory, Databricks, and Synapse
3. Solid SQL skills and experience working with modern cloud data warehouses, ideally Snowflake
4. Proficiency in Python for data processing, automation, and pipeline utilities
5. Good understanding of data lake/lakehouse concepts and ingestion patterns
6. Infrastructure-as-Code exposure (Terraform) and CI/CD (Azure DevOps)
7. Able to prototype quickly while adhering to Group standards and controls
8. Familiarity with orchestration frameworks (Dagster) – desirable
9. Energy commodity trading experience is a real advantage
What you'll get in return
Flexible working options available.