We are looking for an experienced Data Engineer to design, develop, and maintain robust data pipelines for Deposits and Treasury applications. You will work with large-scale datasets to enable financial reporting and analytics use cases, collaborating closely with business stakeholders and cross-functional teams.
Responsibilities:
* Design, develop, and maintain data pipelines for Deposits and Treasury applications.
* Work with large-scale structured and unstructured datasets using Apache Spark / PySpark.
* Develop high-quality, reusable, and efficient code in Python.
* Collaborate with business stakeholders to understand data requirements related to treasury products, liquidity, and deposits.
* Build and optimize ETL/ELT processes for ingestion, transformation, and integration.
* Support data modelling for financial reporting and analytics.
* Create and maintain dashboards and visualizations using Amazon QuickSight, Power BI, Tableau (or similar).
* Ensure data quality, governance, and compliance with financial regulations.
* Troubleshoot performance issues and optimize workflows.
* Work closely with analysts, architects, and product owners in an Agile environment.
Mandatory Skills:
* 6+ years of experience in Data Engineering.
* Strong background in building and maintaining data pipelines for banking data domains.
* Hands-on expertise with Apache Spark / PySpark for large-scale processing.
* Strong Python development skills with emphasis on reusable, efficient code.
* Solid ETL/ELT engineering experience (ingestion, transformation, integration).
* Experience supporting data modelling for reporting/analytics use cases.
* Exposure to BI/dashboarding tools (Amazon QuickSight, Power BI, Tableau).
* Practical experience with data quality, governance, and regulated environments.
Language & Seniority:
* English: Advanced (C1).
* Senior level.