My client is a rapidly scaling AI company transforming global debt markets. Their platform centralises proprietary credit data, deep analysis, and high-value workflows for 300+ leading financial institutions worldwide. The company has strong product-market fit and is actively expanding internationally.
The Role
A senior data engineering position with real ownership of the pipelines and infrastructure powering a data-heavy financial platform. You'll work closely with data scientists, backend engineers, and product teams to keep data reliable, scalable, and fit for purpose at institutional scale.
What You'll Be Doing
* Designing and building scalable data pipelines and ETL/ELT workflows
* Maintaining and evolving the data warehouse and underlying data models
* Ensuring data quality, reliability, and observability across the platform
* Supporting data scientists and analysts with the infrastructure they need to work effectively
* Contributing to data architecture decisions as the platform scales internationally
What They're Looking For
* 5+ years in data engineering
* Strong SQL and data modelling fundamentals — these matter more than any specific tool
* Experience with pipeline orchestration (Airflow, Prefect, or similar)
* Cloud data infrastructure — AWS, GCP, or Azure
* Comfortable with transformation tooling (dbt or similar)
* Exposure to distributed processing (Spark, Databricks) is a plus
* Scripting ability in Python or similar
* High standards, collaborative mindset, and ownership mentality
What You Get
* Ownership of data infrastructure on a platform used by 300+ major financial institutions
* A genuinely scaling company with strong product-market fit and active international expansion
* Senior scope with real influence over how the data platform evolves