Overview
Primus is partnering with a leading Financial Services client embarking on a greenfield data transformation programme. The client's current processes offer limited digital customer interaction, and the vision is to modernise them by:
* Building a modern data platform in Databricks.
* Creating a single customer view across the organisation.
* Enabling new client-facing digital services through real-time and batch data pipelines.
You will join a growing team of engineers and architects, with strong autonomy and ownership. This is a high-value greenfield initiative for the business, directly impacting customer experience and long-term data strategy.
Key Responsibilities
* Design and build scalable data pipelines and transformation logic in Databricks.
* Implement and maintain Delta Lake physical models and relational data models.
* Contribute to design and coding standards, working closely with architects.
* Develop and maintain Python packages and libraries to support engineering work.
* Build and run automated testing frameworks (e.g. PyTest).
* Support CI/CD pipelines and DevOps best practices.
* Collaborate with BAs on source-to-target mapping and build new data model components.
Key Skills & Experience
* PySpark and SparkSQL.
* Strong knowledge of relational database modelling.
* Experience designing and implementing in Databricks (DBX notebooks, Delta Lake).
* ADF or Synapse pipelines for orchestration.
* Familiarity with CI/CD and DevOps principles.
* Data Vault 2.0.
* Data Governance & Quality tools (e.g. Great Expectations, Collibra).
* Terraform and Infrastructure as Code.
* Experience with DLT / Lakeflow Declarative Pipelines.
Seniority level
* Mid-Senior level
Employment type
* Contract
Job function
* Information Technology