We have an exciting opportunity for a Data Engineer to join the team in Newcastle or London. You will work closely with business and technology teams across Wealth Management Europe (WME) to support the ongoing maintenance and evolution of the Data Lakehouse platform. The primary focus will be the ingestion and modelling of new data, and the evolution of the platform itself, utilising new technologies to improve the performance and accuracy of the data.
Responsibilities
* Develop and maintain the Data Lakehouse platform infrastructure using Microsoft Azure, including Databricks and Data Factory.
* Manage data pipelines through all stages from data acquisition to consumption, creating, maintaining and optimising them as workloads move from development to production.
* Create and modify Notebooks, Functions and Workflows to support efficient reporting and analytics.
* Develop and maintain Dev, UAT and Production environments ensuring consistency.
* Use innovative tools, techniques and architectures to automate repeatable data preparation and integration tasks, reducing manual effort and errors.
* Use GitHub (or other version control tools) and perform data and schema comparisons via Visual Studio.
* Champion DevOps processes, ensuring latest techniques are used and release processes are followed; ensure productionised code is documented, reviewed and unit tested.
* Identify and model internal process improvements to automate manual processes and optimise data delivery for scalability as part of the end‑to‑end data lifecycle.
* Stay curious and knowledgeable about new data initiatives, applying domain understanding to address new requirements and proposing innovative ingestion, preparation, integration and operationalisation techniques.
What you need to succeed
Must‑have
* Proven experience with Data Engineering and Data Management architectures such as Data Warehouse, Data Lake, Data Hub and supporting processes (Data Integration, Governance, Metadata Management).
* Proven experience working in cross‑functional teams and collaborating with business stakeholders on departmental or multi‑departmental data initiatives.
* Strong experience with relational database programming languages (SQL, T‑SQL).
* Experience on a cloud data platform such as Databricks or Snowflake.
* Adept in agile methodologies and able to apply DevOps and DataOps principles to data pipelines.
* Basic experience working with data governance, data quality and data security teams.
* Good understanding of datasets, Data Lakehouses, modelling, database design and programming.
* Knowledge of Data Lakehouse techniques, solutions and methodologies.
* Strong experience supporting and working with cross‑functional teams in a dynamic business environment.
* Highly creative and collaborative, working closely with business and IT teams to define problems, refine requirements, and design and develop data deliverables.
Nice-to-have
* Knowledge of Terraform or other Infrastructure‑as‑code tools.
* Experience with advanced analytics scripting languages such as Python, Java, C++, Scala, R.
* Experience using automated unit testing methodologies.
What is in it for you
* Access to leaders who support development through coaching and management opportunities.
* Opportunity to work with the best in the field.
* Ability to make a lasting impact.
* Collaboration in a dynamic, progressive, and high‑performing team.
Inclusion and Equal Opportunity Employment
At RBC, we believe an inclusive workplace that celebrates diverse perspectives is core to our growth. We maintain a workplace where employees feel supported to perform at their best, collaborate effectively, drive innovation, and grow professionally. RBC strives to create respect, belonging, and opportunity for all.