Join our client in embarking on an ambitious data transformation journey using Databricks, guided by best practice data governance and architectural principles. The role is a 6‑month contract with potential extension, full‑time, hybrid with one day per week in Nottingham.
Accountabilities
* Develop and maintain scalable, efficient data pipelines within Databricks, continuously evolving them as requirements and technologies change.
* Build and manage an enterprise data model within Databricks.
* Integrate new data sources into the platform using batch and streaming processes, adhering to SLAs.
* Create and maintain documentation for data pipelines and associated systems, following security and monitoring protocols.
* Ensure data quality and reliability processes are effective, maintaining trust in the data.
* Take ownership of complex data engineering projects and develop appropriate solutions in accordance with business requirements.
* Work closely with stakeholders and manage their requirements.
* Actively coach and mentor others in the team and foster a culture of innovation and peer review to ensure best practice.
Knowledge and Skills
* Extensive experience with Python preferred, including advanced concepts such as decorators, protocols, functools, context managers, and comprehensions.
* Strong understanding of SQL, database design, and data architecture.
* Experience with Databricks and/or Spark.
* Knowledgeable in data governance, data cataloguing, data quality principles, and related tools.
* Skilled in data extraction, joining, and aggregation tasks, especially with big data and real‑time data using Spark.
* Capable of performing data cleansing operations to prepare data for analysis, including transforming data into useful formats.
* Understanding of data storage concepts and logical data structures, such as data warehousing.
* Ability to write repeatable, production‑quality code for data pipelines, utilizing templating and parameterization where needed.
* Can make data pipeline design recommendations based on business requirements.
* Experience with data migration is a plus.
* Open to new ways of working and new technologies.
* Self‑motivated with the ability to set goals and take initiative.
* Driven to troubleshoot, deconstruct problems, and build effective solutions.
* Experience with Git and version control.
* Experience working with large, legacy codebases.
* Understanding of unit and integration testing.
* Understanding and experience with CI/CD and general software development best practices.
* A strong attention to detail and a curiosity about the data you will be working with.
* A strong understanding of Linux‑based tooling and concepts.
* Knowledge and experience of Amazon Web Services is essential.
Rullion celebrates and supports diversity and is committed to ensuring equal opportunities for both employees and applicants.