Req ID:
Competitive salary. UK/Glasgow: hybrid working model (2-3 days on site)
At NTT DATA, we know that with the right people on board, anything is possible. The quality, integrity, and commitment of our employees are key factors in our company's growth, market presence, and our ability to help our clients stay a step ahead of the competition. By hiring the best people and helping them grow both professionally and personally, we ensure a bright future for NTT DATA and for the people who work here.
NTT DATA is currently looking for a Data Engineer for our growing team in the UK.
Overview:
NTT DATA is seeking a highly skilled Data Engineer with 4+ years of experience to join our team and support a strategic banking client across a range of data transformation activities.
Key Responsibilities:
- Collaborating with cross-functional teams to understand data requirements and design efficient, scalable, and reliable ETL processes using Python and Databricks.
- Developing and deploying ETL jobs that extract data from various sources and transform it to meet business needs.
- Taking ownership of the end-to-end engineering lifecycle, including data extraction, cleansing, transformation, and loading, ensuring accuracy and consistency.
- Creating and managing data pipelines, ensuring proper error handling, monitoring, and performance optimization.
- Working in an agile environment, participating in sprint planning, daily stand-ups, and retrospectives.
- Conducting code reviews, providing constructive feedback, and enforcing coding standards to maintain high code quality.
- Developing and maintaining tooling and automation scripts to streamline repetitive tasks.
- Implementing unit, integration, and other testing methodologies to ensure the reliability of ETL processes.
- Utilizing REST APIs and other integration techniques to connect various data sources.
- Maintaining documentation, including data flow diagrams, technical specifications, and process descriptions.
- Designing and implementing tailored data solutions to meet customer needs and use cases, spanning streaming, data lakes, analytics, and beyond within a dynamically evolving technical stack.
- Collaborating seamlessly across diverse technical stacks, including Databricks and Snowflake.
- Developing various components in Python as part of a unified data pipeline framework.
- Contributing to the establishment of best practices for the optimal and efficient use of data across various on-prem and cloud platforms.
- Assisting with the testing and deployment of our data pipeline framework using standard testing frameworks and CI/CD tooling.
- Monitoring the performance of queries and data loads, and performing tuning as necessary.
- Providing assistance and guidance during QA and UAT phases to quickly confirm the validity of potential issues and to determine the root cause and best resolution of verified issues.
- Adhering to Agile practices throughout the solution development process.
- Designing, building, and deploying databases and data stores to support organizational requirements.

Skills / Qualifications:
- 4+ years of experience developing data pipelines and data warehousing solutions using Python and libraries such as Pandas, NumPy, and PySpark.
- 3+ years of hands-on experience with cloud services, especially Databricks, for building and managing scalable data pipelines.
- 3+ years of proficiency working with Snowflake or similar cloud-based data warehousing solutions.
- 3+ years of experience in data development and solutions in highly complex data environments with large data volumes.
- Solid understanding of ETL principles, data modelling, data warehousing concepts, and data integration best practices.
- Familiarity with agile methodologies and the ability to work collaboratively in a fast-paced, dynamic environment.
- Experience with code versioning tools (e.g., Git).
- Knowledge of Linux operating systems.
- Familiarity with REST APIs and integration techniques.