Requisition ID: 291201
Relocation Authorized: None
Telework Type: Part-Time Telework
Work Location: London
Extraordinary teams building inspiring projects:
Since 1898, we have helped customers complete more than 25,000 projects in 160 countries on all seven continents. These projects have created jobs, grown economies, improved the resiliency of the world's infrastructure, increased access to energy, resources, and vital services, and made the world a safer, cleaner place.
Differentiated by the quality of our people and our relentless drive to deliver the most successful outcomes, we align our capabilities to our customers' objectives to create a lasting positive impact. We serve the Infrastructure; Nuclear, Security and Environmental; Energy; Mining and Metals; and Manufacturing and Technology markets. Our services span from initial planning and investment through start-up and operations.
Core to Bechtel is our Vision, Values and Commitments. They are what we believe, what customers can expect, and how we deliver. Learn more about our extraordinary teams building inspiring projects in our Impact Report.
Job Summary:
The Infrastructure AI and Data program is a cornerstone initiative designed to transform how our Global Business Unit (GBU) manages, governs, and leverages data to drive innovation and operational excellence. As we scale digital capabilities across complex engineering and delivery environments, the ability to harness data effectively is critical to enabling advanced analytics, AI-driven insights, and seamless collaboration across diverse stakeholders.
This role combines technical leadership with hands-on data engineering, focusing on designing and optimizing data pipelines, improving performance, and contributing to data modeling and architecture decisions. The Data Engineer will work with the Data Solutions Architect and functional SMEs to deliver scalable, secure, and high-quality data solutions.
This position is designated as part-time telework per our global telework policy and may require at least three days of in-person attendance per week at the assigned office or project. Weekly in-person schedules will be determined by the individual and their supervisor, in consultation with functional or project leadership.
Major Responsibilities:
- Develop and maintain complex ETL/ELT pipelines to ingest and transform data from multiple sources into the UDP environment.
- Tune and optimize the performance of data workflows within Azure Databricks.
- Implement advanced data quality checks and validation routines to ensure the integrity and consistency of datasets.
- Collaborate with the Data Solutions Architect and functional SMEs to understand data requirements and deliver solutions aligned with business needs.
- Contribute to data modeling and architecture decisions to support analytics and reporting needs.
- Support the integration of structured and semi-structured data into the lakehouse architecture.
- Document processes and contribute to best practices for data engineering within the PIIM team.
- Mentor junior engineers and promote best practices in coding, testing, and documentation.
Education and Experience Requirements:
- Bachelor's degree in computer science or a related field.
- Experience in data engineering or a related role.
- Strong proficiency in SQL and advanced experience with Python or Scala for data processing.
- Hands-on experience with cloud-based data platforms (Azure preferred).
- Solid understanding of data modeling, ETL/ELT processes, and performance optimization techniques.
- Ability to lead technical discussions and provide guidance on data engineering best practices.
- Ability to troubleshoot and optimize data workflows.
Required Knowledge and Skills:
- Experience with Databricks or Spark-based environments, including optimization strategies.
- Familiarity with data governance tools and frameworks (e.g., Unity Catalog).
- E