Overview
As a Senior Data Engineer, you will develop and maintain our central data ecosystem, ensuring we extract maximum strategic value from our global data assets. You will be responsible for building, scaling, and governing high-value data products that serve as the "single source of truth" for team.blue and its 60+ sub-brands. This role transforms raw data into a sophisticated semantic layer that powers business intelligence, operational efficiency, and AI initiatives.
Key Responsibilities
* Data Product Innovation: Design and deliver high-quality, scalable data products that provide a unified source of truth across a complex ecosystem of 60+ global sub-brands.
* Pipeline Engineering: Build and optimize robust ETL/ELT pipelines to ingest and transform massive volumes of structured and unstructured data, ensuring high availability and low latency for downstream consumers.
* Architectural Leadership: Lead the design of secure, cost-efficient data architectures. Establish and enforce best practices in data modeling, orchestration, and observability.
* Governance & Integrity: Implement rigorous data governance frameworks, including lineage tracking, metadata management, and quality controls to ensure a reliable semantic layer for AI and ML use cases.
* Cross-Functional Collaboration: Act as a strategic "sparring partner" for Analytics, ML, and AI teams, translating complex business requirements into high-performing technical solutions.
* Platform Optimization: Manage and tune data platforms for peak performance, focusing on indexing strategies, query optimization, and schema evolution.
* Mentorship: Elevate the collective expertise of the engineering team by mentoring junior members and fostering a culture of technical excellence.
Your Strengths
* Strategic Thinker: You lead projects from ideation to actionable solution, balancing technical debt with rapid delivery.
* Effective Communicator: You translate complex technical concepts into clear business value for non-technical stakeholders.
* Problem Solver: You thrive in fast-paced environments, managing multiple high-priority workstreams with meticulous attention to detail.
Technical Skills Required
* Core Ecosystem: Deep hands-on experience with Databricks (PySpark, Delta Lake, Unity Catalog).
* Advanced SQL & Modeling: Expert-level SQL (optimization, indexing) and data modeling techniques (dimensional modeling, star schema, snowflake schema).
* Cloud & Modern Data Stack: Proficiency with at least one major cloud provider (AWS, GCP, etc.) and modern data-stack tools such as Airflow for orchestration and dbt for transformation.
* Database Diversity: Experience with various RDBMS (PostgreSQL, SQL Server, Oracle, etc.).
* DevOps & Engineering: Proficiency with Docker/Kubernetes and CI/CD practices.
* Governance: Experience with data versioning, schema evolution, and distributed metadata management.
Education and Work Experience
* Experience: 7+ years in data engineering or data management, preferably in a high-growth or multi-brand environment.
* Education: Advanced degree (Master's or PhD) in Computer Science, STEM, or a related quantitative field.
* Track Record: Demonstrable history of deploying stable, high-performance data solutions that drive measurable business value.
Right to Work
At any stage of the process, please be prepared to provide proof of your eligibility to work in the country in which the role is based. Unfortunately, we are unable to offer relocation packages or visa sponsorship.
Diversity & Inclusion
Everyone is welcome here. Diversity & Inclusion are at our core. We value respect, openness, and trusted collaboration. We do not tolerate intolerance.