It's fun to work in a company where people truly BELIEVE in what they are doing!
Data Architect - Azure Data Engineering
Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets. An ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work® Institute and recognized as a ‘Cool Vendor’ and a ‘Vendor to Watch’ by Gartner.
Please visit our website for more information about Fractal.
Location: London
Core Technical Responsibilities
1. Design and build end-to-end data pipelines (batch and near real-time) using PySpark, Databricks, and Azure Data Platform (ADF, ADLS, Synapse)
2. Be hands-on in development, debugging, optimization, and production support of data pipelines
3. Work with or extend existing/proprietary ETL frameworks (e.g., Mar's Simpel or similar) and improve performance and reliability
4. Implement data modeling, transformation, and orchestration patterns aligned with best practices
5. Apply data engineering fundamentals including partitioning, indexing, caching, cost optimization, and performance tuning
6. Collaborate with upstream and downstream teams to ensure data quality, reliability, and SLA adherence
Architecture & Design
7. Contribute to the design of cloud-native data architectures covering ingestion, processing, storage, and consumption
8. Translate business and analytical requirements into practical, scalable data solutions
9. Support data governance practices including metadata, lineage, data quality checks, and access controls
10. Work within hybrid environments (on-prem to cloud) and support modernization initiatives
11. Understand and apply data mesh concepts where relevant (domain ownership, reusable data products, basic contracts)
12. Evaluate tools and frameworks with a build vs. buy mindset, recommending pragmatic solutions
Team & Delivery Responsibilities
13. Act as a technical anchor for a data engineering team
14. Provide technical guidance, code reviews, and mentoring to engineers
15. Own delivery for assigned data products or pipelines — from design through deployment
16. Collaborate with product owners, analysts, and architects to clarify requirements and priorities
Stakeholder & Communication
17. Engage with business and analytics stakeholders to understand data needs and translate them into technical solutions
18. Clearly communicate technical designs and trade-offs to both technical and non-technical audiences
19. Escalate risks and propose mitigation strategies proactively
20. Support documentation of architecture, pipelines, and operational processes
Good to Have
21. Exposure to AI/ML data workflows (feature engineering, model inputs, MLOps basics)
22. Awareness of LLMs / Agentic AI architectures from a data platform perspective
23. Experience with other platforms such as AWS, GCP, Snowflake, BigQuery, Redshift
24. Familiarity with data governance or catalog tools (DataHub, Collibra, dbt, etc.)
25. Experience working in CPG, Retail, Supply Chain, or similar domains
If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!
Not the right fit? Let us know you're interested in a future opportunity by clicking in the top-right corner of the page, or create an account to set up email alerts for new job postings that match your interests!