Location: London
Contract Type: Contract
Working Model: Hybrid
Overview:
We are seeking an experienced Data Architect to join our growing data team and lead the design and implementation of scalable, secure, and high-performance data solutions. You’ll play a key role in architecting modern data platforms using Snowflake, SQL, Python, and leading cloud technologies to support advanced analytics, reporting, and machine learning initiatives across the business.
Key Responsibilities:
* Design and maintain end-to-end data architectures, data models, and pipelines in Snowflake and cloud platforms (AWS, Azure, or GCP).
* Develop and optimize scalable ELT/ETL processes using SQL and Python.
* Define data governance, metadata management, and security best practices.
* Collaborate with data engineers, analysts, product managers, and stakeholders to understand data needs and translate them into robust architectural solutions.
* Oversee data quality, lineage, and observability initiatives.
* Recommend and implement performance tuning for large-scale data sets.
* Ensure platform scalability, cost-efficiency, and system reliability.
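To illustrate the kind of ELT work described above, here is a minimal sketch of a transformation step in Python. The record schema, field names, and validation rules are hypothetical; in practice this logic would run inside an orchestrated pipeline (e.g., Airflow or dbt) before loading to Snowflake:

```python
from datetime import datetime, timezone

def transform_orders(raw_rows):
    """Normalize raw order records before loading to the warehouse.

    Expects dicts with 'order_id', 'amount', and 'ordered_at' keys
    (hypothetical schema); silently skips rows that fail validation,
    which a production pipeline would instead quarantine for review.
    """
    clean = []
    for row in raw_rows:
        try:
            clean.append({
                "order_id": str(row["order_id"]).strip(),
                "amount": round(float(row["amount"]), 2),
                # Normalize all timestamps to UTC ISO-8601 strings.
                "ordered_at": datetime.fromisoformat(row["ordered_at"])
                              .astimezone(timezone.utc).isoformat(),
            })
        except (KeyError, ValueError, TypeError):
            continue  # malformed row: skip (or quarantine in production)
    return clean

rows = [
    {"order_id": " 1001 ", "amount": "25.5",
     "ordered_at": "2024-05-01T12:00:00+01:00"},
    {"order_id": "1002", "amount": "bad"},  # malformed: dropped
]
print(transform_orders(rows))
```

The same normalize-then-load pattern scales naturally to a Snowflake `COPY INTO` or stream-based ingestion step, with the validation logic kept testable in isolation.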
Required Skills & Experience:
* Proven experience as a Data Architect or Senior Data Engineer working on cloud-native data platforms.
* Strong hands-on experience with Snowflake: data modeling, performance tuning, security configuration, and data sharing.
* Proficiency in SQL for complex querying, optimization, and stored procedures.
* Strong coding skills in Python for data transformation, scripting, and automation.
* Experience with cloud platforms such as AWS (e.g., S3, Redshift, Lambda), Azure (e.g., Data Factory, Synapse), or GCP (e.g., BigQuery, Cloud Functions).
* Familiarity with data orchestration tools (e.g., Airflow, dbt) and version control (Git).
* Solid understanding of data governance, security, and compliance frameworks.
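By way of illustration of the data-quality and observability work mentioned above, a minimal sketch of a column-level check in Python. The column names and thresholds are hypothetical; a real platform would wire such checks into pipeline orchestration and alerting:

```python
def null_rate(rows, column):
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)

def check_quality(rows, max_null_rate=0.05,
                  required_columns=("id", "email")):
    """Return a list of failed checks; an empty list means the batch passes."""
    failures = []
    for col in required_columns:
        rate = null_rate(rows, col)
        if rate > max_null_rate:
            failures.append(
                f"{col}: null rate {rate:.1%} exceeds {max_null_rate:.0%}"
            )
    return failures

batch = [{"id": 1, "email": "a@example.com"}, {"id": 2, "email": None}]
print(check_quality(batch))  # email null rate exceeds the threshold
```

Checks like this are the building blocks behind the lineage and observability tooling the role oversees; frameworks such as dbt tests encode the same idea declaratively.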
Nice to Have:
* Experience with data lake and lakehouse architectures (e.g., Delta Lake).
* Familiarity with BI/visualization tools (Tableau, Power BI, Looker).
* Knowledge of streaming data tools (Kafka, Kinesis).
* Background in supporting ML/AI pipelines or data science environments.