About the Role
The Data Engineer will play a crucial role in designing and implementing robust data pipelines, ensuring data integrity and accessibility across platforms.
Required Skills
* Expertise in Python, PySpark, and SQL
* Strong experience designing, implementing, and debugging ETL pipelines
* In-depth knowledge of Apache Spark and Apache Airflow
* Extensive knowledge of AWS services, including experience designing data pipelines with cloud-native AWS offerings
* Experience deploying AWS resources using Terraform
* Hands-on experience setting up CI/CD workflows with GitHub Actions
Preferred Skills
* Experience with additional cloud platforms
* Familiarity with data governance and compliance standards
Pay range and compensation package
£70,000 per annum. Fully remote (UK-based).