Salary: £85,000 - £95,000 per year

Requirements:
- Proven track record building scalable data pipelines (batch and streaming) in production
- Expert Python, PySpark, Celery, and RabbitMQ skills; deep experience with the AWS data stack (Glue, OpenSearch, RDS)
- Expert SQL skills, with experience in both transactional RDBMS and distributed systems
- Hands-on with Lakehouse technologies (Apache Iceberg, S3 Tables, StarRocks)
- Strong grasp of data governance, schema design, and quality frameworks
- Comfortable leading infrastructure decisions and collaborating across distributed teams

Responsibilities:
- Collaborate with product managers and business stakeholders to understand complex business requirements and translate them into well-designed, maintainable solutions
- Ensure data quality and reliability by implementing robust data quality checks, monitoring, and alerting so that all data pipelines remain accurate and timely
- Create data governance policies and develop data models and schemas optimized for analytical workloads
- Influence the direction of key infrastructure and framework choices for data pipelining and data management
- Manage complex initiatives by setting project priorities, deadlines, and deliverables
- Collaborate effectively with distributed team members across multiple time zones, including offshore development teams

Technologies: AWS, OpenSearch, Celery, Python, PySpark, RabbitMQ, SQL, Cloud

More: We are a fast-growing and exciting business based in Central London, offering a mainly remote role for a Senior Data Engineer. We are looking for an experienced Data Developer who is not only technically skilled but also a good people person, capable of working with client-facing teams and mentoring junior members across Europe. As our company continues to grow, there will be opportunities for upward mobility in your career. We provide a supportive environment where your contributions matter.

Last updated: week 4 of 2026