Requirements
Must have:
- Active DV (Developed Vetting) security clearance
- Strong Python proficiency for data engineering and pipeline development
- Linux systems administration and performance optimisation experience
- Hands-on with Python orchestration frameworks (e.g., Dagster, Apache Airflow)
- Serverless AWS experience (e.g., Lambda, Step Functions, Athena, Glue)
- Docker containerisation - building/maintaining production-grade containers
- Familiarity with Kubernetes for container orchestration and scaling
- Knowledge of security, encryption, and compliance standards for secure networks
- Experience delivering data platforms in high-security environments
Responsibilities:
- Design, manage, and optimise data ingestion pipelines and orchestration workflows
- Develop and maintain end-to-end bespoke data projects from requirements to production
- Monitor system performance and ensure high uptime across Linux-based infrastructure
- Collaborate closely with data scientists, analysts, product owners, and engineering teams
- Implement scalable data processing solutions handling complex, high-volume datasets
- Drive continuous improvement in pipeline reliability, performance, and observability
- Support cross-functional delivery of mission-critical intelligence platforms
Company:
We are seeking a detail-oriented and analytical Senior Data Engineer to join our high-performing, multi-disciplinary data team. In this fully on-site role, you will play an essential part in building and maintaining robust data pipelines and bespoke applications, ensuring that our data-driven projects deliver against demanding customer timelines and requirements. By joining our team, you will work in a dynamic environment that values continuous improvement and collaboration.