Job Description
The Company

This business is a rail software and consulting company with a growing team and a solid foundation of project-based revenue. It works with leading organisations across the UK rail industry, helping them harness data to solve complex operational challenges.

Data Engineers are key to this mission, building robust data infrastructure and tooling that powers insights, analytics, and software products used across the rail network.

The Role

As a Data Engineer, you'll be part of a collaborative technical team, working across the data lifecycle: from designing ETL pipelines and integrating real-time data streams, to developing APIs and backend systems that deliver rail data securely and reliably.

You'll work closely with engineers, consultants, and project managers to translate real-world rail problems into scalable technical solutions. This role sits at the intersection of software engineering, data architecture, and delivery.

Key Responsibilities

Data Engineering & Infrastructure
• Design and implement robust data pipelines (batch and real-time) for ingesting, transforming, and serving rail-related datasets.
• Develop and maintain data APIs and services to support analytics, software features, and reporting tools.
• Build data models and storage solutions that balance performance, cost, and scalability.
• Contribute to codebases using modern data stack technologies and cloud platforms (e.g., Azure, AWS).

Collaborative Delivery
• Work with domain consultants and delivery leads to understand client needs and define data solutions.
• Participate in agile delivery practices, including sprint planning, reviews, and retrospectives.
• Help shape end-to-end solutions, from ingestion and transformation through to client-facing features and reporting.

Best Practices & Growth
• Write clean, well-documented, and tested code following engineering standards.
• Participate in design reviews, code reviews, and collaborative development sessions.
• Stay up to date with new tools and trends in the data engineering space.
• Contribute to internal learning sessions, tech talks, and shared documentation.

The Candidate

You might be a good fit if you have experience with:
• Building ETL/ELT pipelines using tools like Kafka, dbt, or custom frameworks.
• Working with structured and unstructured data at scale.
• Backend development in Python (or similar), and familiarity with data APIs.
• Cloud data platforms (e.g., AWS Redshift, Azure Synapse).
• SQL and database design for analytics, reporting, and product use.
• Agile collaboration with cross-functional teams.

You don't need experience in rail, just curiosity and a willingness to learn the domain.