Luton/Hybrid
Job Purpose
With a major investment in Databricks, and with a large amount of interesting data, this is your chance to be part of an exciting transformation in the way we store, analyse and use data in a fast-paced organisation.
You will join as a Senior Data Engineer providing technical leadership to the Data Engineering team. You will play a major role in upholding big data best practices while building and maintaining reliable data pipelines and products across a range of exciting projects, bringing a wealth of technical experience in big data platforms combined with excellent development skills. This role will support the business on the journey from a greenfield big data estate to a mature, data-led organisation.
What you’ll need to do the role
· Develop robust, scalable data pipelines to serve the easyJet analyst and data science community.
· Build orchestration for data pipelines using tools such as Airflow, Jenkins and GitHub Actions.
· Apply highly competent, hands-on experience with relevant Data Engineering technologies, such as Databricks, Spark, the Spark API, Python, SQL Server and Scala.
· Help the business harness the power of data within easyJet, supporting them with insight, analytics and data science.
· Work in a fast-paced agile scrum environment with a release-within-sprint mentality.
· Coach and mentor the team to improve development standards.
· Work with Technical Architects to define patterns and standards for designs.
· Work with Business Analysts to deliver against requirements and realise business benefits.
· Build a documentation library and data catalogue for all data pipelines/products.
What you’ll get in return
· Competitive base salary
· Up to 20% bonus
· BAYE, SAYE & Performance share schemes
· Flexible benefits package
· Excellent staff travel benefits
Skills & Experience
· Technical ability: a high level of current technical competence in relevant technologies, and the ability to independently learn new technologies and techniques as our stack changes.
· Clear communication: communicates effectively in both written and verbal forms with technical and non-technical audiences alike.
· Complex problem-solving ability: structured, organised, process-driven and outcome-oriented; able to draw on past experience to inform future innovation.
· Significant experience designing and building data solutions on a cloud-based, big data distributed system.
· Significant experience with Python, and experience with modern software development and release engineering practices (e.g. TDD, CI/CD).
· Significant experience with Apache Spark or any other distributed data programming frameworks (e.g. Flink, Arrow, MapR).
· Significant experience with SQL – comfortable writing efficient SQL.
· Experience using enterprise scheduling tools (e.g. Apache Airflow, Spring DataFlow, Control-M).
· Experience with Linux and containerisation.