We are looking for a skilled mid-level Data Engineer with a passion for building reliable and scalable data pipelines to power cutting-edge GenAI products.
The ideal candidate will have strong commercial experience in real-time data engineering and cloud technologies, and will be able to apply that expertise to business problems to generate value.
We currently work in an AWS, Snowflake, dbt, Looker, Python, Kinesis and Airflow stack and are building out our real-time data streaming capabilities using Kafka. You should be comfortable with these or comparable technologies.
As an individual contributor, you will take ownership of well-defined projects, collaborate with senior colleagues on architectural decisions, and contribute to improving data engineering standards, documentation, and team practices.
The successful candidate will join our cross-functional development teams and actively participate in our agile delivery process. You will also be supported by our dynamic Data & AI team and will benefit from talking data with our other data engineers, data scientists, ML engineers, and analytics engineers.
Responsibilities
1. Contribute to our data engineering roadmap.
2. Collaborate with senior data engineers on data architecture plans.
3. Manage Kafka in production.
4. Collaborate with cross-functional teams to develop and implement robust, scalable solutions.
5. Support the...