Client:
Arrow Global Group
Location:
Manchester, United Kingdom
Job Category:
Other
EU work permit required:
Yes
Job Reference:
baab56316b42
Job Views:
3
Posted:
19.05.2025
Expiry Date:
03.07.2025
Job Description:
Our Data Engineer will help build and scale the data infrastructure that powers our AI products. This role is hands-on and technically deep: ideal for someone who cares about data quality, robustness, and automation. You'll work closely with AI engineers to design pipelines that do more than move data; they clean, enrich, and understand it, increasingly using large language models and agents to automate complex steps in the process.
About the Team
This role sits within a flat-structured, best-idea-wins team where engineers shape product direction. We operate a supportive culture that values ownership: we want people who take responsibility but aren't afraid to ask for help where needed. Whilst our offices and extended teams are based in Manchester and London, we also offer flexibility to work from anywhere in the UK for this role, though we're Europe-focused and love getting together for hackathons and team problem-solving when it matters.
About the role
* Building and maintaining data pipelines in Python, with a focus on reliability, transparency, and scale.
* Using LLMs to assist with data cleansing, enrichment, classification, and contextual tagging.
* Experimenting with AI agents to automate complex research tasks and structured data extraction.
* Working with product and AI engineering teams to feed trustworthy data into fast-moving prototypes.
* Designing workflows that transform noisy, semi-structured data into actionable insight.
* Supporting experimentation and iteration — shipping fast and learning from what works.
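To make the pipeline responsibilities above concrete, here is a minimal sketch of one such step: flattening noisy, semi-structured records with pandas and applying an enrichment tag. The sample records, field names, and the `classify_sector` helper are hypothetical; in practice the classification step might call an LLM, but a rule-based stand-in is used here so the sketch stays self-contained.

```python
import pandas as pd

# Hypothetical raw records, as they might arrive from an upstream source.
RAW_RECORDS = [
    {"name": " Acme Ltd ", "sector": "Fintech", "meta": {"country": "UK"}},
    {"name": "Globex", "sector": None, "meta": {"country": "DE"}},
]

def classify_sector(value):
    # Stand-in for an LLM-based classifier: normalise known values,
    # fall back to "unknown" for missing or empty input.
    if isinstance(value, str) and value.strip():
        return value.strip().lower()
    return "unknown"

def build_frame(records):
    df = pd.json_normalize(records)        # flatten nested "meta" fields
    df["name"] = df["name"].str.strip()    # basic cleansing
    df["sector"] = df["sector"].map(classify_sector)  # enrichment / tagging
    return df

df = build_frame(RAW_RECORDS)
```

Keeping each stage a small, named function like this makes the pipeline easy to test in isolation and to swap the stand-in classifier for a real model call later.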
What we're looking for & more
* Strong proficiency in Python and pandas (or Polars), and a track record of delivering working data systems.
* Experience with common data formats (JSON, XML, CSV) and transforming unstructured data.
* Familiarity with modern cloud-native tooling (we use AWS — especially Lambda and Step Functions).
* Interest or experience in using LLMs for tasks like data enrichment or transformation.
* A mindset that treats pipelines as products — robust, debuggable, and always improving.
* Curiosity about how AI can go beyond the model — helping automate research and discovery.
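The "pipelines as products" mindset mentioned above can be illustrated with a short sketch: validate each record up front, fail loudly with a clear message, and quarantine bad records rather than silently dropping them. The schema and record shapes are invented for illustration.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

REQUIRED_FIELDS = {"id", "name"}  # hypothetical minimal schema

def validate(record):
    """Fail fast with a clear message instead of letting bad data flow downstream."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(
            f"record {record.get('id', '?')} missing fields: {sorted(missing)}"
        )
    return record

def run(records):
    ok, quarantined = [], []
    for rec in records:
        try:
            ok.append(validate(rec))
        except ValueError as exc:
            # Log and quarantine for later inspection; don't silently drop.
            log.warning("skipping bad record: %s", exc)
            quarantined.append(rec)
    return ok, quarantined

good, bad = run([{"id": 1, "name": "alpha"}, {"id": 2}])
```

Separating the quarantine list from the happy path keeps failures debuggable: nothing disappears, and the log explains exactly why each record was rejected.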
It would be beneficial (not essential) if you have experience with tools like LangChain, Haystack, Pandas AI, or vector databases as well as any prior projects involving agents for data understanding or research automation.
Sound like you? Great! Whilst a CV tells us part of your story, we would love to see a short summary about you, with any relevant links to Loom. Following this, we will reach out to you for a Teams interview if suitable.
Whilst this position can be done from anywhere in the UK, you must already hold the relevant right to work in the UK, as we unfortunately can't provide sponsorship for this role.
Great talent comes in many forms, and we're committed to building a diverse and inclusive team. Whilst a number of our roles do require specific qualifications, experience, and industry knowledge, we also value potential, unique perspectives, and transferable skills. If you're excited about this opportunity but don't meet every requirement, we'd still love to hear from you.
We occasionally collaborate with recruitment agencies to fill niche or specialist roles. However, we do not accept agency terms or pay fees for speculative CVs submitted directly to our hiring managers or outside our Applicant Tracking System.
If you are a recruitment agency interested in partnering with us for candidate supply, please reach out to [emailprotected]