 
        
StackOne is the AI Integration Gateway powering the next generation of SaaS and AI Agents. Backed by GV, Workday Ventures, and angels/advisors from DeepMind, OpenAI, GitHub & Mistral, we’ve raised $24M to enable developers to orchestrate thousands of secure, scalable, and accurate actions in AI Agents. With an AI-native integration toolkit delivering real‑time execution, managed authentication, granular permissions, and full observability—all built with safety at its core—we’re doubling down on AI R&D, creating a lab to push the boundaries of tool calling for agents. We train specialized LLMs designed to outperform general‑purpose models in precision, reliability, and safety.
About the Role
You will help build a world where users of any agent can integrate with the tools of their choice in one click, thanks to StackOne. We are looking for an AI Research Engineer with deep expertise in large‑scale model fine‑tuning, dataset curation, and training infrastructure. Unlike our AI Engineer role, this position focuses on pushing model performance through fine‑tuning, synthetic data pipelines, and large‑scale experimentation. You will design, own, and run experiments on cutting‑edge architectures, manage distributed training clusters, and curate and generate high‑quality datasets. This role sits closer to research/ML infrastructure than product engineering, with a strong mandate for applied, production‑ready results. You will work with the wider AI team and report directly to the CTO.
Responsibilities
 * Own the full lifecycle of model fine‑tuning projects (objectives, dataset preparation, training, evaluation, and deployment handoff).
 * Design and manage synthetic data generation workflows to augment real‑world datasets.
 * Build and maintain large‑scale training infrastructure (multi‑GPU/TPU clusters, orchestration, optimization).
 * Develop tools for dataset curation, labeling, filtering, and augmentation.
 * Conduct benchmarking and evaluations to measure fine‑tuning impact.
 * Collaborate with engineering to integrate fine‑tuned models into production stacks.
 * Stay ahead of research in parameter‑efficient fine‑tuning, synthetic data, and LLM training.
What We're Looking For
 * Background in deep learning with emphasis on LLMs.
 * Experience running large‑scale distributed training jobs.
 * Understanding of synthetic data techniques and dataset pipeline design.
 * Proficiency in evaluating LLMs with quantitative metrics and human evaluations.
 * Desire to work in a fast‑paced startup, taking ownership of projects end‑to‑end with a bias toward shipping.
 * (Preferred) Contributions to open‑source ML libraries or published research in applied ML/LLM fine‑tuning.
Benefits
 * 25 days holiday + 1 additional day holiday per year of tenure.
 * Participation in the company’s employee share options plan.
 * Private health insurance (including dental & optical).
 * Health, fitness, and gift card discounts.
 * £1,000 for your home office setup + £500/year top‑up.
 * Paid lunch in the office.
 * Annual team offsite to sunny spots (last ones were in Spain and Portugal).
 * Join one of Europe’s fastest‑growing startups.
 * Work with a veteran team of alumni from Google, Microsoft, Oracle, Coinbase, JP Morgan, and more.
 * Cycle‑to‑Work and Electric Cars scheme.
 * Hybrid work set‑up – typically 2 days in the office.
Ready to help us change the game for SaaS integrations? Get in touch and let's chat!
We believe diversity drives innovation. We encourage individuals from all backgrounds to apply. As an equal‑opportunity employer, we celebrate diversity and are committed to creating an inclusive environment for all employees.