Senior Full-Stack Engineer — AI Document Generation
Adema AI | SH Proptech Limited | Remote (UK) | Contract
Next.js · Python · LLM / RAG · TypeScript
About Adema AI
Adema AI is a full-stack, AI-native platform operating in a regulated, document-heavy industry. We build production tooling that automates professional knowledge work — replacing manual, expert-driven workflows with structured AI pipelines that draw on large, jurisdiction-specific policy corpora. Our stack is modern, our domain is complex, and our bias is toward shipping.
The Role
We are hiring a senior engineer to build and ship a core AI generation product on our platform. The feature takes structured inputs from users and produces long-form, policy-grounded professional documents suitable for regulated submission — assembled dynamically from templates, retrieved policy context, and LLM-generated narrative.
This is a build-and-ship role. Product specification, domain framework, and subject-matter expertise are already in-house. What we need is someone who can take a well-defined spec and turn it into a working, deployed, production-quality product — fast.
What You Will Build
* A dynamic input layer that captures structured, context-dependent user inputs with conditional logic driven by use-case type
* An LLM generation pipeline (Claude / GPT-4) producing structured, professional long-form output grounded in a retrieved knowledge base — prompt engineering, structured output, context management
* A RAG architecture over a substantial domain corpus: ingestion, chunking, embedding, vector retrieval, and context assembly using pgvector or equivalent
* Multi-template document generation with conditional logic across several distinct output variants, each with its own structure, tone, and policy grounding
* Production-grade document output in PDF and Word formats, rendered from structured data and ready for professional use
* Full integration into the existing Adema AI platform — Next.js frontend, Supabase backend, Auth0, Vercel deployment
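To give a flavour of the ingestion side of the RAG pipeline described above, here is a minimal, illustrative sketch of overlapping-window chunking before embedding. This is a character-based toy, not Adema's implementation — a production pipeline would typically chunk by tokens and respect document structure (headings, clauses). The function name and parameters are hypothetical.

```python
def chunk_text(text: str, chunk_size: int = 800, overlap: int = 100) -> list[str]:
    """Split a document into overlapping chunks for embedding.

    Sizes are in characters for simplicity; a real pipeline would count
    tokens and split on structural boundaries (sections, paragraphs).
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping `overlap` chars of context
    return chunks
```

Each chunk would then be embedded and stored in pgvector (or an equivalent vector store) alongside its source metadata for retrieval at generation time.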
Requirements
Must have
* Strong full-stack experience — Next.js / React / TypeScript frontend, Python (FastAPI) or Node backend
* Hands-on experience building LLM-powered applications in production: prompt engineering, RAG, structured output, context window management, evals
* Experience with document generation pipelines — producing PDF and Word output from structured data, not just HTML previews
* Comfort working with large, complex domain corpora — ingesting, structuring, and retrieving from detailed reference material
* A track record of shipping production features, not prototypes or demos
* Fully independent delivery — you can own a spec end-to-end, make the right technical calls without hand-holding, unblock yourself, and ship. No one will be sitting next to you telling you what to do
* Genuinely world-class. We are only interested in hearing from the best of the best — engineers who are exceptional at what they do and can prove it. If that is not you, this is not the role
* A genuine passion for speed of delivery. Only apply if you are energised by shipping fast, iterating publicly, and getting product in front of users quickly. This is not a role for slow, heavily-scoped, waterfall delivery
Ideal
* Experience with Supabase, Vercel, and modern deployment stacks
* Familiarity with regulated or policy-heavy document generation (legal, planning, compliance, financial)
* Experience building multi-template document generators with conditional logic
* Working knowledge of RAG architectures using vector databases (pgvector, Pinecone, or similar)
* Hands-on experience with the Claude API / Anthropic SDK
Tech Stack
* Frontend: Next.js, React, TypeScript, Tailwind CSS
* Backend: Python (FastAPI) and/or Node.js
* Database: Supabase (PostgreSQL), pgvector
* AI: Claude API (Anthropic), OpenAI API, RAG pipeline
* Deployment: Vercel, GitHub Actions CI/CD
* Auth: Auth0
Engagement Structure
The most important thing to us is proof of delivery. Everything else — structure, terms, arrangement — is flexible for the right person.
* Structure: Employment preferred, but we are flexible on the arrangement for the right candidate
* Compensation: £70k–£100k equivalent, subject to performance
* Equity: available for the right candidate — we are open to meaningful equity participation for someone who becomes core to the business
* Location: Fully remote, UK-based preferred
* Start: Immediate
Initial Proof of Performance
Before any arrangement is confirmed, we will require an initial proof of performance and capability — a short, paid, scoped piece of work that demonstrates your ability to deliver against our spec at the pace and quality we expect. This is the single most important part of the process. It runs both ways: it shows us you can ship, and it shows you whether our pace and standards suit how you work.
How We Work
* Small, senior team — no layers of management between you and the product
* Direct access to the CEO and domain experts
* Sprint-based delivery with clear milestones
* We value shipping over talking — show us what you can build
How to Apply
Send the following to :
* Your CV or a link to your portfolio / GitHub
* A brief note (max 500 words) on how you would approach building an AI-powered long-form document generator grounded in a domain-specific knowledge corpus
* One example of a production LLM-powered feature you have built and shipped
No agencies. No cover letter templates. We want to see how you think — and how fast you move.
Adema AI is a trading name of SH Proptech Limited (Company Number 15518685). Registered office: 3rd Floor 45 Albemarle Street, Mayfair, London, W1S 4JL.