Lead the development of data infrastructure for financial services, enabling AI agents to build, maintain, and deliver datasets. This is an opportunity to shape the future of data services in an AI-driven landscape. You will own the core data pipeline that captures and manages our base data.
We’re a small team building AI agents that extract, structure, and validate financial datasets from unstructured documents. Annual reports, sustainability disclosures, regulatory filings — the messy stuff that institutions need turned into clean, reliable data. Our datasets inform decisions that move millions of dollars.
What this role is really about
You’re not applying because the job market is tough. You’re applying because you’re genuinely energised by the AI agent space. Maybe you’ve already got OpenClaw up and running as your personal assistant. Maybe you spend your evenings pushing agentic tooling to its limits and breaking things on purpose. That kind of person.
You’ll be joining a team where everyone ships. There’s no layer between you and the work. There’s no one to hand things off to. You’ve done this before — startup, or a small function inside something bigger, two to thirty people — and you know what that pace feels like.
Your relationship with data
You care about data quality in a ‘this inconsistency is going to bother me until I fix it’ way. A missing data point isn’t trivial to you. An anomaly isn’t something to gloss over. You have integrity about data — you’re persistent, thorough, and you can show us the projects where you cared about getting it right.
You think end-to-end, from raw input to reliable output. Maybe you’ve built production pipelines. Maybe you’ve done serious data analysis work. Maybe you’re earlier in your career but you pick things up fast and you’re hungry to go deep. Either way, you understand what good data infrastructure looks like.
You care about the outcome for the user, not the elegance of the system. Shipping great data as a product is the thing, not the architecture diagram.
How you work
You use Claude Code or Codex daily — as base infrastructure, not a novelty. If you’ve been deep in the agentic tooling ecosystem — building, testing what breaks — even better. Talk to us about Gas Town.
You run with your strengths. When you hit something unfamiliar, you say so — then figure it out. You don’t wait for tickets. Confidence without overcommitment. Know what you don’t know.
Experience
Three to five years is the likely range. Could be less if you’re sharp. Could be more if you’ve kept moving.
Skills that matter
* Python
* Postgres
* SQLite
* Async
* Queues and task managers
* Local servers
* File I/O
* Cron jobs
* Web scraping
* LLMs in production or serious side projects
* Data pipeline architecture
* ETL/ELT patterns
* Data validation and quality monitoring
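To make that list concrete, here is a minimal, hypothetical sketch combining a few of the skills above (Python, async, queues, data validation). Every name in it is illustrative and assumed for this example, not part of our actual stack:

```python
import asyncio

# Illustrative only: an async pipeline that pulls raw records off a queue,
# validates them, and splits clean rows from rejects. Field names are made up.
REQUIRED_FIELDS = {"ticker", "fiscal_year", "revenue"}

def validate(record: dict) -> list[str]:
    """Return a list of data-quality problems; empty means the record is clean."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
    revenue = record.get("revenue")
    if revenue is not None and revenue < 0:
        problems.append("negative revenue")
    return problems

async def worker(queue: asyncio.Queue, clean: list, rejected: list) -> None:
    while True:
        record = await queue.get()
        issues = validate(record)
        (rejected if issues else clean).append((record, issues))
        queue.task_done()

async def run_pipeline(records: list[dict]) -> tuple[list, list]:
    queue: asyncio.Queue = asyncio.Queue()
    clean, rejected = [], []
    workers = [asyncio.create_task(worker(queue, clean, rejected)) for _ in range(3)]
    for r in records:
        queue.put_nowait(r)
    await queue.join()   # block until every record has been processed
    for w in workers:
        w.cancel()       # workers loop forever; cancel once the queue drains
    return clean, rejected

records = [
    {"ticker": "ACME", "fiscal_year": 2023, "revenue": 1_000_000},
    {"ticker": "ACME", "fiscal_year": 2022, "revenue": -5},  # anomaly
    {"ticker": "BETA", "fiscal_year": 2023},                 # missing revenue
]
clean, rejected = asyncio.run(run_pipeline(records))
```

The real work is messier, of course, but the shape is the same: move records through a pipeline, refuse to let a missing field or an anomaly slide through silently.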
Skills we won’t optimise for
* Pandas wizardry
* PyTorch
* R
* ML model training from scratch — although familiarity with fine-tuning local models is a plus