* Architect and implement backend services that support secure, air-gapped AI
deployments, with a focus on NLP-based tooling
* Develop pipelines for transcription ingestion and real-time analytical insight
generation
* Support graph and RAG-based inference layers using data from structured and
unstructured sources
* Build and expose APIs for frontend consumption, enabling natural-language
querying, dynamic visualisation, and reporting
* Ensure system portability via containerisation
* Collaborate closely with data scientists to integrate self-hosted LLMs and
analysis models into the backend infrastructure
* Work with platform engineers to tune deployments for hardware-limited
environments
What You Bring
* Strong experience building secure, scalable backend systems in Python and/or
Go
* Deep understanding of containerised services, particularly in Kubernetes
environments
* Practical knowledge of orchestration tools (e.g., Argo), message buses (e.g., Kafka),
and databases (especially Postgres), or equivalent technologies
* Experience working in air-gapped or secure environments, or, at minimum, a
clear grasp of the constraints and workarounds involved
* Comfort designing APIs for real-time and batch AI/ML pipelines
* Experience supporting or building graph-based systems (e.g., for RAG or
knowledge graph traversal) is a plus
* Experience working with LLMs, vector stores, or tra...