We're building the infrastructure that makes autonomous AI safe for enterprise deployment. Not governance theatre. Not compliance checkboxes. Actual technical systems that can monitor, quantify, and govern AI agents operating with autonomy in production environments. If you've been following the trajectory from static models to agentic systems—and the corresponding explosion in risk surface area—you know why this matters now.
About governr
governr is the AI risk platform for regulated enterprises. We provide complete AI visibility, real-time risk evaluation and quantification, and audit-ready compliance documentation for organizations deploying agentic AI.
We've built the industry's most comprehensive AI risk assessment framework. We're currently in active discussions with tier-1 financial institutions and have secured design partnerships with leading firms navigating the shift from analytical AI to agentic systems.
The market timing is critical: enterprises are deploying agents at scale, regulators are demanding governance frameworks, and existing Third-Party Risk Management (TPRM) platforms have near-zero AI-risk depth. We estimate an 18-24 month competitive window before large incumbents build comparable capabilities.
The Role
As an Agentic Developer at governr, you'll build the core systems that monitor, analyse, and govern autonomous AI agents in production. This isn't traditional software engineering with well-worn patterns. You're designing architectures for problems that didn't exist 18 months ago.
You'll be responsible for:
• Agent Monitoring Infrastructure: Build real-time systems that track agent behaviour across risk factors including agent-to-agent interactions, privilege escalation attempts, emergent capability detection, and behavioural drift from baseline parameters (a drift-check sketch follows this list)
• Risk Assessment Engine: Design algorithms that quantify AI risk in financial terms (£X exposure), map behaviours to regulatory requirements, and detect cascade failures before they propagate through multi-agent systems
• Protocol Integration: Implement monitoring for agent communication protocols (Model Context Protocol, Agent2Agent, Agent Connect Protocol) as they mature, ensuring authenticated and logged inter-agent communications
• Anomaly Detection: Develop ML systems that identify when agents exhibit unexpected strategies, attempt unauthorized actions, or drift from intended behaviour—catching issues before regulators do
• Evidence Generation Systems: Build architectures that automatically capture audit trails, decision provenance, and compliance evidence without impacting agent performance (see the audit-trail sketch below)
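To make the monitoring responsibility concrete, here is a minimal sketch of one drift check: comparing an agent's recent action mix against a recorded baseline using total variation distance. Everything in it (the function names, the threshold, the toy action log) is an illustrative assumption for this posting, not governr's actual API.

```python
# Illustrative sketch only: a minimal behavioural-drift check. Names and the
# 0.15 threshold are assumptions, not a real governr interface.
from collections import Counter

ALERT_THRESHOLD = 0.15  # assumed tolerance; in practice tuned per agent


def action_distribution(actions: list[str]) -> dict[str, float]:
    """Normalise a raw action log into a probability distribution."""
    counts = Counter(actions)
    total = sum(counts.values())
    return {action: n / total for action, n in counts.items()}


def drift_score(baseline: dict[str, float], recent: dict[str, float]) -> float:
    """Total variation distance between two action distributions (0 = identical)."""
    keys = set(baseline) | set(recent)
    return 0.5 * sum(abs(baseline.get(k, 0.0) - recent.get(k, 0.0)) for k in keys)


# Usage: a sudden shift toward write/privilege actions raises the score.
baseline = action_distribution(["read_db", "read_db", "summarise", "read_db"])
recent = action_distribution(["read_db", "write_db", "write_db", "escalate_privilege"])
score = drift_score(baseline, recent)
if score > ALERT_THRESHOLD:
    print(f"drift alert: score={score:.2f}")  # hand off to the alerting pipeline
```

Total variation distance is just one choice here; KL divergence or population-stability metrics slot into the same shape.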
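And for the evidence-generation responsibility, a sketch of a tamper-evident audit trail, assuming hash-chaining is an acceptable design: each record embeds the hash of its predecessor, so any retroactive edit fails verification. The class and field names are hypothetical.

```python
# Illustrative sketch only: a hash-chained audit trail for agent decisions.
# Any edit to a stored record breaks the chain and fails verify().
import hashlib
import json
import time


class AuditTrail:
    def __init__(self) -> None:
        self._records: list[dict] = []

    def append(self, agent_id: str, decision: str, context: dict) -> dict:
        """Record a decision, chaining it to the previous record's hash."""
        prev_hash = self._records[-1]["hash"] if self._records else "genesis"
        body = {
            "agent_id": agent_id,
            "decision": decision,
            "context": context,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        # Hash the record body (which does not yet contain a "hash" key).
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self._records.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every hash and link; False means the trail was altered."""
        prev = "genesis"
        for rec in self._records:
            if rec["prev_hash"] != prev:
                return False
            expected = {k: v for k, v in rec.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(expected, sort_keys=True).encode()
            ).hexdigest()
            if digest != rec["hash"]:
                return False
            prev = rec["hash"]
        return True


trail = AuditTrail()
trail.append("agent-7", "approved_payment", {"amount": 120, "currency": "GBP"})
assert trail.verify()  # editing any stored record now makes this False
```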
Core Requirements (Non-negotiable)
Technical Depth:
• 3-12+ years building production systems at scale
• Expert-level proficiency in Python, Rust, or Go (you write systems that can't fail)
• Deep understanding of distributed systems, real-time data processing, and observability architectures
• Production ML/AI experience: You've deployed models, debugged their failures, and built monitoring around them
• System design mastery: You can architect for reliability, scalability, and auditability simultaneously
Domain Knowledge:
• Understanding of agent architectures: autonomous decision-making, goal-directed behaviour, tool use, memory systems
• Familiarity with AI safety concepts: alignment, interpretability, robustness, adversarial examples
• Experience with monitoring/observability: instrumentation, logging, tracing, alerting in complex systems
Working Style:
• You ship to production regularly and own what you deploy
• You write documentation that others can actually use
• You thrive in ambiguity and define requirements from first principles
• You communicate technical concepts clearly to non-technical stakeholders
Highly Valued (Differentiated Candidates)
• Publications or research in multi-agent systems, reinforcement learning, AI safety, or agent architectures
• Experience at AI labs (Anthropic, OpenAI, DeepMind) or leading AI research groups
• Production experience with agents: LangChain, AutoGPT, CrewAI, or custom agent frameworks
• Regulated industry background: Financial services, healthcare, or other domains with compliance requirements
• Security/threat modelling expertise: Understanding of adversarial AI, prompt injection, or system security
• Real-time systems experience: Trading systems, fraud detection, or other low-latency critical infrastructure
• Open source contributions in relevant domains (AI frameworks, monitoring tools, security infrastructure)
Bonus Points (Nice to Have)
• Understanding of regulatory frameworks (EU AI Act, California AI legislation, GDPR, DORA, FCA, OCC, FINRA guidance)
• Experience with compliance/audit systems or GRC platforms
• Background in formal verification or theorem proving
• DevSecOps or infrastructure-as-code expertise
• Track record of technical leadership or architecture ownership
Who Thrives Here
You read AI safety papers for fun. You've built production systems where failure has real consequences. You're excited by the idea of being the first person to solve a category of problems that didn't exist last year.
You care about getting it right, not just getting it done. When someone asks "can we monitor that?" your first instinct is to figure out how, not to explain why it's hard.
You're comfortable with the early-stage start-up reality: fewer guardrails, more ownership, rapid iteration, and direct impact on company trajectory. You want to define the product, not just implement specifications.
You're intellectually curious about the intersection of AI capabilities and regulatory constraints. You find it genuinely interesting that the EU AI Act requires "human oversight" but doesn't define what that means for autonomous agents.
You're mission-aligned: You believe AI agents will transform how organizations operate, but only if we solve the governance problem. You want to be the person who solves it.
Why Now / Why governr
Market Timing: Autonomous AI is moving from research to production. McKinsey estimates $2.6T-$4.4T in annual value from generative AI. 80% of organizations report risky agent behaviours. Regulators are demanding guardrail frameworks. The infrastructure layer doesn't exist yet. We're building it.
Technical Challenge: You'll solve problems that have no Stack Overflow answers. Agent-to-agent security. Cascade failure prevention across multi-agent systems. Emergent capability detection. Financial quantification of non-deterministic systems. This is legitimately hard, novel work.
Category Leadership: We have 18-24 months before large incumbents build comparable depth. First-mover advantage in category creation is real. You'll help define what "AI governance" means.
Team Quality: Co-founders with deep financial services and AI expertise. Advisors include Dr. Ayman Hindy, Marcel Cassard, and leading figures in AI, high-frequency risk management, and financial regulation. Early team of sharp, mission-driven builders.
Learning Curve: You'll gain expertise in cutting-edge AI architectures, enterprise software, regulatory frameworks, and category creation simultaneously. This is one of those roles you look back on as career-defining.
Impact Leverage: Every system you build enables safe AI adoption for enterprises managing billions in assets. Your code directly influences whether organizations can deploy autonomous agents responsibly.
What We Offer
Compensation: Very competitive salary + equity in a fast-growing company with a clear path to Series A
Team: Reporting to our Co-Founder & Chief Product Officer, working directly with the founders on product direction
Work Environment: London and New York; focus on outcomes over presence
Growth: Ownership of major product areas from day one. Technical leadership opportunities as team scales. Direct exposure to customers, investors, and strategic decisions.
Resources: Modern tech stack and the freedom to choose your tools
Mission: Build the infrastructure that makes autonomous AI safe for society. Not many teams can say their technical work has direct regulatory impact.
How to Apply
If you're excited about building unprecedented monitoring systems for autonomous agents, working at the intersection of AI safety and enterprise software, and defining an emerging category—let's talk.
Send your resume, GitHub/portfolio, and a brief note on what excites you about this problem space to rajen@governr.ai and thush@governr.ai.
Include:
• Links to relevant work (code, papers, projects, or systems you've built)
• What you're currently reading/learning in the AI agent space
• One technical challenge you'd be excited to solve at governr
We review applications on a rolling basis and move quickly for strong candidates. First round is a technical conversation with the Co-Founder, not whiteboard hazing.
We're evaluating you as much as you're evaluating us. Come with questions about our technical approach, roadmap, or market positioning. The best conversations are mutual discovery.
governr is an equal opportunity employer. We believe diverse teams build better products and welcome applications from candidates of all backgrounds.
Ready to build the governance layer for autonomous AI? We're ready for you.