About Supermodular AI
We help enterprises navigate AI transformation from strategy through execution. We believe AI transformation cannot be designed purely in strategy documents; the real strategy emerges by building, deploying, and operating AI systems inside the organization.
Our team works directly with enterprise IT and software teams to design and implement AI systems in real environments. We enable engineering organizations to work in fundamentally new ways by introducing AI-native development practices and custom AI systems that unblock difficult modernization and harmonization efforts.
Role Overview
As a Forward Deployed Security Engineer (AI Systems), you will work directly with enterprise IT and software teams to ensure AI systems operate securely, reliably, and safely in production environments.
You will be embedded in real enterprise environments, working alongside engineers and technical leaders to understand how systems operate today, identify vulnerabilities, design secure architectures, and harden systems that were not originally built to support AI.
Your work spans the full lifecycle of an engagement: shaping technical approaches early, ensuring security and reliability are built into the system design, and implementing safeguards required for systems to operate in real‑world conditions. Many environments are not clean—systems are incomplete, poorly integrated, or not designed with security in mind. Your role is to bring structure and resilience to these environments, building and hardening systems that operate safely at scale.
At its core, this role is about one thing: making AI systems work in production without breaking trust.
Desired Skills and Qualifications
* An AI-first mindset, with a strong understanding of how AI systems fail, break, and can be exploited in real-world environments.
* Comfort working at the edge of AI capabilities: understanding risks such as prompt injection, data leakage, unsafe tool use, and model misuse, and designing systems that mitigate them.
* Awareness that AI systems are probabilistic and introduce new failure modes, paired with the ability to design guardrails, evaluation loops, and controls that make them reliable and safe in production.
* Skill at identifying and hardening weak points in existing systems, especially where security and architecture were not designed in from the start.
* A strong software engineering foundation, with experience building and operating production systems in distributed or cloud-native environments.
* Experience working across authentication, authorization, data access, and system boundaries to ensure AI systems interact safely with enterprise infrastructure.
* A pragmatic, execution-focused approach: improving security and reliability incrementally without blocking progress or over-engineering solutions.
* The ability to turn messy, high-risk environments into secure, working systems even when requirements are incomplete or constantly evolving.
* Comfort working directly with engineers, architects, and security teams inside enterprise environments, aligning on practical solutions.
* A practical mindset around reliability, observability, and security, focused on what actually matters in production rather than theoretical completeness.