Penetration Tester needed with hands-on experience in testing Generative AI systems, LLMs, or AI-driven bots. In this role, you will lead and support security assessments targeting traditional infrastructure and AI-powered systems, including prompt injection testing, model exploitation, adversarial ML, and AI supply chain vulnerabilities. You will collaborate with product, data science, and AI teams to identify and mitigate security weaknesses in novel AI-driven applications.
Responsibilities
* Conduct penetration tests on web applications, APIs, networks, and infrastructure, including AI-integrated systems.
* Perform red teaming and threat modelling exercises specifically targeting AI models (eg, LLMs, chatbot interfaces, vector databases, and orchestration frameworks like LangChain or AutoGen).
* Evaluate AI systems for prompt injection vulnerabilities, data leakage, model abuse, prompt chaining issues, and adversarial inputs.
* Work with development and AI teams to build secure-by-design systems, offering actionable remediation guidance.
* Conduct testing of model endpoints for issues such as insecure output handling, unauthorized access to functions, or data poisoning.
* Develop custom testing tools or use existing frameworks (eg, LLM Guardrails, OpenAI evals, or adversarial attack libraries like TextAttack or IBM's ART).
* Create detailed reports with findings, impact analysis, and recommendations for technical and non-technical stakeholders.
* Stay updated on the latest threats, vulnerabilities, and mitigations affecting generative AI systems and machine learning platforms.