At Dynamo AI, an ML Research Scientist Intern will focus on advancing the state of AI evaluation, adversarial robustness, and agent security. You will contribute to novel research in model safety, hallucination detection, red-teaming, runtime guardrails, and AI risk remediation across DynamoGuard, DynamoEval, and AgentWarden.
Responsibilities
* Conduct research on LLM evaluation methodologies, adversarial attack generation, and model robustness.
* Develop novel techniques for detecting hallucinations, policy violations, prompt injection, and agent misalignment.
* Design experiments to evaluate AI systems under real-world enterprise constraints.
* Contribute to research artifacts, including internal technical reports, benchmarking frameworks, and potentially publications.
* Collaborate with engineering teams to transition research innovations into deployable guardrails and runtime protections.
* Analyze large-scale model behavior data to uncover systematic vulnerabilities and improvement opportunities.
Qualifications
* Currently pursuing a Master's or PhD in Machine Learning, Artificial Intelligence, Computer Science, or related field.
* Strong theoretical foundation in ML, NLP, or deep learning.
* Experience working with large language models or generative AI systems.
* Familiarity with adversarial ML, AI safety, or model evaluation frameworks is a strong plus.
* Demonstrated ability to design rigorous experiments and analyze results critically.
* Passion for advancing secure and production-grade AI systems.