Engineering Analyst, Content Adversarial Red Team

London
Engineering
Posted: 20h ago
Offer description

Engineering Analyst, Content Adversarial Red Team
Google · London, UK

Advanced: experience owning outcomes and decision making, solving ambiguous problems, and influencing stakeholders; deep expertise in domain.

Minimum qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 7 years of experience in Trust and Safety, risk mitigation, cybersecurity, or related fields.
  • 7 years of experience with one or more of the following languages: SQL, R, Python, or C++.
  • 6 years of experience in adversarial testing, red teaming, jailbreaking for trust and safety, or a related field, with a focus on AI safety.
  • Experience with the Google infrastructure/tech stack and tooling, APIs and web services, Colab deployment, SQL and data handling, and MLOps or other AI infrastructure.

Preferred qualifications:

  • Master's degree or PhD in a relevant quantitative or engineering field.
  • Experience in an individual contributor role within a technology company, focused on product safety or risk management.
  • Experience working closely with both technical and non-technical teams on dynamic solutions or automations to improve user safety.
  • Understanding of AI systems/architecture, including specific vulnerabilities, machine learning, and AI responsibility principles.
  • Ability to articulate technical concepts effectively to both technical and non-technical stakeholders.
  • Excellent communication and presentation skills (written and verbal) and the ability to influence cross-functionally at various levels.

About the job

Fast-paced, dynamic, and proactive, YouTube's Trust & Safety team is dedicated to making YouTube a safe place for users, viewers, and content creators around the world to create and express themselves. Whether understanding and solving their online content concerns, navigating within global legal frameworks, or writing and enforcing worldwide policy, the Trust & Safety team is on the frontlines of enhancing the YouTube experience, building internet safety, and protecting free speech in our ever-evolving digital world.

As a pioneering expert in AI red teaming with technical proficiency, you will shape our sustainable, future-proofed approaches to adversarial testing of Google's generative AI products. In this role, you will design and direct red teaming operations, creating innovative methodologies to uncover novel content abuse risks. You will support the team in the design, development, and delivery of technical solutions to testing and process limitations. You will act as a key advisor to executive leadership, leveraging your influence across Product, Engineering, and Policy teams to drive strategic initiatives. You will be a mentor, fostering a culture of continuous learning and sharing your deep expertise in adversarial techniques. You will represent Google's AI safety efforts externally, collaborating with industry partners to develop best practices for responsible AI and solidifying our position as a thought leader in the field.

At Google, we work hard to earn our users' trust every day. Trust & Safety is Google's team of abuse-fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam, and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google's products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

Responsibilities

  • Influence across Product, Engineering, Research, and Policy to drive the implementation of strategic safety initiatives.
  • Be a key advisor to executive leadership on complex content safety issues, providing actionable insights and recommendations.
  • Mentor and guide junior and executive analysts, fostering excellence and continuous learning within the team.
  • Act as a subject matter expert, sharing deep knowledge of adversarial and red teaming techniques, and risk mitigation.
  • Represent Google's AI safety efforts in external forums and conferences.
  • Contribute to the development of industry-wide best practices for responsible AI development.
  • Be exposed to graphic, controversial, or upsetting content.
  • Bridge technical constraints and red teaming requirements by leading the design, development, and integration of novel platforms, tooling, and engineering solutions to support and scale adversarial testing.

Information collected and processed as part of your Google Careers profile, and any job applications you choose to submit, is subject to Google's Applicant and Candidate Privacy Policy (./privacy-policy).

Google is proud to be an equal opportunity and affirmative action employer. We are committed to building a workforce that is representative of the users we serve, creating a culture of belonging, and providing an equal employment opportunity regardless of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), expecting or parents-to-be, criminal histories consistent with legal requirements, or any other basis protected by law. See also Google's EEO Policy (https://www.google.com/about/careers/applications/eeo/), Know your rights: workplace discrimination is illegal (https://careers.google.com/jobs/dist/legal/EEOC_KnowYourRights_10_20.pdf), Belonging at Google (https://about.google/belonging/), and How we hire (https://careers.google.com/how-we-hire/).

If you have a need that requires accommodation, please let us know by completing our Accommodations for Applicants form (https://goo.gl/forms/aBt6Pu71i1kzpLHe2).

Google is a global company and, in order to facilitate efficient collaboration and communication globally, English proficiency is a requirement for all roles unless stated otherwise in the job posting.

To all recruitment agencies: Google does not accept agency resumes. Please do not forward resumes to our jobs alias, Google employees, or any other organization location. Google is not responsible for any fees related to unsolicited resumes.



© 2025 Jobijoba - All Rights Reserved
