Content Adversarial Red Team Analyst
Hyderabad, Telangana, India; Bengaluru, Karnataka, India
Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 5 years of experience in data analysis and working with datasets.
- Experience working on counter abuse strategies for online platforms.
- Experience working with Large Language Models.
Preferred qualifications:
- Experience working on product policy analysis and identifying policy risks.
- Experience with modeling, experimentation, and causal inference.
- Experience with adversarial testing of online consumer products.
- Experience in data analysis and SQL.
- Excellent communication and presentation skills to deliver analysis findings.
About the job
Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed, with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.
The Content Adversarial Red Team (CART) in Trust and Safety Intelligence is a new team that will use unstructured, persona-based adversarial testing techniques to identify 'unknown unknowns' and new or unexpected loss patterns in Google's premier generative AI products. CART will work alongside product, policy, and enforcement teams to proactively detect harm patterns and help build the safest possible experiences for Google users.
In this role, you will be at the forefront of generative AI testing in Trust and Safety, and support Google’s efforts to launch bold and responsible products in this space. You will be exposed to graphic, controversial, and/or upsetting content.
At Google we work hard to earn our users’ trust every day. Trust & Safety is Google’s team of abuse fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam and account hijacking. A diverse team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google’s products, protecting our users, advertisers, and publishers across the globe in over 40 languages.
Responsibilities
- Write scripts that will help multiply the impact of the team, including through systematized/automated prompt creation and scraping of content.
- Monitor and research emerging abuse vectors for generative AI from the open web and specialized sources, and work individually and collaboratively to promptly uncover new risk vectors in Google's main generative AI products.
- Apply insights for creative prompting of Google generative AI tools such as Gemini, Search Generative Experience, and Vertex API. Support the creation of persona-based adversarial playbooks to guide the team’s red teaming.
- Develop repeatable processes that can yield valuable insights regardless of topic and attack vector. Annotate and cluster harm types detected in structured prompting exercises.
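The systematized prompt creation and harm-type clustering described above could be sketched roughly as follows. This is a minimal illustration, not anything from the role itself: the persona names, scenario strings, and function names are all hypothetical placeholders.

```python
from itertools import product

# Hypothetical personas and probe scenarios -- illustrative placeholders only.
PERSONAS = ["disgruntled insider", "curious teenager", "scam operator"]
SCENARIOS = [
    "asks for step-by-step harmful instructions",
    "probes for private personal data",
    "attempts policy evasion via role-play",
]

def build_prompts(personas, scenarios):
    """Cross every persona with every scenario to seed red-team prompts."""
    return [f"As a {p}, the tester {s}." for p, s in product(personas, scenarios)]

def cluster_by_harm(annotations):
    """Group annotated findings by harm label.

    `annotations` is a list of (prompt, harm_label) pairs produced by human
    review; grouping by label is a simple stand-in for real clustering.
    """
    clusters = {}
    for prompt, label in annotations:
        clusters.setdefault(label, []).append(prompt)
    return clusters
```

In practice the persona/scenario matrix would come from the team's adversarial playbooks, and the clustering step would likely use richer features than a single label, but the repeatable shape (generate, test, annotate, cluster) is the same regardless of topic or attack vector.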