Content Adversarial Red Team Analyst

Hyderabad, Telangana, India; Bengaluru, Karnataka, India

Google

Google’s mission is to organize the world's information and make it universally accessible and useful.

View company page


Minimum qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 5 years of experience in data analysis and working with datasets.
  • Experience working on counter abuse strategies for online platforms.
  • Experience working with Large Language Models.

Preferred qualifications:

  • Experience working on product policy analysis and identifying policy risks.
  • Experience with modeling, experimentation, and causal inference.
  • Experience with adversarial testing of online consumer products.
  • Experience in Data Analysis and SQL.
  • Exceptional communication and presentation skills to deliver findings of analysis.

About the job

Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team-player with a passion for doing what’s right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed - with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensuring the highest levels of user safety.

The Content Adversarial Red Team (CART) in Trust and Safety Intelligence is a new team that will use unstructured persona-based adversarial testing techniques to identify ‘unknown unknowns’ and new/unexpected loss patterns on Google’s premier generative AI products. The CART team will work alongside product, policy and enforcement teams to proactively detect harm patterns, and help build the safest possible experiences for Google users.

In this role, you will be at the forefront of generative AI testing in Trust and Safety, and support Google’s efforts to launch bold and responsible products in this space. You will be exposed to graphic, controversial, and/or upsetting content.

At Google we work hard to earn our users’ trust every day. Trust & Safety is Google’s team of abuse fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam and account hijacking. A diverse team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google’s products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

Responsibilities

  • Write scripts that will help multiply the impact of the team, including through systematized/automated prompt creation and scraping of content.
  • Monitor and research emerging abuse vectors for generative AI from open web and specialized sources, and work individually and collaboratively to promptly uncover new risk vectors in Google’s main Generative AI products.
  • Apply insights for creative prompting of Google generative AI tools such as Gemini, Search Generative Experience, and Vertex API. Support the creation of persona-based adversarial playbooks to guide the team’s red teaming.
  • Develop repeatable processes that can yield valuable insights regardless of topic and attack vector. Annotate and cluster harm types detected in structured prompting exercises.
Apply now Apply later
  • Share this job via
  • or

* Salary range is an estimate based on our InfoSec / Cybersecurity Salary Index 💰

Tags: APIs Generative AI LLMs Malware Red team SQL

Region: Asia/Pacific
Country: India
Job stats:  9  1  0

More jobs like this

Explore more InfoSec / Cybersecurity career opportunities

Find even more open roles in Ethical Hacking, Pen Testing, Security Engineering, Threat Research, Vulnerability Management, Cryptography, Digital Forensics and Cyber Security in general - ordered by popularity of job title or skills, toolset and products used - below.