Privacy Engineer III, Machine Learning
Bengaluru, Karnataka, India
Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 2 years of experience designing solutions that maintain or enhance the privacy posture of the organization by analyzing and assessing proposed engineering designs (e.g., product features, infrastructure systems) and influencing stakeholders.
- 2 years of experience applying privacy technologies (e.g., differential privacy, automated access management solutions), and customizing existing solutions and frameworks to meet organizational needs.
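As an illustration of one privacy technology named above, here is a minimal sketch of the Laplace mechanism, the classic building block of differential privacy. The function name and parameters are illustrative only and are not part of this role's requirements:

```python
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return an epsilon-differentially-private estimate of true_value.

    Adds noise drawn from a Laplace distribution with scale
    sensitivity / epsilon. Smaller epsilon means stronger privacy
    (and noisier output).
    """
    scale = sensitivity / epsilon
    # A Laplace(0, scale) sample equals the difference of two i.i.d.
    # exponential samples with mean `scale`.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_value + noise
```

For example, releasing a count of 100 with sensitivity 1 and epsilon 1 yields a value near 100 on average, while any single individual's presence in the data shifts the output distribution by at most a bounded factor.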
Preferred qualifications:
- Experience managing multiple high-priority requests and determining resource allocation (e.g., time, prioritization) to solve problems in a fast-paced, changing organization.
- Experience in end-to-end development of ML models and applications.
- Knowledge of common regulatory frameworks (e.g., GDPR, CCPA).
- Understanding of privacy principles and a passion for keeping people and their data safe.
About the job
Our Security team works to create and maintain the safest operating environment for Google's users and developers. Security Engineers work with network equipment and actively monitor our systems for attacks and intrusions. In this role, you will also work with software engineers to proactively identify and fix security flaws and vulnerabilities.
The Governance team manages risk and compliance objectives, specifically risks related to data, products, and software systems within Google. Our aim is to ensure that systems, products, and data are managed responsibly to keep our users, employees, and partners safe.
Google's innovations in AI, especially Generative AI, have created a new and exciting domain with immense potential. As innovation moves forward, Google and the broader industry need increased privacy, safety, and security standards for building and deploying AI responsibly.
To help meet this need, the Generative AI Assessments team's mission is to build up Google's assessment capabilities for generative AI applications.
The Core team builds the technical foundation behind Google's flagship products. We are owners and advocates for the underlying design elements, developer platforms, product components, and infrastructure at Google. These are the essential building blocks for excellent, safe, and coherent experiences for our users and drive the pace of innovation for every developer. We look across Google's products to build central solutions, break down technical barriers and strengthen existing systems. As the Core team, we have a mandate and a unique opportunity to impact important technical decisions across the company.
Responsibilities
- Conduct privacy impact assessments and drive privacy outcomes for artificial intelligence datasets, models, products, and features.
- Escalate critical and novel artificial intelligence risks to central and product leadership forums, as needed.
- Design and develop technical documentation across teams to drive consistent privacy decisions within the artificial intelligence domain.
- Work with internal tools and systems for understanding and assessing machine learning data and model lineage, properties, and risks.