AI Security Researcher

Pittsburgh

Applications have closed

SEI - Carnegie Mellon University

Are you a cybersecurity and/or AI researcher who enjoys a challenge? Are you excited about pioneering new research areas that will impact academia, industry, and national security? If so, we want you for our team, where you’ll collaborate to deliver high-quality results in the emerging area of AI security. 

The CERT Division of the Software Engineering Institute (SEI) is seeking applicants for the AI Security Researcher role. Originally created in 1988 in response to one of the first computer viruses, the Morris worm, CERT has remained a leader in cybersecurity research, in improving the robustness of software systems, and in responding to sophisticated cybersecurity threats. Ensuring the robustness and security of AI systems is the next big challenge on the horizon, and we are seeking lifelong learners in the fields of cybersecurity, AI/ML, or related areas who are willing to cross-train to address AI security.

The Threat Analysis Directorate is a group of security experts focused on advancing the state of the art in AI security at a national and global scale. Our tasks include vulnerability discovery and assessment, evaluation of the effectiveness and robustness of AI systems, exploit discovery and reverse engineering, and identifying new areas where security research is needed. We participate in communities of network defenders, software developers and vendors, security researchers, AI practitioners, and policymakers.

You'll get a chance to work with elite AI and cybersecurity professionals, university faculty, and government representatives to build new methodologies and technologies that will influence national AI security strategy for decades to come. You will co-author research proposals, execute studies, and present findings and recommendations to our DoD sponsors, decision makers within government and industry, and at academic conferences. The SEI is a non-profit, federally funded research and development center (FFRDC) at Carnegie Mellon University.   

What you’ll do: 

  • Develop state-of-the-art approaches for analyzing the robustness of AI systems.
  • Apply these approaches to understand vulnerabilities in AI systems and how attackers adapt their tradecraft to exploit those vulnerabilities.
  • Reverse engineer malicious code in support of high-impact customers, design and develop new analysis methods and tools, work to identify and address emerging and complex threats to AI systems, and effectively participate in the broader security community. 
  • Study and influence the AI security and vulnerability disclosure ecosystems.  
  • Evaluate the effectiveness of tools, techniques, and processes developed by industry and the AI security research community.
  • Uncover and shape some of the fundamental assumptions underlying current best practice in AI security.   
  • Develop models, tools, and data sets that can be used to characterize the threats to, and vulnerabilities in, AI systems, and publish those results. You will also use these results to aid in the testing, evaluation, and transition of technologies developed by government-funded research programs.
  • Identify opportunities to apply AI to improve existing cybersecurity research. 

Who you are:  

  • You have a deep interest in AI/ML and cybersecurity with a penchant for intellectual curiosity and a desire to make an impact beyond your organization. 
  • You have practical experience with applying cybersecurity knowledge toward vulnerability research, analysis, disclosure, or mitigation. 
  • You have experience with advising on a range of security topics based on research and expert opinion. 
  • You have familiarity with implementing and applying AI/ML techniques to solving practical problems. 
  • You have familiarity with common AI/ML software packages and tools (e.g., NumPy, PyTorch, TensorFlow, ART).
  • You have knowledge of or familiarity with reverse engineering tools (e.g., NSA Ghidra, IDA Pro).
  • You have experience with Python, C/C++, or low-level programming. 
  • You have experience developing frameworks, methodologies, or assessments to evaluate effectiveness and robustness of technologies. 
  • You have superb communication skills (oral and written), particularly regarding technical communications with non-experts. 
  • You enjoy mentoring and cross-training others and sharing knowledge within the broader community. 
  • You have a BS in machine learning, cybersecurity, statistics, or a related discipline with eight (8) years of experience; an MS in the same fields with five (5) years of experience; or a PhD in the same fields with two (2) years of experience.
  • Applicants with a solid technical background in AI/ML or cybersecurity, but not both, are encouraged to apply provided they have a strong desire to learn rapidly on the job.

You are able to: 

  • Travel to various locations to support the SEI’s overall mission, including within the SEI and CMU community, to sponsor sites, and occasionally to conferences and offsite meetings (5%).
  • Undergo a background check and obtain and maintain a Department of Defense security clearance.

Why work here?  

  • Join a world-class organization that continues to have a significant impact on software.  
  • Work with cutting-edge technologies and dedicated experts to solve tough problems for the government and the nation.  
  • Be surrounded by friendly and knowledgeable staff with broad expertise across AI/ML, cybersecurity, software engineering, risk management, and policy creation. 
  • Get an 8% monthly employer contribution to your retirement, without having to contribute yourself.
  • Get tuition benefits to CMU and other institutions for you and your dependent children.  
  • Enjoy a healthy work/life balance with flexible work arrangements and paid parental and military leave.  
  • Get access to university resources, including mindfulness programs, childcare and back-up care benefits, a monthly transit benefit on WMATA, and free transportation on the Pittsburgh Regional Transit System.
  • Enjoy annual professional development opportunities; attend conferences and training or obtain a certification and get reimbursed for membership in professional societies.  
  • Qualify for relocation assistance and so much more.     

Location

Arlington, VA, Pittsburgh, PA

Job Function

Software/Applications Development/Engineering

Position Type

Staff – Regular

Full time/Part time

Full time

Pay Basis

Salary

More Information: 

  • Please visit “Why Carnegie Mellon” to learn more about becoming part of an institution inspiring innovations that change the world. 

  • Carnegie Mellon University is an Equal Opportunity Employer/Disability/Veteran. 
