Nicolas Papernot

Welcome! I am a research scientist at Google Brain working on the security and privacy of machine learning in Úlfar Erlingsson's group. I will join the University of Toronto and Vector Institute as an assistant professor and Canada CIFAR AI Chair in Fall 2019. If you'd like to learn more about my research, I recommend reading the blog posts I co-authored on cleverhans.io.

I earned my Ph.D. in Computer Science and Engineering at the Pennsylvania State University, working with Prof. Patrick McDaniel and supported by a Google PhD Fellowship in Security. Previously, I received my M.S. and B.S. in Engineering Sciences from the Ecole Centrale de Lyon, which I attended after completing my classe préparatoire at the Lycée Louis-le-Grand.

Address: 1600 Amphitheatre Pkwy, Mountain View, CA 94043, USA

Email: [email protected]

Twitter »  GitHub »  Google Scholar »

I am chairing a workshop on Security in ML at NIPS 2018. More details can be found at https://secml2018.github.io.

Internship opportunities
Our team is looking for PhD students interested in interning with us at Google Brain to work on security and privacy in ML. Please apply to the Google internship program and send me an email if you are interested.

Full-time opportunities
Our Google Brain Privacy and Security team is looking to recruit software engineers and research scientists interested in working at the intersection of machine learning, privacy and security. Please send me an email if you are interested.

Student and postdoc opportunities
I will join the University of Toronto and Vector Institute as an assistant professor in Fall 2019. If you are interested in working with me as a graduate student or postdoc, please send me an email.

Research scientist opportunities
The Vector Institute is looking for research scientists interested in working on machine learning. Research scientists at Vector can supervise fully-funded graduate students from affiliated universities and collaborate freely with the other members of the Institute. Please see this post and send me an email if you are interested.

Publications
2014
  • Security and Science of Agility. P. McDaniel, T. Jaeger, T. F. La Porta, Nicolas Papernot, R. J. Walls, A. Kott, L. Marvel, A. Swami, P. Mohapatra, S. V. Krishnamurthy, I. Neamtiu. ACM Workshop on Moving Target Defense.

I co-author a blog on the security and privacy of machine learning with Ian Goodfellow at www.cleverhans.io. I also write blog posts unrelated to machine learning on Medium and keep track of them here.

Talks
When a recording of the talk is available, the title links to the corresponding video. The following two embedded videos highlight works representative of my research on privacy (left) and security (right) in machine learning.

2018
  • Title TBD (EPFL Applied Machine Learning Days)
  • A Marauder's Map of Security and Privacy in Machine Learning (AISec '18) keynote (technical report, slides)
  • Security and Privacy in Machine Learning (Google Launchpad Studio)
  • Security and Privacy in Machine Learning (MSR Cambridge AI Summer School)
  • Characterizing the Space of Adversarial Examples in Machine Learning (NVIDIA)
  • Characterizing the Space of Adversarial Examples in Machine Learning (2nd ARO/IARPA Workshop on AML)
  • Characterizing the Space of Adversarial Examples in Machine Learning (MIT-IBM Watson AI Lab)
  • Characterizing the Space of Adversarial Examples in Machine Learning (Microsoft Research Cambridge)
  • Characterizing the Space of Adversarial Examples in Machine Learning (University of Toronto)
  • Characterizing the Space of Adversarial Examples in Machine Learning (EPFL)
  • Characterizing the Space of Adversarial Examples in Machine Learning (University of Southern California)
  • Characterizing the Space of Adversarial Examples in Machine Learning (University of Michigan)
  • Characterizing the Space of Adversarial Examples in Machine Learning (Max Planck Institute for Software Systems)
  • Characterizing the Space of Adversarial Examples in Machine Learning (Columbia University)
  • Characterizing the Space of Adversarial Examples in Machine Learning (University of Virginia)
  • Characterizing the Space of Adversarial Examples in Machine Learning (Intel Labs)
  • Characterizing the Space of Adversarial Examples in Machine Learning (McGill University)
  • Characterizing the Space of Adversarial Examples in Machine Learning (University of Florida)
  • Security and Privacy in Machine Learning (Age of AI Conference)
  • Security and Privacy in Machine Learning (Bar Ilan University)
  • Security and Privacy in Machine Learning (IVADO)
  • Security and Privacy in Machine Learning (Ecole Polytechnique Montreal)
  • Security and Privacy in Machine Learning (Element AI)
  • Security and Privacy in Machine Learning (INRIA Data Institute) tutorial
2017
  • Security and Privacy in Machine Learning (IEEE WIFS 2017) tutorial
  • Lecture on Security and Privacy in Machine Learning (Prof. Trent Jaeger's computer security class, Penn State) lecture
  • Adversarial Machine Learning with CleverHans (ODSC West, joint tutorial with Nicholas Carlini) tutorial
  • Security and Privacy in Machine Learning (Georgian Partners annual summit)
  • Private Machine Learning with PATE (With the Best online conference)
  • Gradient Masking in Machine Learning (Adversarial Machine Learning Workshop, Stanford University)
  • Security and Privacy in Machine Learning (Ecole Centrale de Lyon)
  • Security and Privacy in Machine Learning (Oxford University)
  • Adversarial Machine Learning with CleverHans (ICML workshop on Reproducibility in ML) tutorial
  • Adversarial Examples in Machine Learning (AI with the Best, jointly with Patrick McDaniel)
  • Security and Privacy in Machine Learning (Deep Learning Summit Singapore)
  • Security and Privacy in Machine Learning (Microsoft Research Cambridge)
  • Security and Privacy in Machine Learning (University of Cambridge)
  • Adversarial Examples in Machine Learning (Stanford AI Salon, joint invitation with Ian Goodfellow) panel
  • Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data (Stanford)
  • Adversarial Machine Learning (Data Mining for Cyber Security meetup)
  • Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data (Symantec)
  • Adversarial Examples in Machine Learning (Usenix Enigma 2017)
  • Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data (LeapYear)
  • Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data (Immuta)
  • Machine Learning and Security (NSF 2017 Secure and Trustworthy Cyberspace PIs Meeting) panel
2016
  • Security and Privacy in Machine Learning (Ecole Centrale de Lyon)
  • Adversarial Examples in Machine Learning (LinkedIn)
  • Adversarial Examples in Machine Learning (Stanford)
  • Adversarial Examples in Machine Learning (Berkeley)
  • Adversarial Examples in Machine Learning (joint talk with Ian Goodfellow at AutoSens, Brussels)
  • What role will AI play in the future of autonomous vehicles and ADAS? (AutoSens 2016) panel
  • Adversarial Examples in Machine Learning (Google)
Awards
2018
  • Recognized among the top 30% of reviewers for NIPS (Neural Information Processing Systems)
  • Wormley Family Graduate Fellowship
  • Outstanding Research Assistant Award (The Pennsylvania State University)
  • Student Travel Award (6th International Conference on Learning Representations)
2017
  • Student Travel Award (34th International Conference on Machine Learning)
  • Student Travel Award (5th International Conference on Learning Representations)
  • Best Paper Award (5th International Conference on Learning Representations)
2015
  • Microsoft CyberSpace 2025 Essay Contest - 2nd place
2010
  • Scholarship for Exceptional Academic Achievements (McGill) [declined]
Program committee member

Conferences: AAAI (2019), ACSAC (2018), AsiaCCS (2018), CCS (2019, 2018), GameSec (2018), NDSS (2018), IEEE S&P "Oakland" (2019), PETS (2019), USENIX Security (2019)

Workshops: Deep Learning and Security at IEEE Security & Privacy (2018), Privacy and Security at CVPR (2018), Dependable and Secure Machine Learning co-located with DSN (2018)

Organizing committee
  • NIPS workshop on Security in ML (2018) chair
  • NIPS competition on adversarial ML (2018)
  • NIPS workshop on Secure ML (2017)
  • Self-Organizing Conference on Machine Learning "SOCML" (2017)
  • With the Best online conference on Cybersecurity (2017)
Reviewer

Conferences: ACM WiSec (2016), DIMVA (2016), ICML (2019, 2018, 2017), ICLR (2019), IEEE Security & Privacy "Oakland" (2018, 2017), NIPS (2018, 2017), USENIX Security (2018)

Journals: IEEE Transactions on Dependable and Secure Computing (2017), IEEE Transactions on Information Forensics and Security (2017), IEEE Pervasive special issue on "Securing the IoT" (2017), Journal of Computer Security (2018)

Funding: Agence Nationale de la Recherche (2017), AI Xprize (2017-), Google Faculty Research Awards (2018, 2017)

Other: IEEE Security & Privacy Magazine (2017)

Invited participant
  • "When Humans Attack" workshop at the Data & Society Research Institute (2018)
  • ARO/IARPA Workshop on Adversarial Machine Learning, University of Maryland (2018)
  • ARO Workshop on Adversarial Machine Learning, Stanford (2017)
  • DARPA Workshop on Safe Machine Learning, Simons Institute (2017)
Defense committee member
  • Ryan Sheatsley (M.Sc., Pennsylvania State University)