Nicolas Papernot

Welcome! I earned my Ph.D. in Computer Science and Engineering at the Pennsylvania State University, working with Prof. Patrick McDaniel on the security and privacy of machine learning. If you'd like to learn more about my research, I recommend reading the blog posts I co-authored on cleverhans.io. I am also a Google PhD Fellow in Security. Previously, I received my M.S. and B.S. in Engineering Sciences from the Ecole Centrale de Lyon in France, which I attended after completing my classe préparatoire at the Lycée Louis-le-Grand in Paris. This website covers some of my background and current work. Feel free to contact me directly for more information.

Address: W336 Westgate Building, University Park, PA 16802, USA

Email: [email protected]

Twitter » GitHub » Google Scholar »

In press
  • The challenges of making machine learning robust against adversarial inputs. Ian Goodfellow, Patrick McDaniel, Nicolas Papernot. Communications of the ACM (July 2018) column
2014
  • Security and Science of Agility. P. McDaniel, T. Jaeger, T. F. La Porta, Nicolas Papernot, R. J. Walls, A. Kott, L. Marvel, A. Swami, P. Mohapatra, S. V. Krishnamurthy, I. Neamtiu. ACM Workshop on Moving Target Defense workshop

I co-author a blog on the security and privacy of machine learning with Ian Goodfellow at www.cleverhans.io. I also write blog posts unrelated to machine learning on Medium and keep track of them here.

When a recording of the talk is available, the title links to the corresponding video. The following two embedded videos highlight works representative of my research on privacy (left) and security (right) in machine learning.

2018
  • Characterizing the Space of Adversarial Examples in Machine Learning (NVIDIA)
  • Characterizing the Space of Adversarial Examples in Machine Learning (2nd ARO/IARPA Workshop on AML)
  • Characterizing the Space of Adversarial Examples in Machine Learning (MIT-IBM Watson AI Lab)
  • Characterizing the Space of Adversarial Examples in Machine Learning (Microsoft Research Cambridge)
  • Characterizing the Space of Adversarial Examples in Machine Learning (University of Toronto)
  • Characterizing the Space of Adversarial Examples in Machine Learning (EPFL)
  • Characterizing the Space of Adversarial Examples in Machine Learning (University of Southern California)
  • Characterizing the Space of Adversarial Examples in Machine Learning (University of Michigan)
  • Characterizing the Space of Adversarial Examples in Machine Learning (Max Planck Institute for Software Systems)
  • Characterizing the Space of Adversarial Examples in Machine Learning (Columbia University)
  • Characterizing the Space of Adversarial Examples in Machine Learning (University of Virginia)
  • Characterizing the Space of Adversarial Examples in Machine Learning (Intel Labs)
  • Characterizing the Space of Adversarial Examples in Machine Learning (McGill University)
  • Characterizing the Space of Adversarial Examples in Machine Learning (University of Florida)
  • Security and Privacy in Machine Learning (Age of AI Conference)
  • Security and Privacy in Machine Learning (Bar Ilan University)
  • Security and Privacy in Machine Learning (IVADO)
  • Security and Privacy in Machine Learning (Ecole Polytechnique Montreal)
  • Security and Privacy in Machine Learning (Element AI)
  • Security and Privacy in Machine Learning (INRIA Data Institute) tutorial
2017
  • Security and Privacy in Machine Learning (IEEE WIFS 2017) tutorial
  • Lecture on Security and Privacy in Machine Learning (Prof. Trent Jaeger's computer security class, Penn State) lecture
  • Adversarial Machine Learning with CleverHans (ODSC West, joint tutorial with Nicholas Carlini) tutorial
  • Security and Privacy in Machine Learning (Georgian Partners annual summit)
  • Private Machine Learning with PATE (With the Best online conference)
  • Gradient Masking in Machine Learning (Adversarial Machine Learning Workshop, Stanford University)
  • Security and Privacy in Machine Learning (Ecole Centrale de Lyon)
  • Security and Privacy in Machine Learning (Oxford University)
  • Adversarial Machine Learning with CleverHans (ICML workshop on Reproducibility in ML) tutorial
  • Adversarial Examples in Machine Learning (AI with the Best, jointly with Patrick McDaniel)
  • Security and Privacy in Machine Learning (Deep Learning Summit Singapore)
  • Security and Privacy in Machine Learning (Microsoft Research Cambridge)
  • Security and Privacy in Machine Learning (University of Cambridge)
  • Adversarial Examples in Machine Learning (Stanford AI Salon, joint invitation with Ian Goodfellow) panel
  • Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data (Stanford)
  • Adversarial Machine Learning (Data Mining for Cyber Security meetup)
  • Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data (Symantec)
  • Adversarial Examples in Machine Learning (Usenix Enigma 2017)
  • Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data (LeapYear)
  • Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data (Immuta)
  • Machine Learning and Security (NSF 2017 Secure and Trustworthy Cyberspace PIs Meeting) panel
2016
  • Security and Privacy in Machine Learning (Ecole Centrale de Lyon)
  • Adversarial Examples in Machine Learning (LinkedIn)
  • Adversarial Examples in Machine Learning (Stanford)
  • Adversarial Examples in Machine Learning (Berkeley)
  • Adversarial Examples in Machine Learning (joint talk with Ian Goodfellow at AutoSens, Brussels)
  • What role will AI play in the future of autonomous vehicles and ADAS? (AutoSens 2016) panel
  • Adversarial Examples in Machine Learning (Google)
2018
  • Outstanding Research Assistant Award (The Pennsylvania State University)
  • Student Travel Award (6th International Conference on Learning Representations)
2017
  • Student Travel Award (34th International Conference on Machine Learning)
  • Student Travel Award (5th International Conference on Learning Representations)
  • Best Paper Award (5th International Conference on Learning Representations)
2015
  • Microsoft CyberSpace 2025 Essay Contest - 2nd place
2010
  • Scholarship for Exceptional Academic Achievements (McGill) [declined]
Program committee member

Conferences: ACSAC (2018), AsiaCCS (2018), CCS (2018), ICML (2018), GameSec (2018), NDSS (2018), NIPS (2018), PETS (2018)

Workshops: Deep Learning and Security at IEEE Security & Privacy (2018), Privacy and Security at CVPR (2018), Dependable and Secure Machine Learning co-located with DSN (2018)

Organizing committee
  • NIPS competition on adversarial ML (2018)
  • NIPS workshop on Secure ML (2017)
  • Self-Organizing Conference on Machine Learning "SOCML" (2017)
  • With the Best online conference on Cybersecurity (2017)
Reviewer

Conferences: ACM WiSec (2016), DIMVA (2016), ICML (2017), IEEE Security & Privacy "Oakland" (2017, 2018), NIPS (2017), USENIX Security (2018)

Journals: IEEE Transactions on Dependable and Secure Computing (2017), IEEE Transactions on Information Forensics and Security (2017), IEEE Pervasive special issue on "Securing the IoT" (2017), Journal of Computer Security (2018)

Funding: Agence Nationale de la Recherche (2017), AI Xprize (2017-), Google Faculty Research Awards (2017)

Other: IEEE Security & Privacy Magazine

Invited participant
  • "When Humans Attack" workshop at the Data & Society Research Institute (2018)
  • ARO/IARPA Workshop on Adversarial Machine Learning, University of Maryland (2018)
  • ARO Workshop on Adversarial Machine Learning, Stanford (2017)
  • DARPA Workshop on Safe Machine Learning, Simons Institute (2017)