Welcome! I am a research scientist at Google Brain, working on the security and privacy of machine learning in Úlfar Erlingsson's group. I will join the University of Toronto and the Vector Institute as an assistant professor and Canada CIFAR AI Chair in Fall 2019. If you'd like to learn more about my research, I recommend reading the blog posts I co-authored on cleverhans.io, for example on differentially private ML or adversarial examples.
I earned my Ph.D. in Computer Science and Engineering at the Pennsylvania State University, working with Prof. Patrick McDaniel and supported by a Google PhD Fellowship in Security. Previously, I received my M.S. and B.S. in Engineering Sciences from the École Centrale de Lyon, which I attended after completing my classe préparatoire at the Lycée Louis-le-Grand.
Address: 1600 Amphitheatre Pkwy, Mountain View, CA 94043, USA
Email: [email protected]
CV » Blog » Twitter » Google Scholar »
A complete list of publications is available in my CV.
- MixMatch: A Holistic Approach to Semi-Supervised Learning. David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, Colin Raffel. preprint
- Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness. Jorn-Henrik Jacobsen, Jens Behrmann, Nicholas Carlini, Florian Tramer, Nicolas Papernot. Presented at the ICLR 2019 workshop on Safe ML, New Orleans, Louisiana. workshop
- Analyzing and Improving Representations with the Soft Nearest Neighbor Loss. Nicholas Frosst, Nicolas Papernot, Geoffrey Hinton. Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA. conference
- A Marauder's Map of Security and Privacy in Machine Learning. Nicolas Papernot. Keynote at the 11th ACM Workshop on Artificial Intelligence and Security colocated with the 25th ACM Conference on Computer and Communications Security, Toronto, Canada. invited
- Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning. Nicolas Papernot and Patrick McDaniel. preprint
- Scalable Private Learning with PATE. Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Ulfar Erlingsson. Proceedings of the 6th International Conference on Learning Representations, Vancouver, Canada. conference
- Ensemble Adversarial Training: Attacks and Defenses. Florian Tramer, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel. Proceedings of the 6th International Conference on Learning Representations, Vancouver, Canada. conference
- Towards the Science of Security and Privacy in Machine Learning. Nicolas Papernot, Patrick McDaniel, Arunesh Sinha, and Michael Wellman. Proceedings of the 3rd IEEE European Symposium on Security and Privacy, London, UK. conference
- Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data. Nicolas Papernot, Martin Abadi, Ulfar Erlingsson, Ian Goodfellow, and Kunal Talwar. Proceedings of the 5th International Conference on Learning Representations, Toulon, France. best paper
- Practical Black-Box Attacks against Machine Learning. Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. Proceedings of the 2017 ACM Asia Conference on Computer and Communications Security, Abu Dhabi, UAE. conference
- Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples. Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. technical report
- Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks. Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Proceedings of the 37th IEEE Symposium on Security and Privacy, San Jose, CA. conference
- The Limitations of Deep Learning in Adversarial Settings. Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. Proceedings of the 1st IEEE European Symposium on Security and Privacy, Saarbrücken, Germany. conference
Here is a list of talks I will be giving. Feel free to reach out if you will be attending one of these events and would like to meet.
- Title TBD (10/2019, Columbia University)
- Title TBD (10/2019, Cybersecurity AI Prague)
- Title TBD (10/2019, France is AI 2019)
- Title TBD (10/2019, Princeton University)
- Title TBD (8/2019, Waterloo ML + Security + Verification Workshop)
- Machine Learning at Scale with Differential Privacy in TensorFlow (8/2019, USENIX PEPR 2019)
- PhD Career Paths (Academic v. Non-academic) (7/2019, Google PhD Intern Research Conference)
- PhD Career Paths (Academic v. Non-academic) (7/2019, Google PhD Fellowship Summit)
- Machine Learning Security: Adversarial Examples (7/2019, Stanford) lecture
A complete list of talks I previously gave is available in my CV.
Recorded Talks and Blog Posts
These resources give a good overview of my research interests. The following three videos are (left) a lecture I gave in Spring 2019 on security and privacy in machine learning, (middle) an oral presentation I gave on PATE at ICLR 2017, and (right) a talk highlighting our early work on security in machine learning.
Here is a list of blog posts discussing some of the research questions I'm interested in:
- How to know when machine learning does not know
- Machine Learning with Differential Privacy in TensorFlow
- Privacy and machine learning: two unexpected allies?
- The challenge of verification and testing of machine learning
- Is attacking machine learning easier than defending it?
- Breaking things is easy
Students
- Varun Chandrasekaran (visiting PhD, Fall 2019)
- Lucas Bourtoule (MASc, starting Fall 2019)
- Adelin Travers (PhD, starting Fall 2019, co-supervised with David Lie)
- Matthew Jagielski (Google Brain intern, Summer 2019)
Openings
- If you are interested in joining my research group as a postdoc, please send me an email directly with your CV and research statement.
- The Vector Institute is looking for research scientists interested in working on machine learning. Research scientists at Vector can supervise fully funded graduate students from affiliated universities and collaborate freely with other members of the Institute. Please see this post and send me an email if you are interested.