Welcome! I am an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Toronto and a Canada CIFAR AI Chair at the Vector Institute. My research interests are at the intersection of security, privacy, and machine learning. If you would like to learn more about my research, I recommend reading the blog posts I co-authored on cleverhans.io, for example about differentially private ML or adversarial examples.
I earned my Ph.D. in Computer Science and Engineering at the Pennsylvania State University, working with Prof. Patrick McDaniel and supported by a Google PhD Fellowship. Upon graduating, I spent a year at Google Brain in Úlfar Erlingsson's group.
Email: [email protected]
Office: Pratt 484E (office hours: Wednesdays 1:30-3:30pm)
Mail/Packages: 10 King's College Road Room SFB540, Toronto, ON M5S 3G4, Canada
- If you are interested in joining my research group as a graduate student, please fill out the following form and apply to the ECE or CS program. Unfortunately, I cannot respond to all prospective graduate students, but filling out the form ensures that I keep a record of your application.
- If you are interested in joining my research group as a postdoc, please send me an email directly with your CV and research statement.
- The Vector Institute is looking for research scientists interested in machine learning. See this post and send me an email if you are interested.
Here is a list of talks I will be giving. Feel free to reach out if you will be attending one of these events and would like to meet.
- 11/2019 - TensorFlow Roadshow Paris
- 10/2019 - Columbia University
- 10/2019 - Fields Institute
- 10/2019 - Cybersecurity AI Prague
- 10/2019 - France is AI 2019
- 10/2019 - Princeton University
- 10/2019 - University of British Columbia
- 9/2019 - IBM AI week security symposium
A complete list of talks I previously gave is available in my CV.
- Varun Chandrasekaran (visiting PhD, Fall 2019)
- Lucas Bourtoule (MASc, starting Fall 2019)
- Adelin Travers (PhD, starting Fall 2019, co-supervised with David Lie)
- Matthew Jagielski (Google Brain intern, Summer 2019)
A complete list of publications is available in my CV.
- High-Fidelity Extraction of Neural Network Models. Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, Nicolas Papernot. preprint
- How Relevant Is the Turing Test in the Age of Sophisbots? Dan Boneh, Andrew J. Grotto, Patrick McDaniel, Nicolas Papernot. To appear in IEEE Security and Privacy Magazine. invited
- MixMatch: A Holistic Approach to Semi-Supervised Learning. David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, Colin Raffel. Proceedings of the 33rd Conference on Neural Information Processing Systems, Vancouver, Canada. conference
- Exploiting Excessive Invariance Caused by Norm-Bounded Adversarial Robustness. Jörn-Henrik Jacobsen, Jens Behrmann, Nicholas Carlini, Florian Tramèr, Nicolas Papernot. Presented at the ICLR 2019 workshop on Safe ML, New Orleans, Louisiana. workshop
- Analyzing and Improving Representations with the Soft Nearest Neighbor Loss. Nicholas Frosst, Nicolas Papernot, Geoffrey Hinton. Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA. conference
- A Marauder's Map of Security and Privacy in Machine Learning: An overview of current and future research directions for making machine learning secure and private. Nicolas Papernot. Keynote at the 11th ACM Workshop on Artificial Intelligence and Security colocated with the 25th ACM Conference on Computer and Communications Security, Toronto, Canada. invited
- Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning. Nicolas Papernot and Patrick McDaniel. preprint
- Scalable Private Learning with PATE. Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Úlfar Erlingsson. Proceedings of the 6th International Conference on Learning Representations, Vancouver, Canada. conference
- Ensemble Adversarial Training: Attacks and Defenses. Florian Tramer, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel. Proceedings of the 6th International Conference on Learning Representations, Vancouver, Canada. conference
- Towards the Science of Security and Privacy in Machine Learning. Nicolas Papernot, Patrick McDaniel, Arunesh Sinha, and Michael Wellman. Proceedings of the 3rd IEEE European Symposium on Security and Privacy, London, UK. conference
- Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data. Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, and Kunal Talwar. Proceedings of the 5th International Conference on Learning Representations, Toulon, France. best paper
- Practical Black-Box Attacks against Machine Learning. Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z.Berkay Celik, and Ananthram Swami. Proceedings of the 2017 ACM Asia Conference on Computer and Communications Security, Abu Dhabi, UAE. conference
- Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples. Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. technical report
- Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks. Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Proceedings of the 37th IEEE Symposium on Security and Privacy, San Jose, CA. conference
- The Limitations of Deep Learning in Adversarial Settings. Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. Proceedings of the 1st IEEE European Symposium on Security and Privacy, Saarbrücken, Germany. conference
Recorded Talks and Blog Posts
These resources provide a good overview of my research interests. The following three videos are (left) a lecture I gave in Spring 2019 on security and privacy in machine learning, (middle) an oral presentation I gave on PATE at ICLR 2017, and (right) a talk that highlights our early work on security in machine learning.
Here is a list of blog posts discussing some of the research questions I'm interested in:
- The academic job search for computer scientists in 10 questions
- How to know when machine learning does not know
- Machine Learning with Differential Privacy in TensorFlow
- Privacy and machine learning: two unexpected allies?
- The challenge of verification and testing of machine learning
- Is attacking machine learning easier than defending it?
- Breaking things is easy
- [Fall 2019] ECE1784H: Trustworthy Machine Learning