I am an Assistant Professor in the Department of Electrical and Computer Engineering and the Department of Computer Science at the University of Toronto. I am also a faculty member at the Vector Institute where I hold a Canada CIFAR AI Chair.
My research interests are at the intersection of security, privacy, and machine learning. If you would like to learn more about my research, I recommend reading the blog posts I co-authored on cleverhans.io, for example those on differentially private ML or adversarial examples.
I earned my Ph.D. in Computer Science and Engineering at the Pennsylvania State University, working with Prof. Patrick McDaniel and supported by a Google PhD Fellowship. Upon graduating, I spent a year at Google Brain in Úlfar Erlingsson's group.
Email: [email protected]
Office: Pratt 484E
Office hours: Zoom meeting room (Wednesdays only, 1:30-2:30pm, until April 8, 2020)
Mail/Packages: 10 King's College Road, Room SFB540, Toronto, ON M5S 3G4, Canada
I am chairing a workshop at ICLR 2020 on Trustworthy ML; consider submitting your work. The workshop is co-organized with Carmela Troncoso, Florian Tramer (co-chair), Nicholas Carlini, and Shibani Santurkar.
- If you are interested in joining my research group as a graduate student, please fill out the following form and apply to the CS or ECE (select the "software systems" field) program. Unfortunately, I cannot respond to all prospective graduate students, but filling out the form ensures that I keep a record of your application.
- If you are interested in joining my research group as a postdoc, please send me an email directly with your CV and research statement.
Here is a list of talks I will be giving. Feel free to reach out if you will be attending one of these events and would like to meet.
- 6/2020 - Keynote at RAISA3 (European Conference on AI)
- 3/2020 - Lecture at Carnegie Mellon University
A complete list of talks I previously gave is available in my CV.
- Yunxiang Zhang (Research Intern, started Winter 2020)
- Mingyue Yang (PhD student, started Winter 2020, co-supervised with David Lie)
- Saina Asani (Research Assistant, started Winter 2020)
- Christopher Choquette-Choo (Engineering Science Thesis, started Fall 2019)
- Nick Jia (Engineering Science Thesis, started Fall 2019)
- Baiwu Zhang (MEng, started Fall 2019)
- Lucas Bourtoule (MASc, started Fall 2019)
- Adelin Travers (PhD, started Fall 2019, co-supervised with David Lie)
- Laura Zhukas (Undergraduate Student Researcher, Fall 2019)
- Varun Chandrasekaran (visiting PhD student, Fall 2019)
- Hadi Abdullah (Google intern, Summer 2019, co-hosted with Damien Octeau)
- Matthew Jagielski (Google Brain intern, Summer 2019)
A complete list of publications is available in my CV.
- On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping. Sanghyun Hong, Varun Chandrasekaran, Yigitcan Kaya, Tudor Dumitras, Nicolas Papernot. preprint
- Entangled Watermarks as a Defense against Model Extraction. Hengrui Jia, Christopher A. Choquette-Choo, Nicolas Papernot. preprint
- On the Robustness of Cooperative Multi-Agent Reinforcement Learning. Jieyu Lin, Kristina Dzeparoska, Sai Qian Zhang, Alberto Leon-Garcia, Nicolas Papernot. Proceedings of the 3rd Deep Learning and Security workshop, co-located with the 41st IEEE Symposium on Security and Privacy. workshop
- Machine Unlearning. Lucas Bourtoule, Varun Chandrasekaran, Christopher Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, Nicolas Papernot. preprint
- Making the Shoe Fit: Architectures, Initializations, and Tuning for Learning with Privacy. Nicolas Papernot, Steve Chien, Shuang Song, Abhradeep Thakurta, Ulfar Erlingsson. preprint
- Thieves of Sesame Street: Model Extraction on BERT-based APIs. Kalpesh Krishna, Gaurav Singh Tomar, Ankur P. Parikh, Nicolas Papernot, Mohit Iyyer. Proceedings of the 8th International Conference on Learning Representations, Addis Ababa, Ethiopia. conference
- High Accuracy and High Fidelity Extraction of Neural Networks. Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, Nicolas Papernot. Proceedings of the 29th USENIX Security Symposium. Boston, MA. conference
- How Relevant Is the Turing Test in the Age of Sophisbots? Dan Boneh, Andrew J. Grotto, Patrick McDaniel, Nicolas Papernot. IEEE Security and Privacy Magazine. invited
- MixMatch: A Holistic Approach to Semi-Supervised Learning. David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, Colin Raffel. Proceedings of the 33rd Conference on Neural Information Processing Systems, Vancouver, Canada. conference
- Analyzing and Improving Representations with the Soft Nearest Neighbor Loss. Nicholas Frosst, Nicolas Papernot, Geoffrey Hinton. Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA. conference
- Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning. Nicolas Papernot and Patrick McDaniel. preprint
- Scalable Private Learning with PATE. Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Ulfar Erlingsson. Proceedings of the 6th International Conference on Learning Representations, Vancouver, Canada. conference
- Ensemble Adversarial Training: Attacks and Defenses. Florian Tramer, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel. Proceedings of the 6th International Conference on Learning Representations, Vancouver, Canada. conference
- Towards the Science of Security and Privacy in Machine Learning. Nicolas Papernot, Patrick McDaniel, Arunesh Sinha, and Michael Wellman. Proceedings of the 3rd IEEE European Symposium on Security and Privacy, London, UK. conference
- Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data. Nicolas Papernot, Martin Abadi, Ulfar Erlingsson, Ian Goodfellow, and Kunal Talwar. Proceedings of the 5th International Conference on Learning Representations, Toulon, France. best paper
- Practical Black-Box Attacks against Machine Learning. Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. Proceedings of the 2017 ACM Asia Conference on Computer and Communications Security, Abu Dhabi, UAE. conference
- Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples. Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. technical report
- Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks. Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Proceedings of the 37th IEEE Symposium on Security and Privacy, San Jose, CA. conference
- The Limitations of Deep Learning in Adversarial Settings. Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. Proceedings of the 1st IEEE European Symposium on Security and Privacy, Saarbrücken, Germany. conference
Recorded Talks and Blog Posts
These resources give a good overview of my research interests. The following three videos are (left) a lecture I gave in Spring 2019 on security and privacy in machine learning, (middle) an oral presentation I gave on PATE at ICLR 2017, and (right) a talk that highlights our early work on security in machine learning.
Here is a list of blog posts discussing some of the research questions I'm interested in:
- The academic job search for computer scientists in 10 questions
- How to know when machine learning does not know
- Machine Learning with Differential Privacy in TensorFlow
- Privacy and machine learning: two unexpected allies?
- The challenge of verification and testing of machine learning
- Is attacking machine learning easier than defending it?
- Breaking things is easy