I am an Assistant Professor in the Department of Electrical and Computer Engineering and the Department of Computer Science at the University of Toronto. I am also a faculty member at the Vector Institute where I hold a Canada CIFAR AI Chair, and a faculty affiliate at the Schwartz Reisman Institute.
My research interests are at the intersection of security, privacy, and machine learning. If you would like to learn more about my research, I recommend reading the blog posts I co-authored on cleverhans.io, for example those about machine unlearning, differentially private ML, or adversarial examples.
I earned my Ph.D. in Computer Science and Engineering at the Pennsylvania State University, working with Prof. Patrick McDaniel and supported by a Google PhD Fellowship. Upon graduating, I spent a year at Google Brain in Úlfar Erlingsson's group.
Email: [email protected]
Office: Pratt 484E
Office hours: will resume in September 2020
Mail/Packages: 10 King's College Road, Room SFB540, Toronto, ON M5S 3G4, Canada
Information for prospective graduate students and postdocs
- If you are interested in joining my research group as a graduate student, apply to the CS or ECE graduate program (for ECE, select the "software systems" field). Unfortunately, I cannot respond to all prospective graduate students, so the best time to contact me is after you have submitted your application.
- If you are interested in joining my research group as a postdoc, please send me an email directly with your CV and research statement.
Here is a list of talks I will be giving. Feel free to reach out if you will be attending one of these events and would like to meet.
- 10/2021 - Techna 2021 Annual Symposium
- 2/2021 - Second AAAI Workshop on Privacy-Preserving Artificial Intelligence
- 1/2021 - MIT
- 1/2021 - Schwartz Reisman Institute
- 11/2020 - EVOKE CASCON 2020 keynote
Recordings can be found below the selected publications list on this page. A complete list of the talks I previously gave is available in my CV.
Current students and postdocs
- Anvith Thudi (Engineering Science student, started Fall 2020)
- Adam Dziedzic (Postdoctoral Fellow, started Fall 2020)
- Mohammad Yaghini (PhD student, started Fall 2020)
- Natalie Dullerud (MS student, started Fall 2020)
- Stephan Rabanser (PhD student, started Fall 2020)
- Jonas Guan (PhD student, started Fall 2020)
- Jiaqi Wang (MASc student, started Fall 2020, co-advised with David Lie)
- Nick Jia (MASc student, started Fall 2020)
- Steven Xia (Undergraduate student, Fall 2020 - Summer 2021)
- Jin Zhou (Engineering Science student, Fall 2020 - Summer 2021)
- Lucy Lu (Engineering Science student, Fall 2020 - Summer 2021)
- Marko Huang (Engineering Science student, Fall 2020 - Summer 2021)
- Gabriel Deza (Engineering Science student, Fall 2020 - Summer 2021)
- Tejumade Afonja (Research Intern, Summer 2020)
- Ilia Shumailov (Visiting PhD student, Summer 2020)
- Mingyue Yang (PhD student, started Winter 2020, co-advised with David Lie)
- Vinith Suriyakumar (MS student, started Fall 2019, co-advised with Marzyeh Ghassemi and Anna Goldenberg)
- Lucas Bourtoule (MASc student, started Fall 2019)
- Adelin Travers (PhD student, started Fall 2019, co-advised with David Lie)
Past students and postdocs
- Milad Nasr (Google Brain Intern, Summer 2020, co-hosted with Nicholas Carlini)
- Gabriel Deza (Research Intern, Summer 2020)
- Lorna Licollari (Research Intern, Summer 2020)
- Pratyush Maini (Research Intern, Summer 2020)
- Yunxiang Zhang (Research Intern, Spring 2020)
- Saina Asani (Research Assistant, Winter 2020 - Summer 2020)
- Laura Zhukas (Undergraduate Student Researcher, Fall 2019)
- Christopher Choquette-Choo (Engineering Science student, Fall 2019 - Summer 2020)
- Nick Jia (Engineering Science student, Fall 2019 - Summer 2020)
- Baiwu Zhang (MEng student, Fall 2019 - Summer 2020)
- Varun Chandrasekaran (Visiting PhD student, Fall 2019)
- Hadi Abdullah (Google Intern, Summer 2019, co-hosted with Damien Octeau)
- Matthew Jagielski (Google Brain intern, Summer 2019)
A complete list of publications is available in my CV.
- Neighbors From Hell: Voltage Attacks Against Deep Learning Accelerators on Multi-Tenant FPGAs. Andrew Boutros, Mathew Hall, Nicolas Papernot, Vaughn Betz. Proceedings of the 2020 International Conference on Field-Programmable Technology. conference
- Tempered Sigmoids for Deep Learning with Differential Privacy. Nicolas Papernot, Abhradeep Thakurta, Shuang Song, Steve Chien, Ulfar Erlingsson. Theory and Practice of Differential Privacy. workshop
- Sponge Examples: Energy-Latency Attacks on Neural Networks. Ilia Shumailov, Yiren Zhao, Daniel Bates, Nicolas Papernot, Robert Mullins, Ross Anderson. preprint
- Entangled Watermarks as a Defense against Model Extraction. Hengrui Jia, Christopher A. Choquette-Choo, Nicolas Papernot. preprint
- Machine Unlearning. Lucas Bourtoule, Varun Chandrasekaran, Christopher Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, Nicolas Papernot. Proceedings of the 42nd IEEE Symposium on Security and Privacy, San Francisco, CA. conference
- SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems. Hadi Abdullah, Kevin Warren, Vincent Bindschaedler, Nicolas Papernot, Patrick Traynor. Proceedings of the 42nd IEEE Symposium on Security and Privacy, San Francisco, CA. conference
- Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations. Florian Tramer, Jens Behrmann, Nicholas Carlini, Nicolas Papernot, Jorn-Henrik Jacobsen. Proceedings of the 37th International Conference on Machine Learning, Vienna, Austria. conference
- On the Robustness of Cooperative Multi-Agent Reinforcement Learning. Jieyu Lin, Kristina Dzeparoska, Sai Qian Zhang, Alberto Leon-Garcia, Nicolas Papernot. Proceedings of the 3rd Deep Learning and Security workshop colocated with the 41st IEEE Symposium on Security and Privacy. workshop
- Thieves of Sesame Street: Model Extraction on BERT-based APIs. Kalpesh Krishna, Gaurav Singh Tomar, Ankur P. Parikh, Nicolas Papernot, Mohit Iyyer. Proceedings of the 8th International Conference on Learning Representations, Addis Ababa, Ethiopia. conference
- High Accuracy and High Fidelity Extraction of Neural Networks. Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, Nicolas Papernot. Proceedings of the 29th USENIX Security Symposium. Boston, MA. conference
- How Relevant Is the Turing Test in the Age of Sophisbots? Dan Boneh, Andrew J. Grotto, Patrick McDaniel, Nicolas Papernot. IEEE Security and Privacy Magazine. invited
- MixMatch: A Holistic Approach to Semi-Supervised Learning. David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, Colin Raffel. Proceedings of the 33rd Conference on Neural Information Processing Systems, Vancouver, Canada. conference
- Analyzing and Improving Representations with the Soft Nearest Neighbor Loss. Nicholas Frosst, Nicolas Papernot, Geoffrey Hinton. Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA. conference
- Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning. Nicolas Papernot and Patrick McDaniel. technical report
- Scalable Private Learning with PATE. Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Ulfar Erlingsson. Proceedings of the 6th International Conference on Learning Representations, Vancouver, Canada. conference
- Ensemble Adversarial Training: Attacks and Defenses. Florian Tramer, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel. Proceedings of the 6th International Conference on Learning Representations, Vancouver, Canada. conference
- Towards the Science of Security and Privacy in Machine Learning. Nicolas Papernot, Patrick McDaniel, Arunesh Sinha, and Michael Wellman. Proceedings of the 3rd IEEE European Symposium on Security and Privacy, London, UK. conference
- Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data. Nicolas Papernot, Martin Abadi, Ulfar Erlingsson, Ian Goodfellow, and Kunal Talwar. Proceedings of the 5th International Conference on Learning Representations, Toulon, France. best paper
- Practical Black-Box Attacks against Machine Learning. Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z.Berkay Celik, and Ananthram Swami. Proceedings of the 2017 ACM Asia Conference on Computer and Communications Security, Abu Dhabi, UAE. conference
- Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples. Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. technical report
- Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks. Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Proceedings of the 37th IEEE Symposium on Security and Privacy, San Jose, CA. conference
- The Limitations of Deep Learning in Adversarial Settings. Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. Proceedings of the 1st IEEE European Symposium on Security and Privacy, Saarbrucken, Germany. conference
Reza Shokri and I put together a list of publications on trustworthy ML here. We selected different sub-topics and key related research papers (as starting points) to help students learn about this research area. Many good papers are being published in this domain, so the list is by no means comprehensive. Papers were selected with the intention of maximizing coverage of the techniques introduced in the literature in as few papers as possible.
These video resources are a good overview of my research interests.
Here is a list of blog posts discussing some of the research questions I'm interested in:
- Teaching Machines to Unlearn
- In Model Extraction, Don’t Just Ask How?: Ask Why?
- How to steal modern NLP systems with gibberish?
- The academic job search for computer scientists in 10 questions
- How to know when machine learning does not know
- Machine Learning with Differential Privacy in TensorFlow
- Privacy and machine learning: two unexpected allies?
- The challenge of verification and testing of machine learning
- Is attacking machine learning easier than defending it?
- Breaking things is easy
- [Fall 2020] ECE421H: Introduction to Machine Learning (see Quercus for course details)
- [Fall 2020] ECE1513H: Introduction to Machine Learning (see Quercus for course details)
- [Winter 2020] ECE1513H: Introduction to Machine Learning (see Quercus for course details)
- [Fall 2019] ECE1784H: Trustworthy Machine Learning