ECE1784H/CSC2559H: Trustworthy Machine Learning

Overview

Description

The deployment of machine learning in real-world systems calls for a set of complementary technologies that will ensure that machine learning is trustworthy. Here, the notion of trust is used in its broad meaning: the course covers different topics in emerging research areas related to the broader study of security and privacy in machine learning. Students will learn about attacks against computer systems leveraging machine learning, as well as defense techniques to mitigate such attacks.

The course assumes students already have a basic understanding of machine learning. Students will familiarize themselves with the emerging body of literature from different research communities investigating these questions. The class is designed to help students explore new research directions and applications. Most of the course readings will come from both seminal and recent papers in the field. No textbooks are required for this class. Notes and slides, as well as research papers, will make up the material used in this course. Links to these will be provided in the schedule below.

How does the course work?

In the following, the notation d is used to refer to the day of the lecture (a Tuesday). During a typical lecture, time will be allocated as follows:

Schedule and material

Below is the calendar for this semester. This is a preliminary schedule, which will be altered as the semester progresses. I will attempt to announce any changes in class, but this webpage should be viewed as authoritative. If you have any questions, please contact me.

# Date Topic Slides Reading / Assignment
1 Sep 14 Overview & motivation Reading:
  1. Saltzer and Schroeder, The Protection of Information in Computer Systems.
2 Sep 21 Poisoning Reading:
  1. Rubinstein et al., ANTIDOTE: Understanding and Defending against Poisoning of Anomaly Detectors.
  2. Jagielski et al., Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning.
  3. Diakonikolas et al., Sever: A Robust Meta-Algorithm for Stochastic Optimization.
3 Sep 28 Adversarial examples Reading:
  1. Szegedy et al., Intriguing properties of neural networks.
  2. Papernot et al., Practical Black-Box Attacks against Machine Learning.
  3. Cohen et al., Certified Adversarial Robustness via Randomized Smoothing.
4 Oct 5 Availability Reading:
  1. Rakin et al., Bit-Flip Attack: Crushing Neural Network with Progressive Bit Search.
  2. Shumailov et al., Sponge Examples: Energy-Latency Attacks on Neural Networks.
  3. Shumailov et al., Manipulating SGD with Data Ordering Attacks.
- Oct 8 Research project problem statement due
5 Oct 12 Model stealing Reading:
  1. Tramer et al., Stealing Machine Learning Models via Prediction APIs.
  2. Jia et al., Entangled Watermarks as a Defense against Model Extraction.
  3. Maini et al., Dataset Inference: Ownership Resolution in Machine Learning.
6 Oct 19 Verification in ML Reading:
  1. Ohrimenko et al., Oblivious Multi-Party Machine Learning on Trusted Processors.
  2. Juvekar et al., GAZELLE: A Low Latency Framework for Secure Neural Network Inference.
  3. Jia et al., Proof-of-Learning: Definitions and Practice.
7 Oct 26 Presentation of problem statement for research project
8 Nov 2 Data privacy Reading:
  1. Narayanan and Shmatikov, Robust De-anonymization of Large Sparse Datasets.
  2. Abadi et al., Deep Learning with Differential Privacy.
  3. Choquette-Choo et al., Label-Only Membership Inference Attacks.
- Nov 9 Reading Week
9 Nov 16 Unlearning Reading:
  1. Song and Shmatikov, Overlearning Reveals Sensitive Attributes.
  2. Bourtoule et al., Machine Unlearning.
  3. Gupta et al., Adaptive Machine Unlearning.
10 Nov 23 Fairness Reading:
  1. Dwork et al., Fairness Through Awareness.
  2. Zemel et al., Learning Fair Representations.
  3. Hardt et al., Equality of Opportunity in Supervised Learning.
11 Nov 30 Interpretability Reading:
  1. Zhang et al., Understanding deep learning requires rethinking generalization.
  2. Koh and Liang, Understanding Black-box Predictions via Influence Functions.
  3. Rudin, Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.
12 Dec 07 Research project poster session

Deliverables

Paper presentation: starting from week 2, a team of students will present the papers assigned for reading each week. To find out when you are presenting, go here. The presenting team may choose an appropriate format for this presentation (e.g., including interactive demos or code tutorials), with the only requirements being that the presentation should (a) introduce the weekly theme in 10 minutes and (b) cover all papers assigned in 40 minutes. All presentations should be prepared in Google Slides to enable easy commenting by the entire class. A link to the draft of the presentation should be submitted as a note to the instructors through Piazza 14 days before your lecture time. The teaching team will then go through the presentation and iterate with the presenting team. One week (7 days) before the lecture, the slide deck will be released to the entire class, who will comment on it as they read the papers (see below for more details). During the paper presentation, the instructor will use the following rubric to grade presentations. Importantly, presenters will be graded individually on their ability to engage the class in discussion. After the presentation, the presenting team will have 3 days to finalize the slide deck to incorporate any remaining comments made by the class during the lecture, before the presentation's final grade is assigned.

Weekly slide deck commenting: each week, each student is expected to submit at least one comment on the slide deck prepared by the presenting team. After submitting your comment, please submit the link to the comment to Quercus to ensure you receive the corresponding participation points. Of course, you are encouraged to engage more broadly with the presenting team by leaving more than one comment; this is what makes this course interesting. A link to your comment should be submitted to Quercus before midnight on the Monday preceding the lecture.

Weekly reading questions: each week, each student is expected to submit one question for each of the 3 papers assigned for reading. These questions will be used to bootstrap the discussion after the presentation. Your questions should be submitted to Quercus before midnight on the Monday preceding the lecture. A rubric offering guidance on how to ask informative questions can be found here.

Research projects: students will work on a course-long research project. Each project will be presented in the form of a poster on Dec 7. In addition to the poster, note that a problem statement is due October 8. Students are encouraged to consult the teaching team regularly to ensure progress is made throughout the semester. A rubric offering guidance on how the teaching team will evaluate research projects can be found here.

Grading

Grading scheme: 15% weekly reading questions, 20% participation (slide deck commenting and in class discussion), 30% paper presentation, 35% research project.

Class participation: Course lectures will be driven by the contents of assigned papers. However, students are required to (i) turn in 3 questions (1 per paper) each week, (ii) participate in discussions of the paper content during each class, and (iii) comment on the presentation slide deck. Hence, the students' ability to demonstrate comprehension of the papers is essential to a passing grade.

Lateness policy: Slide deck comments and question submissions assigned each week will not be accepted late (students will be assigned a 0 for that week). All other assignments (i.e., presentation slides and project reports) will be assessed a 10% per-day late penalty, up to a maximum of 2 days (because the class depends on presentations being submitted on time to ensure a good experience during the lecture). Students with legitimate reasons who contact the professor before the deadline may apply for an extension.

Integrity: Any instance of sharing, plagiarism, copying, cheating, or other disallowed behavior will constitute a breach of ethics. Students are responsible for reporting any violation of these rules by other students; failure to do so constitutes an ethical violation that carries similar penalties.

Ethics statement

This course covers topics in personal and public privacy and security. As part of this investigation we will explore technologies whose abuse may infringe on the rights of others. As an instructor, I rely on the ethical use of these technologies. Unethical use may include circumvention of existing security or privacy measures for any purpose, or the dissemination, promotion, or exploitation of vulnerabilities in these services. Exceptions to these guidelines may occur in the process of reporting vulnerabilities through public and authoritative channels. Any activity outside the letter or spirit of these guidelines will be reported to the proper authorities and may result in dismissal from the class. When in doubt, please contact the course professor for advice. Do not undertake any action which could be perceived as technology misuse anywhere and/or under any circumstances unless you have received explicit permission from the instructor.

Land Acknowledgement

We wish to acknowledge this land on which the University of Toronto operates. For thousands of years it has been the traditional land of the Huron-Wendat, the Seneca, and most recently, the Mississaugas of the Credit River. Today, this meeting place is still the home to many Indigenous people from across Turtle Island and we are grateful to have the opportunity to work on this land.