ECE1784H/CSC2559H: Trustworthy Machine Learning

Overview

Description

The deployment of machine learning in real-world systems calls for a set of complementary technologies that ensure machine learning is trustworthy. Here, the notion of trust is used in its broad sense: the course covers topics in emerging research areas related to the broader study of security and privacy in machine learning. Students will learn about attacks against computer systems that leverage machine learning, as well as defense techniques to mitigate such attacks.

The course assumes students already have a basic understanding of machine learning. Students will familiarize themselves with the emerging body of literature from different research communities investigating these questions. The class is designed to help students explore new research directions and applications. Most of the course readings will come from both seminal and recent papers in the field. No textbooks are required for this class. Notes and slides, as well as research papers, will make up the material used in this course. Links to these will be provided in the schedule below.

How does the course work?

In the following, the notation d is used to refer to the day of the lecture (a Tuesday). During a typical lecture, time will be allocated as follows: the presenting team first introduces the weekly theme (about 10 minutes), then covers the papers assigned for reading (about 75 minutes); the remaining time is devoted to a class discussion bootstrapped by the reading questions submitted by students.

Schedule and material

Below is the calendar for this semester's course. The schedule is preliminary and will be altered as the semester progresses. I will attempt to announce any changes to the class, but this webpage should be viewed as authoritative. If you have any questions, please contact me.

# Date Topic Slides Reading / Assignment
1 Sep 13 Overview & motivation slides Reading:
  1. Saltzer and Schroeder, The Protection of Information in Computer Systems.
2 Sep 20 Data privacy TBA Reading:
  1. Narayanan and Shmatikov, Robust De-anonymization of Large Sparse Datasets.
  2. Abadi et al., Deep Learning with Differential Privacy.
  3. Choquette-Choo et al., Label-Only Membership Inference Attacks.
3 Sep 27 Unlearning TBA Reading:
  1. Song and Shmatikov, Overlearning Reveals Sensitive Attributes.
  2. Bourtoule et al., Machine Unlearning.
  3. Thudi et al., On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning.
4 Oct 4 Distribution shifts and uncertainty TBA Reading:
  1. Rabanser et al., Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift.
  2. Minderer et al., Revisiting the Calibration of Modern Neural Networks.
  3. Ziyin et al., Deep Gamblers: Learning to Abstain with Portfolio Theory.
- Oct 11 Research project problem statement due by beginning of class
5 Oct 11 Model stealing TBA Reading:
  1. Tramer et al., Stealing Machine Learning Models via Prediction APIs.
  2. Jia et al., Entangled Watermarks as a Defense against Model Extraction.
  3. Maini et al., Dataset Inference: Ownership Resolution in Machine Learning.
6 Oct 18 Adversarial examples TBA Reading:
  1. Szegedy et al., Intriguing properties of neural networks.
  2. Papernot et al., Practical Black-Box Attacks against Machine Learning.
  3. Cohen et al., Certified Adversarial Robustness via Randomized Smoothing.
7 Oct 25 Presentation of problem statement for research project
8 Nov 1 Availability TBA Reading:
  1. Rakin et al., Bit-Flip Attack: Crushing Neural Network with Progressive Bit Search.
  2. Shumailov et al., Sponge Examples: Energy-Latency Attacks on Neural Networks.
  3. Shumailov et al., Manipulating SGD with Data Ordering Attacks.
- Nov 8 Reading Week
9 Nov 15 Verification in ML TBA Reading:
  1. Ohrimenko et al., Oblivious Multi-Party Machine Learning on Trusted Processors.
  2. Juvekar et al., GAZELLE: A Low Latency Framework for Secure Neural Network Inference.
  3. Jia et al., Proof-of-Learning: Definitions and Practice.
10 Nov 22 Fairness TBA Reading:
  1. Dwork et al., Fairness Through Awareness.
  2. Zemel et al., Learning Fair Representations.
  3. Hardt et al., Equality of Opportunity in Supervised Learning.
11 Nov 29 Interpretability TBA Reading:
  1. Zhang et al., Understanding deep learning requires rethinking generalization.
  2. Koh and Liang, Understanding Black-box Predictions via Influence Functions.
  3. Rudin, Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.
12 Dec 06 Research project poster session

Deliverables

Paper presentation: starting from week 2, a team of students will present the papers assigned for reading each week. To find out when you are presenting, go here. The presenting team may choose an appropriate format for the presentation (e.g., including interactive demos or code tutorials), with the only requirements being that the presentation should (a) introduce the weekly theme in 10 minutes and (b) cover all papers assigned in 75 minutes. All presentations should be prepared in Google Slides to enable easy commenting by the entire class. A link to the draft of the presentation should be submitted as a note to the instructors through Piazza 14 days before your lecture. The teaching team will then go through the presentation and iterate with the presenting team. One week (7 days) before the lecture, the slide deck will be released to the entire class, whose members will comment on it as they read the papers (see below for more details). During the paper presentation, the instructor will use this rubric to grade presentations. Importantly, presenters will be graded individually on their ability to engage the class in discussion. After the presentation, the presenting team will have 3 days to finalize the slide deck, incorporating any remaining comments made by the class during the lecture, before the presentation's final grade is assigned.

Weekly slide deck commenting: each week, each student is expected to submit at least one comment on the slide deck prepared by the presenting team. After submitting your comment, please submit a link to it on Quercus to ensure you receive the corresponding participation points. Of course, you are encouraged to engage more broadly with the presenting team by leaving more than one comment; this is what makes this course interesting. A link to your comment should be submitted to Quercus before 5pm on the Friday preceding the lecture.

Weekly reading questions: each week, each student is expected to submit one question for each of the 3 papers assigned for reading. These questions will be used to bootstrap the discussion after the presentation. Your questions should be submitted to Quercus before 5pm on the Friday preceding the lecture. A rubric offering guidance on how to ask informative questions can be found here.

Research projects: students will work on a course-long research project. Each project will be presented in the form of a poster on Dec 6. In addition to the poster, note that a problem statement is due October 11 by the beginning of class. Students are encouraged to consult the teaching team regularly to ensure progress is made throughout the semester. A rubric offering guidance on how the teaching team will evaluate research projects can be found here.

Grading

Grading scheme: 15% weekly reading questions, 20% participation (slide deck commenting and in-class discussion), 30% paper presentation, 35% research project.

Class participation: course lectures will be driven by the contents of the assigned papers. However, students are required to (i) turn in 3 questions (1 per paper) each week, (ii) participate in discussions of the paper content during each class, and (iii) comment on the presentation slide deck. Hence, students' ability to exhibit comprehension of the papers is essential to a passing grade.

Lateness policy: slide deck comments and question submissions assigned each week will not be accepted late (students will be assigned a 0 for that week). All other assignments (i.e., presentation slides and project reports) will be assessed a 10% per-day late penalty, up to a maximum of 2 days, because the class depends on presentations being submitted on time for the lecture to go well. Students with legitimate reasons who contact the professor before the deadline may apply for an extension.

Integrity: any instance of sharing, plagiarism, copying, cheating, or other disallowed behavior will constitute a breach of ethics. Students are responsible for reporting any violation of these rules by other students; failure to do so constitutes an ethical violation that carries similar penalties.

Ethics statement

This course covers topics in personal and public privacy and security. As part of this investigation, we will explore technologies whose abuse may infringe on the rights of others. As an instructor, I rely on the ethical use of these technologies. Unethical use may include the circumvention of existing security or privacy measures for any purpose, or the dissemination, promotion, or exploitation of vulnerabilities in these services. Exceptions to these guidelines may occur in the process of reporting vulnerabilities through public and authoritative channels. Any activity outside the letter or spirit of these guidelines will be reported to the proper authorities and may result in dismissal from the class. When in doubt, please contact the course professor for advice. Do not undertake any action that could be perceived as technology misuse, anywhere and under any circumstances, unless you have received explicit permission from the instructor.

Land Acknowledgement

We wish to acknowledge this land on which the University of Toronto operates. For thousands of years it has been the traditional land of the Huron-Wendat, the Seneca, and most recently, the Mississaugas of the Credit River. Today, this meeting place is still the home to many Indigenous people from across Turtle Island and we are grateful to have the opportunity to work on this land.