UWisconsin CS 763: Security and Privacy in Data Science (Previously CS 839: Topics in Security and Privacy)

Calendar (tentative)

Date Topic Presenters Summarizers Notes

Differential Privacy

9/4 Course welcome
Reading: How to Read a Paper
JH -
9/6 Basic private mechanisms
Reading: AFDP 3.2-4
JH -
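As a quick illustration of the day's topic (not part of the assigned reading), here is a minimal sketch of the Laplace mechanism from AFDP, using NumPy; the function name and parameters are illustrative.

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
    """Release true_answer plus Laplace noise with scale sensitivity/epsilon,
    which satisfies epsilon-differential privacy for a query whose L1
    sensitivity (max change from adding/removing one record) is `sensitivity`."""
    rng = rng or np.random.default_rng()
    return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Counting query: one person changes the count by at most 1, so sensitivity = 1.
noisy_count = laplace_mechanism(true_answer=1234, sensitivity=1.0, epsilon=0.5)
```

Smaller epsilon means a stronger privacy guarantee but larger noise, since the scale is sensitivity/epsilon.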
9/9 Composition and closure properties
Reading: AFDP 3.5
JH - Paper Signups
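To make the basic composition bookkeeping from AFDP 3.5 concrete (illustrative helper names, not from the readings):

```python
def sequential_budget(epsilons):
    # Sequential composition: each mechanism queries the same data,
    # so the epsilon costs add up.
    return sum(epsilons)

def parallel_budget(epsilons):
    # Parallel composition: each mechanism queries a disjoint partition
    # of the data, so the total cost is the worst single cost.
    return max(epsilons)
```

For example, ten queries at epsilon 0.1 each cost epsilon 1.0 in total when run on the same dataset, but only 0.1 when each query touches a different partition.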
9/11 What does differential privacy actually mean?
Reading: Lunchtime for Differential Privacy
JH -
9/13 Differentially private machine learning
Reading: On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches
Reading: Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data
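The gradient-perturbation idea behind differentially private training (one of the two approaches surveyed in the first reading) can be sketched in a few lines of NumPy; this is a simplified illustration, not the papers' implementation, and omits the privacy accounting.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    """One noisy-SGD step: clip each example's gradient to clip_norm,
    average, add calibrated Gaussian noise, then take a gradient step."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    return params - lr * (avg + rng.normal(0.0, sigma, size=avg.shape))
```

Clipping bounds each example's influence on the update, which is what makes the added noise sufficient for a differential privacy guarantee.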

Adversarial Machine Learning

9/16 Overview and basic concepts JH -
9/18 Adversarial examples
Reading: Intriguing Properties of Neural Networks
Reading: Explaining and Harnessing Adversarial Examples
Reading: Robust Physical-World Attacks on Deep Learning Models
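The attack at the core of the second reading, the Fast Gradient Sign Method, is short enough to sketch here (illustrative NumPy version; a real attack would take the gradient from a trained model):

```python
import numpy as np

def fgsm(x, grad_wrt_x, epsilon):
    """Fast Gradient Sign Method: move each input coordinate by epsilon in
    the direction that increases the loss, then clip back to valid range."""
    return np.clip(x + epsilon * np.sign(grad_wrt_x), 0.0, 1.0)
```

The perturbation is bounded by epsilon in the L-infinity norm, which is why small values can be imperceptible yet change a model's prediction.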
9/20 Data poisoning
Reading: Poisoning Attacks against Support Vector Machines
9/23 Defenses and detection: challenges
Reading: Towards Evaluating the Robustness of Neural Networks
Reading: Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
JH -
9/25 Certified defenses
Reading: Certified Defenses for Data Poisoning Attacks
Reading: Certified Defenses against Adversarial Examples
9/27 Adversarial training
Reading: Towards Deep Learning Models Resistant to Adversarial Attacks
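The inner maximization used for adversarial training in this reading, projected gradient descent, can be sketched as follows (illustrative NumPy version; `grad_fn` stands in for the loss gradient of a real model):

```python
import numpy as np

def pgd_attack(x, grad_fn, epsilon, alpha, steps, rng=None):
    """Projected gradient descent attack: iterated signed gradient steps,
    projected back into the L-inf ball of radius epsilon around x."""
    rng = rng or np.random.default_rng()
    x_adv = x + rng.uniform(-epsilon, epsilon, size=x.shape)  # random start
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)  # project to the ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                  # keep inputs valid
    return x_adv
```

Adversarial training then minimizes the loss on these worst-case points instead of (or alongside) the clean inputs.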

Applied Cryptography

9/30 Overview and basic constructions JH -
10/2 SMC for machine learning
Reading: Secure Computation for Machine Learning With SPDZ
Reading: Helen: Maliciously Secure Coopetitive Learning for Linear Models
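A building block underlying both readings is additive secret sharing; here is a minimal sketch (illustrative modulus and function names, not the papers' protocols):

```python
import secrets

P = 2**61 - 1  # public prime modulus (an illustrative choice)

def share(value, n_parties):
    """Split value into n additive shares mod P; any n-1 shares together
    are uniformly random and reveal nothing about value."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P
```

Addition is "free" in this scheme: parties add their local shares of two secrets and the sums reconstruct to the sum of the secrets; multiplication is what requires interaction (e.g., SPDZ's preprocessed triples).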
10/4 Secure data collection at scale
Reading: Prio: Private, Robust, and Scalable Computation of Aggregate Statistics
10/7 Verifiable computing
Reading: SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud
JH -
10/9 Side channels and implementation issues
Reading: On Significance of the Least Significant Bits For Differential Privacy
10/11 Model watermarking
Reading: Protecting Intellectual Property of Deep Neural Networks with Watermarking
Reading: Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring
MS1 Due

Algorithmic Fairness

10/14 Overview and basic notions
Reading: Chapter 2 from Barocas, Hardt, and Narayanan
JH -
10/16 Individual and group fairness
Reading: Fairness through Awareness
Reading: Equality of Opportunity in Supervised Learning
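Two of the group fairness criteria from these readings reduce to simple rate comparisons; a minimal sketch (illustrative function names, binary groups and labels assumed):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Worst-case gap in true/false positive rates between the two groups,
    the quantity that equalized odds requires to be zero."""
    y_true = np.asarray(y_true)
    return max(demographic_parity_gap(np.asarray(y_pred)[y_true == y],
                                      np.asarray(group)[y_true == y])
               for y in (0, 1))
```

Note that a perfect predictor always satisfies equalized odds but can violate demographic parity whenever base rates differ between groups, which is one motivation for conditioning on the true label.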
10/18 Inherent tradeoffs
Reading: Inherent Trade-Offs in the Fair Determination of Risk Scores
10/21 Defining fairness: challenges
Reading: 50 Years of Test (Un)fairness: Lessons for Machine Learning
JH -
10/23 Fairness in unsupervised learning
Reading: Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings
Reading: Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints
10/25 Beyond observational measures
Reading: Avoiding Discrimination through Causal Reasoning
Reading: Counterfactual Fairness

PL and Verification

10/28 Overview and basic notions JH -
10/30 Probabilistic programming languages
Reading: Probabilistic Programming
11/1 Automata learning and interpretability
Reading: Model Learning
Reading: Interpreting Finite Automata for Sequential Data
11/4 Programming languages for differential privacy
Reading: Programming Language Techniques for Differential Privacy
JH -
11/6 Verifying neural networks
Reading: AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation
Reading: DL2: Training and Querying Neural Networks with Logic
11/8 Verifying probabilistic programs
Reading: Advances and Challenges of Probabilistic Model Checking
Reading: A Program Logic for Union Bounds
MS2 Due

No Lectures: Work on Projects

12/11 (TBD) Project Presentations