Socially Responsible Machine Learning
Date: July 24, 2021
Location: Virtual Only (co-located with ICML 2021)
- We have sent out decision notifications for all submissions. Please check the list of accepted papers.
Abstract—Machine learning (ML) systems have been increasingly used in many applications, ranging from decision-making systems (e.g., automated resume screening and pretrial release tools) to safety-critical tasks (e.g., financial analytics and autonomous driving). While the hope is to improve decision-making accuracy and societal outcomes with these ML models, concerns have been raised that they can inflict harm if not developed or used with care. It has been well documented that ML models can:
- Inherit pre-existing biases and exhibit discrimination against already-disadvantaged or marginalized social groups.
- Be vulnerable to security and privacy attacks that deceive the models and leak the training data's sensitive information.
- Make hard-to-justify predictions with a lack of transparency.
This workshop aims to build connections by bringing together both theoretical and applied researchers from various communities (e.g., machine learning, fairness & ethics, security, privacy, etc.). It will focus on recent research and future directions for socially responsible machine learning problems in real-world machine learning systems. We aim to highlight recent work in this area as well as to clarify the foundations of socially responsible machine learning.
The tentative schedule is subject to change prior to the workshop. Time Zone: US Eastern Time Zone (UTC-05:00)
Accepted papers can be found here.
|8:45 - 9:00||Anima Anandkumar-Opening Remarks|
|9:00 - 9:40||Invited Talk: Jun Zhu: Understand and Benchmark Adversarial Robustness of Deep Learning|
|9:40 - 10:20||Invited Talk: Olga Russakovsky: Revealing, Quantifying, Analyzing and Mitigating Bias in Visual Recognition|
|10:20 - 11:00||Invited Talk: Pin-Yu Chen: Adversarial Machine Learning for Good|
|11:10 - 11:50||Invited Talk: Tatsunori Hashimoto: Not All Uncertainty is Noise: Machine Learning with Confounders and Inherent Disagreements|
|11:50 - 12:30||Invited Talk: Nicolas Papernot: What Does it Mean for ML to be Trustworthy?|
|13:30 - 13:50||Contributed Talk: Machine Learning API Shift Assessments: Change is Coming!|
|13:50 - 14:30||Invited Talk: Aaron Roth: Better Estimates of Prediction Uncertainty|
|14:30 - 15:10||Invited Talk: Jun-Yan Zhu: Understanding and Rewriting GANs|
|15:20 - 16:00||Invited Talk: Kai-Wei Chang: Societal Bias in Language Generation|
|16:00 - 16:40||Invited Talk: Yulia Tsvetkov: Proactive NLP: How to Prevent Social and Ethical Problems in NLP Systems?|
|16:40 - 17:00||Contributed Talk: Do Humans Trust Advice More if it Comes from AI? An Analysis of Human-AI Interactions|
|17:00 - 17:20||Contributed Talk: FERMI: Fair Empirical Risk Minimization Via Exponential Rényi Mutual Information|
|17:20 - 17:40||Contributed Talk: Auditing AI models for Verified Deployment under Semantic Specifications|
|18:00 - 19:00||Poster Sessions at Gathertown|
Important Dates
- Workshop paper submission deadline: 06/10/2021
- Notification to authors: 07/10/2021
- Camera ready deadline: 07/15/2021
Call For Papers
Submission deadline: June 10, 2021 Anywhere on Earth (AoE)
Notification sent to authors: July 10, 2021 Anywhere on Earth (AoE)
Submission server: https://cmt3.research.microsoft.com/ICMLSRML2021/
The workshop will include contributed papers and will be held entirely virtually. Further details will be posted as they become available.
We invite submissions on any aspect of machine learning that relates to fairness, ethics, transparency, interpretability, security, and privacy. This includes, but is not limited to:
- The intersection between various pillars of trust: fairness, transparency, interpretability, privacy, and robustness
- State-of-the-art research on trustworthy ML in applications
- Adopting recent theory to inform practical guidelines for deploying trustworthy ML systems
- Insights into how we can automatically detect, verify, explain, and mitigate potential biases or privacy problems in existing models
- Understanding the tradeoffs or costs of achieving different goals in practice
- Explaining the social impacts of machine learning bias
Submission Format: We welcome submissions of up to 4 pages in ICML Proceedings format (double-blind), excluding references and appendix. Style files and an example paper are available. We allow an unlimited number of pages for references and supplementary material, but reviewers are not required to review the supplementary material. Unless otherwise indicated by the authors, we will provide PDFs of all accepted papers on https://icmlsrml2021.github.io. There will be no archival proceedings. We are using CMT3 to manage submissions.