Socially Responsible Machine Learning
Date: July 24, 2021
Location: Virtual Only (co-located with ICML 2021)
Abstract—Machine learning (ML) systems have been increasingly used in many applications, ranging from decision-making systems (e.g., automated resume screening and pretrial release tools) to safety-critical tasks (e.g., financial analytics and autonomous driving). While the hope is that these ML models will improve decision-making accuracy and societal outcomes, concerns have been raised that they can inflict harm if not developed or used with care. It has been well documented that ML models can:
- Inherit pre-existing biases and exhibit discrimination against already-disadvantaged or marginalized social groups.
- Be vulnerable to security and privacy attacks that deceive the models and leak the training data's sensitive information.
- Make hard-to-justify predictions with a lack of transparency.
This workshop aims to build connections by bringing together both theoretical and applied researchers from various communities (e.g., machine learning, fairness & ethics, security, and privacy). It will focus on recent research and future directions for socially responsible machine learning in real-world ML systems, highlighting recent work in this area and clarifying the foundations of socially responsible machine learning.
The tentative schedule is subject to change prior to the workshop.
- Workshop paper submission deadline: 06/10/2021
- Notification to authors: 07/10/2021
- Camera ready deadline: 07/15/2021
Call For Papers
Submission deadline: June 10, 2021 Anywhere on Earth (AoE)
Notification sent to authors: July 10, 2021 Anywhere on Earth (AoE)
Submission server: https://cmt3.research.microsoft.com/ICMLSRML2021/
The workshop will include contributed papers and will be held entirely virtually. We will update the details later.
We invite submissions on any aspect of machine learning that relates to fairness, ethics, transparency, interpretability, security, and privacy. This includes, but is not limited to:
- The intersection between various pillars of trust: fairness, transparency, interpretability, privacy, and robustness.
- State-of-the-art research on trustworthy ML in applications.
- Adopting recent theory to inform practical guidelines for deploying trustworthy ML systems.
- Automatically detecting, verifying, explaining, and mitigating potential biases or privacy problems in existing models.
- Understanding the tradeoffs or costs of achieving different goals in practice.
- The social impacts of machine learning bias.
Submission Format: We welcome submissions of up to 4 pages in ICML Proceedings format (double-blind), excluding references and appendix. Style files and an example paper are available. We allow an unlimited number of pages for references and supplementary material, but reviewers are not required to review the supplementary material. Unless otherwise indicated by the authors, we will provide PDFs of all accepted papers on https://icmlsrml2021.github.io. There will be no archival proceedings. We are using CMT3 to manage submissions.
Senior Organizing Committee