Socially Responsible Machine Learning

Date: July 24, 2021

Location: Virtual Only (co-located with ICML 2021)


Abstract—Machine learning (ML) systems are increasingly used in many applications, ranging from decision-making systems (e.g., automated resume screening and pretrial release tools) to safety-critical tasks (e.g., financial analytics and autonomous driving). While the hope is to improve decision-making accuracy and societal outcomes with these ML models, concerns have been raised that they can inflict harm if not developed or used with care. It has been well documented that ML models can cause such harm in practice.

For example, various commercial face recognition products were shown to exhibit racial and gender bias. In domains such as financial analytics and autonomous vehicles, ML models can be easily misled by carefully crafted small perturbations, or even natural perturbations. Therefore, it is essential to build socially responsible machine learning models that are fair, robust, private, transparent, and interpretable.

This workshop aims to build connections by bringing together both theoretical and applied researchers from various communities (e.g., machine learning, fairness & ethics, security, and privacy). It will focus on recent research and future directions for socially responsible machine learning in real-world systems. By gathering experts from these different communities, we aim to highlight recent work in this area and to clarify the foundations of socially responsible machine learning.


The tentative schedule is subject to change prior to the workshop. Time Zone: US Eastern Time (UTC-04:00).

Accepted papers can be found here.

8:45 - 9:00 Opening Remarks: Anima Anandkumar
9:00 - 9:40 Invited Talk: Jun Zhu: Understand and Benchmark Adversarial Robustness of Deep Learning
9:40 - 10:20 Invited Talk: Olga Russakovsky: Revealing, Quantifying, Analyzing and Mitigating Bias in Visual Recognition
10:20 - 11:00 Invited Talk: Pin-Yu Chen: Adversarial Machine Learning for Good
Coffee Break
11:10 - 11:50 Invited Talk: Tatsunori Hashimoto: Not All Uncertainty is Noise: Machine Learning with Confounders and Inherent Disagreements
11:50 - 12:30 Invited Talk: Nicolas Papernot: What Does it Mean for ML to be Trustworthy?
13:30 - 13:50 Contributed Talk: Machine Learning API Shift Assessments: Change is Coming!
13:50 - 14:30 Invited Talk: Aaron Roth: Better Estimates of Prediction Uncertainty
14:30 - 15:10 Invited Talk: Jun-Yan Zhu: Understanding and Rewriting GANs
15:20 - 16:00 Invited Talk: Kai-Wei Chang: Societal Bias in Language Generation
16:00 - 16:40 Invited Talk: Yulia Tsvetkov: Proactive NLP: How to Prevent Social and Ethical Problems in NLP Systems?
16:40 - 17:00 Contributed Talk: Do Humans Trust Advice More if it Comes from AI? An Analysis of Human-AI Interactions
17:00 - 17:20 Contributed Talk: FERMI: Fair Empirical Risk Minimization Via Exponential Rényi Mutual Information
17:20 - 17:40 Contributed Talk: Auditing AI models for Verified Deployment under Semantic Specifications
18:00 - 19:00 Poster Session at Gathertown (link)

Organizing Committee

Chaowei Xiao

Xueru Zhang

Jieyu Zhao

Cihang Xie

Xinyun Chen

Senior Organizing Committee

Anima Anandkumar

Bo Li

Mingyan Liu

Dawn Song

Raquel Urtasun

Program Committee

  • Aishan Liu (Beihang University)
  • Anqi Liu (Caltech)
  • Akshayvarun Subramanya (UMBC)
  • Alexandra Chouldechova (CMU)
  • Aniruddha Saha (University of Maryland Baltimore County)
  • Anshuman Suri (University of Virginia)
  • Bo Ji (Virginia Tech)
  • Boxin Wang (University of Illinois at Urbana-Champaign)
  • Chen Zhu (University of Maryland)
  • Chirag Agarwal (Harvard University)
  • Chulin Xie (University of Illinois at Urbana-Champaign)
  • Hongyang Zhang (TTIC)
  • Huan Zhang (UCLA)
  • Jamie Hayes (Google DeepMind)
  • Jia Liu (Ohio State University)
  • Jiachen Sun (University of Michigan)
  • Josiah Wong (Stanford University)
  • Juba Ziani (University of Pennsylvania)
  • Junheng Hao (UCLA)
  • Kexin Rong (Stanford University)
  • Kun Jin (University of Michigan, Ann Arbor)
  • Maura Pintor (University of Cagliari)
  • Mohammad Mahdi Khalili (University of Delaware)
  • Muhammad Awais (Kyung-Hee University)
  • Nataniel Ruiz (Boston University)
  • Parinaz Naghizadeh (Ohio State University)
  • Rajkumar Theagarajan (University of California, Riverside)
  • Sravanti Addepalli (Indian Institute of Science)
  • Sunipa Dev (University of Utah)
  • Wenxiao Wang (Tsinghua University)
  • Won Park (University of Michigan)
  • Xinchen Yan (Uber ATG)
  • Xingjun Ma (Deakin University)
  • Xinlei Pan (UC Berkeley)
  • Xinwei Zhao (Drexel University)
  • Xueru Zhang (University of Michigan)
  • Yingwei Li (Johns Hopkins University)
  • Yizhou Sun (UCLA)
  • Yulong Cao (University of Michigan, Ann Arbor)
  • Yuzhe Yang (MIT)
  • Zelun Luo (Stanford University)
  • Zhiding Yu (NVIDIA)
Important Dates

Submission deadline: June 10, 2021, Anywhere on Earth (AoE)

Notification sent to authors: July 10, 2021, Anywhere on Earth (AoE)

Call For Papers

Submission server:

The workshop will include contributed papers and will be held completely virtually. We will update the details later.

We invite submissions on any aspect of machine learning that relates to fairness, ethics, transparency, interpretability, security, and privacy.

Submission Format: We welcome submissions of up to 4 pages in ICML Proceedings format (double-blind), excluding references and appendix. Style files and an example paper are available. We allow an unlimited number of pages for references and supplementary material, but reviewers are not required to review the supplementary material. Unless indicated otherwise by the authors, we will provide PDFs of all accepted papers on the workshop website. There will be no archival proceedings. We are using CMT3 to manage submissions.