ICML 2021 Workshop on

Theoretic Foundation, Criticism, and Application Trend of Explainable AI


Deep neural networks (DNNs) have undoubtedly brought great success to a wide range of applications in computer vision, computational linguistics, and AI. However, foundational principles underlying the DNNs' success and their resilience to adversarial attacks are still largely missing. Interpreting and theorizing about the internal mechanisms of DNNs has become a compelling yet controversial topic.

Unlike previous workshops or tutorials on explainable AI (XAI), this workshop places special emphasis on theoretical foundations, limitations, and new application trends within the scope of XAI. These issues reflect new bottlenecks in the future development of XAI, for example: (1) There is no theoretical definition of XAI and no solid, widely used formulation for even a specific explanation task. (2) There is no rigorous formulation of the essence of the "semantics" encoded in a DNN. (3) How to bridge the gap between connectionism and symbolism in AI research has not been thoroughly explored. (4) How to evaluate the correctness and trustworthiness of an explanation result is still an open problem. (5) How to connect intuitive explanations (e.g., attribution/importance-based explanations) with a DNN's representation capacity (e.g., its generalization power) remains a significant challenge. (6) Using explanations to guide architecture design or substantially boost the performance of a DNN remains a bottleneck.

Therefore, this workshop aims to bring together researchers, engineers, and industrial practitioners who are concerned about the interpretability, safety, and reliability of artificial intelligence. Through a broad discussion of the above bottleneck issues, we hope to explore new critical and constructive views on the future development of XAI. Research outcomes are also expected to profoundly influence critical industrial applications such as medical diagnosis, finance, and autonomous driving.


July 23, 2021 (all times in UTC)

12:00 - 12:02 pm Welcome
12:02 - 12:52 pm Invited talk: Dr. Song-Chun Zhu, "Explainable AI: How Machines Gain Justified Trust from Humans"
12:52 - 01:50 pm Invited talk: Dr. Klaus-Robert Müller, Dr. Wojciech Samek, Dr. Grégoire Montavon, "Toward Explainable AI"
01:50 - 02:40 pm Invited talk: Dr. Finale Doshi-Velez, "Interpretability in High Dimensions: Concept Bottlenecks and Beyond"
02:40 - 05:00 pm Poster session
05:00 - 05:50 pm Invited talk: Dr. Mukund Sundararajan, "Analysis Not Explainability"
05:50 - 06:40 pm Invited talk: Dr. Cynthia Rudin, "Interpretable Machine Learning: Fundamental Principles And 10 Grand Challenges"
06:40 - 07:30 pm Invited talk: Dr. Yan Liu, "Deciphering Neural Networks through the Lenses of Feature Interactions"
07:30 - 09:30 pm Poster session

Accepted papers

Click the link below for information about accepted papers, including the papers themselves, videos, and slides:

Proceedings of the workshop


Topics of interest include, but are not limited to, the following fields

All of the above topics are core issues in the development of explainable AI and have received increasing attention in recent years. We believe the workshop will be of broad interest to the ICML community.

Call for papers

This workshop is a one-day event that will include invited talks, contributed talks, and poster presentations of accepted papers.
We are calling for extended abstracts of 2–4 pages (excluding references). Submissions are required to follow the ICML format. Papers accepted by this workshop may be re-submitted to other conferences or journals. Submissions are not required to be anonymous, but anonymous submissions are also welcome.
Please submit your papers to https://cmt3.research.microsoft.com/ICMLWXAI2021.
** There were some problems with the submission page earlier. We have fixed them so that submissions can now be edited and supplementary material can be uploaded.
** We have extended the submission deadline due to the connection problems with CMT.


Please contact Quanshi Zhang if you have questions.