Speakers

Arijit Khan

Bowling Green State University

Xiangyu Ke

Zhejiang University

Yinghui Wu

Case Western Reserve University

Francesco Bonchi

CENTAI Institute

Tutorial Info: 🗓️ Sunday, Feb 22, 2026 · 9:00 AM-12:30 PM · 📍 Room 110CD


Abstract

Graph neural networks (GNNs) are deep learning models designed for graph-structured data. They have achieved strong results across domains–social networks, knowledge graphs, bioinformatics, transportation, the World Wide Web, and finance–on tasks such as node and graph classification, link prediction, entity resolution, question answering, recommendation, and fraud detection. Explaining the decisions of high-performing yet “black-box” GNNs remains both challenging and essential. The first five years of research produced tremendous progress, yielding many GNN explainers (e.g., GNNExplainer, PGExplainer, SubgraphX, PGMExplainer, GraphLime, GCF-Explainer, CF2, GNN-LRP) that identify the influential nodes, edges, subgraphs, and features to explain a GNN's output.

We refer to those works as GNN Explainers 1.0: they provide one-time explanations of a model's final output and focus on narrow tasks such as node or graph classification, which limits their usefulness for broader, user-centered needs. Practical debugging and accountability require robust, multi-faceted explanations with layer-wise provenance of the GNN, so that data scientists can trace how inputs are transformed through the layers and locate where errors occur. Non-technical stakeholders need explanations that are accessible, configurable, and queryable through familiar interfaces–structured queries, ad-hoc instructions, counterfactual evidence, or natural language–so that both experts and non-experts can interactively explore model behavior.

This tutorial surveys the latest advances in user-centered GNN explanations, which shift the focus from merely explaining model outputs to producing actionable, end-user-facing explanations. We show how data mining principles can improve comprehension, usability, and trust, and we outline practical strategies for creating configurable, interpretable explanations tailored to diverse stakeholders. We refer to this paradigm as GNN Explainers 2.0. We present key works under this paradigm, summarize open challenges, and highlight opportunities for the web and data mining community.


Outline & Schedule

Session 1 (9:00 AM - 10:30 AM)

9:00 - 9:30 AM
Arijit Khan
Slides 1-27
Sec 1: Introduction
  • 1.1 Graph Neural Networks (GNNs) and Applications
  • 1.2 Explainability of GNNs: Definitions, Importance, and Challenges
Sec 2: GNN Explainers Categorization
  • 2.1 Post-hoc vs. Intrinsic / Self-explainable
  • 2.2 Global vs. Local
  • 2.3 Class-specific vs. Instance-specific
  • 2.4 Model-specific vs. Model-agnostic
  • 2.5 Forward vs. Backward
  • 2.6 Node-level vs. Edge-level vs. Subgraph-level
  • 2.7 Perturbation vs. Gradient vs. Decomposition vs. Surrogate Models
  • 2.8 Factuals vs. Counterfactuals
Sec 3: GNN Explainers 2.0
9:30 - 10:00 AM
Xiangyu Ke
Slides 28-49
Sec 4: User-centric and Data-driven Explainability Methods for GNNs
  • 4.1 Pattern Mining and Concept Hierarchies
  • 4.2 Model-slicing Explanations
  • 4.6 Efficiency and Interactiveness
10:00 - 10:20 AM
Yinghui Wu
Slides 50-62
  • 4.3 Robust Explanations
  • 4.4 Multi-criteria Explanations (Part I)
10:20 - 10:30 AM
Yinghui Wu + All
Q&A (Session 1)
Break (10:30 AM - 11:00 AM)

Session 2 (11:00 AM - 12:30 PM)

11:00 - 11:30 AM
Yinghui Wu
Slides 63-77
  • 4.4 Multi-criteria Explanations (Part II)
11:30 - 12:00 PM
Xiangyu Ke
Slides 78-93
  • 4.5 Declarative Explanatory Queries
  • 4.7 Natural Language Explanations
12:00 - 12:20 PM
Arijit Khan
Slides 94-133
  • 4.8 Counterfactual Explanations
Sec 5: Future Directions
  • 5.1 Downstream Tasks beyond Classification
  • 5.2 Qualitative Evaluation of GNN Explanations
  • 5.3 Explainability for Complex GNNs
  • 5.4 Explanation with Privacy Concerns
  • 5.5 Explanation as Actionable Recourse
  • 5.6 Multi-modal Explanation
12:20 - 12:30 PM
Arijit Khan + All
Q&A (Session 2)