Guidotti / Schmid / Longo | Explainable Artificial Intelligence | Book | 978-3-032-08326-5 | www.sack.de

Book, English, 448 pages, format (W × H): 155 mm × 235 mm, weight: 703 g

Series: Communications in Computer and Information Science

Guidotti / Schmid / Longo

Explainable Artificial Intelligence

Third World Conference, xAI 2025, Istanbul, Turkey, July 9-11, 2025, Proceedings, Part III
Publication year: 2025
ISBN: 978-3-032-08326-5
Publisher: Springer



This open access five-volume set constitutes the refereed proceedings of the Third World Conference on Explainable Artificial Intelligence, xAI 2025, held in Istanbul, Turkey, during July 9-11, 2025.


The 96 revised full papers presented in these proceedings were carefully reviewed and selected from 224 submissions. The papers are organized in the following topical sections:

Volume I:

Concept-based explainable AI; human-centered explainability; explainability, privacy, and fairness in trustworthy AI; and XAI in healthcare.

Volume II:

Rule-based XAI systems & actionable explainable AI; feature importance-based XAI; novel post-hoc & ante-hoc XAI approaches; and XAI for scientific discovery.

Volume III:

Generative AI meets explainable AI; intrinsically interpretable explainable AI; benchmarking and XAI evaluation measures; and XAI for representational alignment.

Volume IV:

XAI in computer vision; counterfactuals in XAI; explainable sequential decision making; and explainable AI in finance & legal frameworks for XAI technologies.

Volume V:

Applications of XAI; human-centered XAI & argumentation; explainable and interactive hybrid decision making; and uncertainty in explainable AI.


Target audience

Research

Further information & material


Generative AI meets Explainable AI:
- Reasoning-Grounded Natural Language Explanations for Language Models
- What's Wrong with Your Synthetic Tabular Data? Using Explainable AI to Evaluate Generative Models
- Explainable Optimization: Leveraging Large Language Models for User-Friendly Explanations
- Large Language Models as Attribution Regularizers for Efficient Model Training
- GraphXAIN: Narratives to Explain Graph Neural Networks

Intrinsically Interpretable Explainable AI:
- MSL: Multiclass Scoring Lists for Interpretable Incremental Decision Making
- Interpretable World Model Imaginations as Deep Reinforcement Learning Explanation
- Unsupervised and Interpretable Detection of User Personalities in Online Social Networks
- An Interpretable Data-Driven Approach for Modeling Toxic Users Via Feature Extraction
- Assessing and Quantifying Perceived Trust in Interpretable Clinical Decision Support

Benchmarking and XAI Evaluation Measures:
- When can you Trust your Explanations? A Robustness Analysis on Feature Importances
- XAIEV – a Framework for the Evaluation of XAI-Algorithms for Image Classification
- From Input to Insight: Probing the Reasoning of Attention-based MIL Models
- Uncovering the Structure of Explanation Quality with Spectral Analysis
- Consolidating Explanation Stability Metrics

XAI for Representational Alignment:
- Reduction of Ocular Artefacts in EEG Signals Based on Interpretation of Variational Autoencoder Latent Space
- Syntax-Guided Metric-Based Class Activation Mapping
- Which Direction to Choose? An Analysis on the Representation Power of Self-Supervised ViTs in Downstream Tasks
- XpertAI: Uncovering Regression Model Strategies for Sub-manifolds
- An XAI-based Analysis of Shortcut Learning in Neural Networks


