Robust Explainable AI

CHF 64.70
In stock
SKU
ESBOH853VBG
Stock: 1 available
Delivery between Tue, 25.11.2025 and Wed, 26.11.2025

Details

The area of Explainable Artificial Intelligence (XAI) is concerned with providing methods and tools to improve the interpretability of black-box learning models. While several approaches exist to generate explanations, they often lack robustness: for example, they may produce completely different explanations for similar inputs. This phenomenon has troubling implications, as a lack of robustness indicates that explanations do not capture the underlying decision-making process of a model and thus cannot be trusted.

This book introduces Robust Explainable AI, a rapidly growing field whose focus is to ensure that explanations for machine learning models adhere to the highest robustness standards. It presents the most important concepts, methodologies, and results in the field, with a particular focus on techniques developed for feature attribution methods and counterfactual explanations for deep neural networks.

Some familiarity with neural networks and XAI approaches is desirable but not mandatory. The book is designed to be self-contained: relevant concepts are introduced when needed, together with examples to ensure a successful learning experience.


  • The book is the first to introduce Robust Explainable AI, a rapidly growing field
  • Is designed to be self-contained; familiarity with neural networks or XAI is desirable but not mandatory
  • Presents the most important methods on feature attribution and counterfactual explanations for deep neural networks

About the authors

Francesco Leofante is a researcher affiliated with the Centre for Explainable AI at Imperial College. His research focuses on explainable AI, with special emphasis on counterfactual explanations for AI-based decision-making. His recent work highlighted several vulnerabilities of counterfactual explanations and proposed innovative solutions to improve their robustness.

Matthew Wicker is an Assistant Professor (Lecturer) at Imperial College London and a Research Associate at The Alan Turing Institute. He works on formal verification of trustworthy machine learning properties with collaborators from academia and industry. His work focuses on provable guarantees for diverse notions of trustworthiness for machine learning models in order to enable responsible deployment.


Contents

Foreword.- Preface.- Acknowledgements.- 1. Introduction.- 2. Explainability in Machine Learning: Preliminaries & Overview.- 3. Robustness of Counterfactual Explanations.- 4. Robustness of Saliency-Based Explanations.

Further information

  • General information
    • GTIN 09783031890215
    • Genre Information Technology
    • Reading motive Understanding
    • Number of pages 84
    • Dimensions H235mm x W155mm x D6mm
    • Year 2025
    • EAN 9783031890215
    • Format Paperback
    • ISBN 3031890213
    • Publication date 25.05.2025
    • Title Robust Explainable AI
    • Authors Matthew Wicker, Francesco Leofante
    • Subtitle SpringerBriefs in Intelligent Systems
    • Weight 143g
    • Publisher Springer Nature Switzerland
    • Language English
