Explainable AI: Interpreting, Explaining and Visualizing Deep Learning

CHF 121.95
In stock
SKU
QVLM7C1C180
Stock: 1 available
Delivery between Fri, 27.02.2026 and Mon, 02.03.2026

Details

The development of intelligent systems that can make decisions and act autonomously promises faster and more consistent decisions. A limiting factor for broader adoption of AI technology, however, is the inherent risk of ceding human control and oversight to intelligent machines. For sensitive tasks involving critical infrastructures or affecting human well-being or health, it is crucial to limit the possibility of improper, non-robust and unsafe decisions and actions. Before deploying an AI system, there is a strong need to validate its behavior and thus establish guarantees that it will continue to perform as expected when deployed in a real-world environment. In pursuit of that objective, ways for humans to verify the agreement between an AI system's decision structure and their own ground-truth knowledge have been explored. Explainable AI (XAI) has developed as a subfield of AI focused on exposing complex AI models to humans in a systematic and interpretable manner.

The 22 chapters included in this book provide a timely snapshot of recently proposed algorithms, theory, and applications of interpretable and explainable AI techniques, reflecting the current discourse in this field and providing directions for future development. The book is organized in six parts: towards AI transparency; methods for interpreting AI systems; explaining the decisions of AI systems; evaluating interpretability and explanations; applications of explainable AI; and software for explainable AI.


  • Assesses the current state of research on Explainable AI (XAI)
  • Provides a snapshot of interpretable AI techniques
  • Reflects the current discourse and provides directions for future development

Contents
  • Towards Explainable Artificial Intelligence
  • Transparency: Motivations and Challenges
  • Interpretability in Intelligent Systems: A New Concept?
  • Understanding Neural Networks via Feature Visualization: A Survey
  • Interpretable Text-to-Image Synthesis with Hierarchical Semantic Layout Generation
  • Unsupervised Discrete Representation Learning
  • Towards Reverse-Engineering Black-Box Neural Networks
  • Explanations for Attributing Deep Neural Network Predictions
  • Gradient-Based Attribution Methods
  • Layer-Wise Relevance Propagation: An Overview
  • Explaining and Interpreting LSTMs
  • Comparing the Interpretability of Deep Networks via Network Dissection
  • Gradient-Based vs. Propagation-Based Explanations: An Axiomatic Comparison
  • The (Un)reliability of Saliency Methods
  • Visual Scene Understanding for Autonomous Driving Using Semantic Segmentation
  • Understanding Patch-Based Learning of Video Data by Explaining Predictions
  • Quantum-Chemical Insights from Interpretable Atomistic Neural Networks
  • Interpretable Deep Learning in Drug Discovery
  • Neural Hydrology: Interpreting LSTMs in Hydrology
  • Feature Fallacy: Complications with Interpreting Linear Decoding Weights in fMRI
  • Current Advances in Neural Decoding
  • Software and Application Patterns for Explanation Methods
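To give a flavor of the gradient-based attribution methods covered in the chapters above, here is a minimal sketch (not taken from the book) of the simplest such technique: attributing a model's output to its input features via the input gradient, here computed by hand for a tiny ReLU network with arbitrary demo weights.

```python
import numpy as np

# Tiny one-hidden-layer ReLU network; weights are random demo values.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input dim 4 -> hidden dim 3
W2 = rng.normal(size=(3,))     # hidden dim 3 -> scalar output

def forward(x):
    """Return the scalar output f(x) and the hidden pre-activations."""
    z = W1.T @ x               # hidden pre-activations, shape (3,)
    h = np.maximum(z, 0.0)     # ReLU
    return W2 @ h, z

def input_gradient(x):
    """Backpropagate df/dx by hand through the two layers."""
    _, z = forward(x)
    relu_mask = (z > 0).astype(float)   # ReLU derivative
    return W1 @ (W2 * relu_mask)        # chain rule, shape (4,)

x = np.array([1.0, -0.5, 2.0, 0.3])
saliency = input_gradient(x)            # plain gradient attribution
grad_x_input = saliency * x             # "gradient x input" variant

# Sanity check: finite differences agree with the analytic gradient.
eps = 1e-6
e0 = np.zeros(4); e0[0] = eps
fd = (forward(x + e0)[0] - forward(x - e0)[0]) / (2 * eps)
assert abs(fd - saliency[0]) < 1e-3
```

In practice such gradients come from an autodiff framework rather than manual backpropagation; more elaborate methods surveyed in the book (e.g. Layer-Wise Relevance Propagation) redistribute the output through the network with different, conservation-based rules.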

Further information

  • General information
    • GTIN: 09783030289539
    • Editors: Wojciech Samek, Grégoire Montavon, Klaus-Robert Müller, Lars Kai Hansen, Andrea Vedaldi
    • Language: English
    • Edition: 1st edition 2019
    • Size: H 235 mm x W 155 mm x D 25 mm
    • Year: 2019
    • EAN: 9783030289539
    • Format: Softcover
    • ISBN: 3030289532
    • Publication date: 30.08.2019
    • Title: Explainable AI: Interpreting, Explaining and Visualizing Deep Learning
    • Subtitle: Lecture Notes in Computer Science 11700 - Lecture Notes in Artificial Intelligence
    • Weight: 680 g
    • Publisher: Springer International Publishing
    • Number of pages: 452
    • Reading motive: Understanding
    • Genre: Computer science
