Knowledge Distillation in Computer Vision

CHF 71.75 · In stock

Details

Discover the cutting-edge advancements in knowledge distillation for computer vision in this comprehensive monograph. As neural networks grow increasingly complex, the demand for efficient, lightweight models becomes critical, especially in real-world applications. This book bridges the gap between academic research and industrial implementation, exploring innovative methods to compress and accelerate deep neural networks without sacrificing accuracy. It addresses two fundamental problems in knowledge distillation: constructing effective student and teacher models, and selecting the appropriate knowledge to distill. Presenting research on self-distillation and task-irrelevant knowledge distillation, the book offers new perspectives on model optimization. Readers will gain insights into applying these techniques across a wide range of visual tasks, from 2D and 3D object detection to image generation. By engaging with this text, readers will learn to enhance model performance, reduce computational costs, and improve model robustness. The book is ideal for researchers, practitioners, and advanced students with a background in computer vision and deep learning, equipping them to design and implement knowledge distillation and thereby improve the efficiency of computer vision models.
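The distillation methods the book surveys build on the classic teacher–student objective. As a rough orientation only (this sketch is not taken from the book), the standard soft-target loss of Hinton et al. (2015) — a temperature-softened KL term between teacher and student outputs, mixed with the usual hard-label cross-entropy — might look like:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_label,
                      temperature=4.0, alpha=0.7):
    """Classic knowledge-distillation loss: a weighted sum of
    (a) KL divergence between softened teacher and student predictions and
    (b) cross-entropy of the student against the hard label.
    The temperature and alpha values here are illustrative defaults."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # Soft-target term; the T^2 factor keeps gradient magnitudes comparable
    # across temperatures, as in the original formulation.
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_teacher, p_student))
    soft_loss = (temperature ** 2) * kl
    # Hard-label cross-entropy at temperature 1.
    hard_loss = -math.log(softmax(student_logits)[true_label])
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

When the student's logits match the teacher's exactly, the KL term vanishes and only the hard-label term remains; during training, minimizing this loss pulls the compact student toward the teacher's softened output distribution.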


  • Offers cutting-edge insights into knowledge distillation
  • Provides practical applications of knowledge distillation across computer vision tasks
  • Crucial for software developers, detailing methods to compress and optimize AI models

About the Author

Dr. Zhang Linfeng is an assistant professor in the School of Artificial Intelligence, Shanghai Jiao Tong University. He graduated from the Institute for Interdisciplinary Information Sciences at Tsinghua University with a doctoral degree in Computer Science and Technology, specializing in computer vision model compression and acceleration. His doctoral dissertation, "Structured Knowledge Distillation: Towards Efficient Visual Intelligence," was recognized as an outstanding doctoral dissertation by Tsinghua University. He has served as a reviewer for more than a dozen top academic conferences and journals, including IEEE TPAMI, NeurIPS, ICLR, and CVPR, for several consecutive years. He has published more than 20 high-level academic papers as first or corresponding author. According to Google Scholar, his papers have been cited 2,300 times, with his most-cited single first-authored paper exceeding 1,000 citations. At ICCV 2019, he first proposed the Self-Distillation algorithm, one of the representative works in the field of knowledge distillation. He has successfully applied knowledge distillation algorithms to various visual tasks such as object detection, instance segmentation, and image generation, as well as to different types of visual data, including images, multi-view images, point clouds, and videos, to compress and accelerate visual models. His research has also been utilized in the Qiming series chips developed by Polar Bear Technology, as well as by Huawei, DiD Global, and Kwai, providing compression and acceleration for artificial intelligence models in real industrial scenarios.


Contents

  • Chapter 1: Introduction
  • Chapter 2: Student and Teacher Models in KD
  • Chapter 3: Distilled Knowledge in KD
  • Chapter 4: Application of KD in High-Level Vision Tasks
  • Chapter 5: Application of KD in Low-Level Vision Tasks
  • Chapter 6: Application of KD beyond Model Compression
  • Chapter 7: Conclusion

Further Information

  • General information
    • GTIN 09789819503667
    • Genre: Information Technology
    • Reading motive: Understanding
    • Number of pages: 120
    • Dimensions: H 235 mm × W 155 mm
    • Year: 2026
    • EAN 9789819503667
    • Format: Paperback
    • ISBN 978-981-9503-66-7
    • Title: Knowledge Distillation in Computer Vision
    • Author: Linfeng Zhang
    • Subtitle: SpringerBriefs in Computer Science
    • Publisher: Springer, Berlin
    • Language: English
