AI for Research and Scalable, Efficient Systems

CHF 96.35
In Stock
SKU: N87ECSQM0KM
Stock: 1 available
Delivery between Wed, 28.01.2026 and Thu, 29.01.2026

Details

This book constitutes the proceedings of the Second International Workshop, AI4Research 2025, and the First International Workshop, SEAS 2025, which were held in conjunction with AAAI 2025 in Philadelphia, PA, USA, during February 25–March 4, 2025.

AI4Research 2025 accepted 8 full papers from 35 submissions. The papers cover diverse areas such as agent debate evaluation, taxonomy expansion, hypothesis generation, AI4Research benchmarks, caption generation, drug discovery, and financial auditing.

SEAS 2025 accepted 7 full papers from 17 submissions. These papers explore the efficiency and scalability of AI models.


Contents

AI4Research 2025

  • ResearchCodeAgent: An LLM Multi-Agent System for Automated Codification of Research Methodologies
  • LLMs Tackle Meta-Analysis: Automating Scientific Hypothesis Generation with Statistical Rigor
  • AuditBench: A Benchmark for Large Language Models in Financial Statement Auditing
  • Clustering Time Series Data with Gaussian Mixture Embeddings in a Graph Autoencoder Framework
  • Empowering AI as Autonomous Researchers: Evaluating LLMs in Generating Novel Research Ideas through Automated Metrics
  • Multi-LLM Collaborative Caption Generation in Scientific Documents
  • CypEGAT: A Deep Learning Framework Integrating Protein Language Model and Graph Attention Networks for Enhanced CYP450s Substrate Prediction
  • Understanding How Paper Writers Use AI-Generated Captions in Figure Caption Writing

SEAS 2025

  • ssProp: Energy-Efficient Training for Convolutional Neural Networks with Scheduled Sparse Back Propagation
  • Knowledge Distillation with Training Wheels
  • PickLLM: Context-Aware RL-Assisted Large Language Model Routing
  • ZNorm: Z-Score Gradient Normalization Accelerating Skip-Connected Network Training without Architectural Modification
  • The Impact of Multilingual Model Scaling on Seen and Unseen Language Performance
  • Information Consistent Pruning: How to Efficiently Search for Sparse Networks?
  • Efficient Image Similarity Search with Quadtrees

Further Information

  • General Information
    • GTIN: 09789819689118
    • Genre: Information Technology
    • Editors: Qingyun Wang, Wenpeng Yin, Abhishek Aich, Yumin Suh, Kuan-Chuan Peng
    • Reading motive: Understanding
    • Number of pages: 316
    • Dimensions: H 235 mm × W 155 mm × D 18 mm
    • Year: 2025
    • EAN: 9789819689118
    • Format: Paperback
    • ISBN: 9819689112
    • Publication date: 01.07.2025
    • Title: AI for Research and Scalable, Efficient Systems
    • Subtitle: Second International Workshop, AI4Research 2025, and First International Workshop, SEAS 2025, Held in Conjunction with AAAI 2025, Philadelphia, PA, USA, February 25–March 4, 2025, Proceedings
    • Weight: 482 g
    • Publisher: Springer
    • Language: English
