Advanced Intelligent Computing Technology and Applications


Details

The 20-volume set LNCS 15842-15861, together with the 4-volume set LNAI 15862-15865 and the 4-volume set LNBI 15866-15869, constitutes the refereed proceedings of the 21st International Conference on Intelligent Computing, ICIC 2025, held in Ningbo, China, during July 26-29, 2025.

The 1206 papers presented in these proceedings volumes were carefully reviewed and selected from 4032 submissions. They deal with emerging and challenging topics in artificial intelligence, machine learning, pattern recognition, bioinformatics, and computational biology.



Contents

.- Natural Language Processing and Computational Linguistics.
.- Can LLM be a Good Path Planner based on Prompt Engineering? Mitigating the Hallucination for Path Planning.
.- ModalLogicBench: Unveiling Modal Logic Reasoning Abilities of Large Language Models.
.- A Source Template-based Data Augmentation Method for Low-Resource Neural Machine Translation.
.- Exploring Behavior-Driven Development for Code Generation.
.- LLM-Based Data Synthesis and Distillation for High-Quality Text-to-SQL Training.
.- External Knowledge-Enhanced Semi-supervised Multi-Label Short Text Classification.
.- Bridging Knowledge Gaps: Fine-Tuned RAG Frameworks for Biomedical Evidence-Based Question Answering.
.- MTAOS: Aspect-Level Opinion Summarization with Opinion Phrase Masking.
.- COMLoRA: A chain-based LoRA architecture combined with MoE.
.- Sentence Trunk Fusion for Neural Machine Translation.
.- ProCFD: Towards Robust Multimodal Sentiment Analysis through Prototype Fusion and Contrastive Feature Decomposition.
.- T3: A Novel Zero-shot Transfer Learning Framework Iteratively Training on an Assistant Task for a Target Task.
.- ALMP: Automatic Layer-by-layer Mixed-Precision Quantization For Large Language Models.
.- Can we employ LLM to meta-evaluate LLM-based evaluators? A Preliminary Study.
.- EmbSpeech: A Unified Framework Towards Low-Resource Zero-Shot Speech Synthesis.
.- SViQA: A Unified Speech-Vision Multimodal Model for Textless Visual Question Answering.
.- Event Causality Extraction via Label-Aware Multi-Prompt Generation Network.
.- Improving Low-Resource Neural Machine Translation with Dependency Distance-based Self-Attention.
.- Automated Coding Utterances toward Chinese Course Core Competence with Large Language Models.
.- Introspective Reward Modeling via Inverse Reinforcement Learning for LLM Alignment.
.- BERTFAN: Multi-Layer Feature Fusion and Data Augmentation for Sentiment Analysis.
.- Instruction Tuning with Data Augmentation for Event Argument Extraction.
.- EQAA-MAC: Enhancing Question Answering Accuracy via Multi-Agent Cooperation in IT Operations.
.- Cross-domain Constituency Parsing with Multi-LLM Debate.
.- Unified Option Generation for Zero- and Few-shot Emotion and Cause Analysis in Dialogues.
.- Open-World Knowledge Augmentation for Zero-Shot Information Extraction in LLMs.
.- Prompting Large Models for Knowledge and Reasoning Augmentation in KB-VQA.
.- IterSelectTune: An Iterative Data Selection Framework for Efficient Instruction Tuning.
.- Utilize unbiased contrastive learning to enhance the key emotional features in low-resource sentiment analysis.
.- Post-training Performance Boosting Method for Code Large Language Models via Model Merging.
.- Automated Construction of High-quality Evaluation Datasets Based on LLMs.
.- Enhancing Code Generation for Large Language Models Using Fine-Grained Distillation.
.- Morphological Recombination-Based Neural Machine Translation with Self Supervised Data Augmentation.
.- From Coarse to Fine: Chinese Spelling Correction Based on LoRA Technology and Multi-Agent Collaboration.
.- Using External knowledge to Enhanced PLM for Semantic Matching.
.- UnCert-CoT: Uncertainty-Aware Chain-of-Thought for Code Generation with Large Language Model.
.- Towards Reliable Large Language Models: A Survey on Hallucination Detection.
.- KPEE: A Two-Stage Proposal-Based Reformulation of Event Extraction.
.- Morphology-Driven Meta-Adapter for Low-Resource Mongolian Sentiment Analysis.
.- Knowledge Graph Completion Combining Dynamic Learnability and Contrastive Learning.
.- FlexKG: A Flexible Framework for Enhanced Reasoning over Knowledge Graph with Large Language Model.

Further Information

  • General Information
    • GTIN 09789819500130
    • Genre Technology Encyclopedias
    • Editors De-Shuang Huang, Bo Li, Haiming Chen, Chuanlei Zhang
    • Reading Motive Understanding
    • Number of Pages 560
    • Publisher Springer
    • Dimensions H 235 mm × W 155 mm × D 30 mm
    • Year 2025
    • EAN 9789819500130
    • Format Paperback
    • ISBN 9819500133
    • Publication Date 25.07.2025
    • Title Advanced Intelligent Computing Technology and Applications
    • Subtitle 21st International Conference, ICIC 2025, Ningbo, China, July 26-29, 2025, Proceedings, Part XXIII
    • Weight 838 g
    • Language English
