Book, English, 555 pages, format (W × H): 155 mm × 235 mm, weight: 879 g

Series: Lecture Notes in Computer Science

Mao / Ren / Yang

Natural Language Processing and Chinese Computing

14th National CCF Conference, NLPCC 2025, Urumqi, China, August 7-9, 2025, Proceedings, Part IV
Publication year: 2025
ISBN: 978-981-953351-0
Publisher: Springer


The four-volume set LNAI 16102–16105 constitutes the refereed proceedings of the 14th CCF National Conference on Natural Language Processing and Chinese Computing, NLPCC 2025, held in Urumqi, China, during August 7–9, 2025.

The 152 full papers and 26 evaluation workshop papers included in these proceedings were carefully reviewed and selected from 505 submissions. They are organized in the following topical sections:

Part I: Information Extraction and Knowledge Graph; Large Language Models and Agents.
Part II: Multimodality and Explainability; NLP Applications / Text Mining.
Part III: IR / Dialogue Systems / Question Answering; Machine Translation and Multilinguality; Sentiment Analysis / Argumentation Mining / Social Media.
Part IV: Machine Learning for NLP; Fundamentals of NLP; Summarization and Generation; Others; Evaluation Workshop.


Target audience


Research

Further information & material


.- Machine Learning for NLP
.- HandDiff-GAN: Handwriting Diffusion-Enhanced Generative Adversarial Networks for Character Generation.
.- Multi-Event Temporal Relation Extraction by Ranking.
.- Very-Long-Distance Dependency Capturing Evaluation via Language Modeling based on Gender Consistency.
.- InfoKANCSE: Enhancing Contrastive Sentence Embeddings with KAN and Information-Aware Aggregation.
.- Zero-Shot Chain-of-Thought Reasoning Guided by Evolutionary Algorithms in Large Language Models.
.- FedETE: Privacy-preserving Federated Event Trigger Extraction.
.- Fundamentals of NLP
.- Parallel Task Planning via Model Collaboration.
.- SageRep: Enhancing Unsupervised Sentence Representations via Layer-Adaptive Self-Knowledge Distillation.
.- Improving Cross-Document Event Coreference Resolution with Word Sense Disambiguation and Large Language Models.
.- Summarization and Generation
.- FactoScalpel: Locating and Injecting Knowledge in Transformer to Enhance Factual Consistency in Abstractive Summarization.
.- Emoji-Stega: An Emoji-Powered Linguistic Steganography Framework for Social Networks.
.- Knowledge-enhanced Text Summaries for Factual Problems.
.- DiSG: A Discourse Structure-aware Multi-Stage Approach for Long Tibetan Text Summarization.
.- Detecting and Correcting Hallucinations in LLMs via Substantive Uncertainty and Iterative Validation.
.- Others
.- Directional asymmetry in the perception of Mandarin Chinese vowels by native English speakers.
.- Thoughts Behind Attack: Enhancing Security Against Jailbreak Attacks Using Chain-of-Thought.
.- Unlocking the Power of Large Language Models for Multi-Table Entity Matching.
.- Mongolian Speech Recognition Based on Semi-Supervised Learning and Syllable Subword Modeling Units.
.- CEBFL: A Cost-Effectiveness-Based Approach to Defending Against Gradient Attack in Federated Learning.
.- IRSC: A Zero-shot Evaluation Benchmark for Information Retrieval based on Semantic Comprehension in Retrieval-Augmented Generation Scenarios.
.- Evaluation Workshop
.- Overview of the NLPCC 2025 Shared Task 1: LLM-Generated Text Detection.
.- When Less is More: Minimal Prompts with LoRA for LLM Text Detection.
.- EnsemJudge: Enhancing Reliability in Chinese LLM-Generated Text Detection through Diverse Model Ensembles.
.- LOW-COST-AI-DETECTOR: An Efficient and Cost-Effective LLM-Generated Chinese Text Detection Model for NLPCC2025 Shared-Task 1.
.- Overview of the NLPCC 2025 Shared Task 2: Evaluation of Essay On-Topic Graded Comments (EOTGC).
.- Optimizing Automated Essay On-Topic Graded Comments via LLM-Based Prompt Augmentation.
.- Decomposing Topic Relevance: A Multi-Agent LLM Approach for Automated Essay Scoring and Feedback.
.- Overview of the NLPCC 2025 Shared Task 3: Comprehensive Argument Analysis for Chinese Argumentative Essay.
.- Knowledge-Enhanced and Event-Rule Guided Framework for Fine-Grained Argument Mining in Chinese Essays.
.- Comprehensive Argument Mining for Chinese Argumentative Essays Using Large Language Models.
.- Overview of the NLPCC 2025 Shared Task 4: Multi-modal, Multilingual, Multi-hop Medical Instructional Video Question Answering Challenge.
.- Multi-hop Knowledge-enhanced Query Reasoning for Multi-modal Medical Video QA.
.- Hierarchical Indexing with Knowledge Enrichment for Multilingual Video Corpus Retrieval.
.- Hierarchical RAG-driven Multi-hop Reasoning for Medical Video Question Answering.
.- Overview of NLPCC 2025 Shared Task 5: Chinese Government Text Correction with Knowledge Bases.
.- CGT-Corrector: Chinese Government Text Correction with Knowledge Bases.
.- KARTC: Knowledge-Aware Rewriting with Large Language Models for Chinese Government Text Correction.
.- Overview of the NLPCC 2025 Shared Task: Gender Bias Mitigation Challenge.
.- From Detection to Mitigation: Addressing Gender Bias in Chinese Texts via Efficient Tuning and Voting-Based Rebalancing.
.- Qwen-Gender: A Chain-of-Thought Based Multi-Task Gender Bias Mitigation System.
.- Detection, Classification, Mitigation of Gender Bias in Large Language Models.
.- Overview of the NLPCC 2025 Shared Task 8: Personalized Emotional Support Conversation.
.- Optimizing LLMs for Personalized Emotional Support with Future Cues and Response Diversity.
.- Empathetic Dialogue Generation with LLMs for Emotional Support.
.- Role-Guided Synthesis for Emotional Support: A Two-Stage Framework with Multi-Agent Dialogue Synthesis.


