Liu / Wang / Ma | Pattern Recognition and Computer Vision | E-Book | www.sack.de

E-Book, English, Volume 14429, 531 pages, eBook

Series: Lecture Notes in Computer Science

Liu / Wang / Ma Pattern Recognition and Computer Vision

6th Chinese Conference, PRCV 2023, Xiamen, China, October 13–15, 2023, Proceedings, Part V
1st edition, 2024
ISBN: 978-981-99-8469-5
Publisher: Springer Singapore
Format: PDF
Copy protection: PDF watermark




The 13-volume set LNCS 14425-14437 constitutes the refereed proceedings of the 6th Chinese Conference on Pattern Recognition and Computer Vision, PRCV 2023, held in Xiamen, China, during October 13–15, 2023.
The 532 full papers presented in these volumes were selected from 1420 submissions. The papers have been organized in the following topical sections: Action Recognition, Multi-Modal Information Processing, 3D Vision and Reconstruction, Character Recognition, Fundamental Theory of Computer Vision, Machine Learning, Vision Problems in Robotics, Autonomous Driving, Pattern Classification and Cluster Analysis, Performance Evaluation and Benchmarks, Remote Sensing Image Interpretation, Biometric Recognition, Face Recognition and Pose Recognition, Structural Pattern Recognition, Computational Photography, Sensing and Display Technology, Video Analysis and Understanding, Vision Applications and Systems, Document Analysis and Recognition, Feature Extraction and Feature Selection, Multimedia Analysis and Reasoning, Optimization and Learning Methods, Neural Network and Deep Learning, Low-Level Vision and Image Processing, Object Detection, Tracking and Identification, Medical Image Processing and Analysis.



Target audience

Research

Further information & material


Spoof-guided Image Decomposition for Face Anti-spoofing.- TransFCN: A Novel One-Stage High-Resolution Fingerprint Representation Method.- A Video Face Recognition Leveraging Temporal Information based on Vision Transformer.- Where to Focus: Central Attention-based Face Forgery Detection.- Minimum Assumption Reconstruction Attacks: Rise of Security and Privacy Threats against Face Recognition.- Emotion Recognition via 3D Skeleton based Gait Analysis using Multi-thread Attention Graph Convolutional Networks.- Cross-area Finger Vein Recognition via Hierarchical Sparse Representation.- Non-Local Temporal Modeling for Practical Skeleton-based Gait Recognition.- PalmKeyNet: Palm Template Protection Based on Multi-modal Shared Key.- Full Quaternion Matrix-based Multiscale Principal Component Analysis Network for Facial Expression Recognition.- Joint Relation Modeling and Feature Learning for Class-Incremental Facial Expression Recognition.- RepGCN: A Novel Graph Convolution-Based Model for Gait Recognition with Accompanying Behaviors.- KDFAS: Multi-Stage Knowledge Distillation Vision Transformer for Face Anti-Spoofing.- Long Short-Term Perception Network for Dynamic Facial Expression Recognition.- Pyr-HGCN: Pyramid Hybrid Graph Convolutional Network for Gait Emotion Recognition.- A Dual-Path Approach for Gaze Following in Fisheye Meeting Scenes.- Mobile-LRPose: Low-Resolution Representation Learning for Human Pose Estimation in Mobile Devices.- Attention and Relative Distance Alignment for Low-Resolution Facial Expression Recognition.- Exploring Frequency Attention Learning and Contrastive Learning for Face Forgery Detection.- An Automatic Depression Detection Method with Cross-Modal Fusion Network and Multi-Head Attention Mechanism.- AU-oriented Expression Decomposition Learning for Facial Expression Recognition.- Accurate Facial Landmark Detector via Multi-scale Transformer.- ASM: Adaptive Sample Mining for In-The-Wild Facial Expression Recognition.- Deep Face Recognition with Cosine Boundary Softmax Loss.- A Mix Fusion Spatial-Temporal Network for Facial Expression Recognition.- CTHPose: An Efficient and Effective CNN-Transformer Hybrid Network for Human Pose Estimation.- DuAT: Dual-Aggregation Transformer Network for Medical Image Segmentation.- MS UX-Net: A Multi-scale Depth-wise Convolution Network for Medical Image Segmentation.- AnoCSR – A Convolutional Sparse Reconstructive Noise-Robust Framework for Industrial Anomaly Detection.- Brain Tumor Image Segmentation Based on Global-Local Dual-Branch Feature Fusion.- PRFNet: Progressive Region Focusing Network for Polyp Segmentation.- Pyramid Shape-aware Semi-supervised Learning for Thyroid Nodules Segmentation in Ultrasound Images.- AgileNet: A Rapid and Efficient Breast Lesion Segmentation Method for Medical Image Analysis.- CFNet: A Coarse-to-Fine Framework for Coronary Artery Segmentation.- Edge-Prior Contrastive Transformer for Optic Cup and Optic Disc Segmentation.- BGBF-Net: Boundary-Guided Buffer Feedback Network for Liver Tumor Segmentation.- MixU-Net: Hybrid CNN-MLP Networks for Urinary Collecting System Segmentation.- Self-Supervised Cascade Training for Monocular Endoscopic Dense Depth Recovery.- Pseudo-label Clustering-driven Dual-level Contrast Learning based Source-free Domain Adaptation for Fundus Image Segmentation.- Patch Shuffle and Pixel Contrast: Dual Consistency Learning for Semi-supervised Lung Tumor Segmentation.- Multi-Source Information Fusion for Depression Detection


