Del Bue / Tommasi / Canton | Computer Vision - ECCV 2024 Workshops | Book | 978-3-031-92386-9 | sack.de

Book, English, 466 pages, format (W × H): 155 mm × 235 mm, weight: 785 g

Series: Lecture Notes in Computer Science

Del Bue / Tommasi / Canton

Computer Vision - ECCV 2024 Workshops

Milan, Italy, September 29-October 4, 2024, Proceedings, Part II
Year of publication: 2025
ISBN: 978-3-031-92386-9
Publisher: Springer Nature Switzerland



The multi-volume set LNCS 15623 through LNCS 15646 constitutes the proceedings of the workshops held in conjunction with the 18th European Conference on Computer Vision, ECCV 2024, which took place in Milan, Italy, during September 29–October 4, 2024.

These LNCS volumes contain 574 accepted papers from 53 of the 73 workshops. The list of workshops and distribution of the workshop papers in the LNCS volumes can be found in the preface that is freely accessible online.


Target audience

Research

Further information & material


Generating Binary Species Range Maps
WildFusion: Individual Animal Identification with Calibrated Similarity Fusion
Towards Zero-Shot Camera Trap Image Categorization
Larval Hostplant Prediction from Luehdorfia japonica Image using Multi-label ABN
Depth Any Canopy: Leveraging Depth Foundation Models for Canopy Height Estimation
Underwater Uncertainty: A Multi-Annotator Image Dataset for Benthic Habitat Classification
Deep Learning for Automated Shark Detection and Biometrics Without Keypoints
Improving in situ real-time classification of long-tail marine plankton images for ecosystem studies
KAN-Mixer: Kolmogorov-Arnold Networks for Gene Expression Prediction in Plant Species
Multi-Scale and Multimodal Species Distribution Modelling
Semantic Segmentation of Benthic Classes in Reef Environments using a Large Vision Transformer
POLO - Point-based, multi-class animal detection
Mining Field Data for Tree Species Recognition at Scale
MaskSDM: Adaptive species distribution modeling through data masking
Fine-tuning for Bird Sound Classification: An Empirical Study
Multimodal Fusion Strategies for Mapping Biophysical Landscape Features
I-Design: Personalized LLM Interior Designer
NimbleD: Enhancing Self-supervised Monocular Depth Estimation with Pseudo-labels and Large-scale Video Pre-training
GeoTransfer: Generalizable Few-Shot Multi-View Reconstruction via Transfer Learning
DiVR: incorporating context from diverse VR scenes for human trajectory prediction
Skeleton-Aware Motion Retargeting Using Masked Pose Modeling
LucidDreaming: Controllable Object-Centric 3D Generation
BehAVE: Behaviour Alignment of Video Game Encodings
Collaborative Control for Geometry-Conditioned PBR Image Generation
Real-Time Neural Cloth Deformation using a Compact Latent Space and a Latent Vector Predictor
Level Up Your Tutorials: VLMs for Game Tutorials Quality Assessment
Across-Game Engagement Modelling via Few-Shot Learning
Hand2Any: Hand-to-Any Motion Mapping with Few-Shot User Adaptation for Avatar Manipulation
SkelFormer: Markerless 3D Pose and Shape Estimation using Skeletal Transformers
PlaMo: Plan and Move in Rich 3D Physical Environments


