
E-book, English, 486 pages

Series: Lecture Notes in Computer Science

Leonardis / Ricci / Roth Computer Vision – ECCV 2024

18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part LXV
Publication year: 2024
ISBN: 978-3-031-73650-6
Publisher: Springer International Publishing
Format: PDF
Copy protection: PDF watermark




The multi-volume set of LNCS volumes 15059 to 15147 constitutes the refereed proceedings of the 18th European Conference on Computer Vision, ECCV 2024, held in Milan, Italy, during September 29–October 4, 2024.

The 2387 papers presented in these proceedings were carefully reviewed and selected from a total of 8585 submissions. They deal with topics such as computer vision; machine learning; deep neural networks; reinforcement learning; object recognition; image classification; image processing; object detection; semantic segmentation; human pose estimation; 3D reconstruction; stereo vision; computational photography; neural networks; image coding; image reconstruction; and motion estimation.


Target audience


Research

Further information & material


MRSP: Learn Multi-Representations of Single Primitive for Compositional Zero-Shot Learning
Cross-Domain Semantic Segmentation on Inconsistent Taxonomy using VLMs
TrafficNight: An Aerial Multimodal Benchmark For Nighttime Vehicle Surveillance
Loc3Diff: Local Diffusion for 3D Human Head Synthesis and Editing
Towards Open Domain Text-Driven Synthesis of Multi-Person Motions
Generative End-to-End Autonomous Driving
Learning to Distinguish Samples for Generalized Category Discovery
COM Kitchens: An Unedited Overhead-view Procedural Videos Dataset as a Vision-Language Benchmark
PILoRA: Prototype Guided Incremental LoRA for Federated Class-Incremental Learning
Diff-Reg: Diffusion Model in Doubly Stochastic Matrix Space for Registration Problem
WBP: Training-time Backdoor Attacks through Hardware-based Weight Bit Poisoning
Towards Dual Transparent Liquid Level Estimation in Biomedical Lab: Dataset, Methods and Practice
Encapsulating Knowledge in One Prompt
Cross-Input Certified Training for Universal Perturbations
Visual Relationship Transformation
Not Just Change the Labels, Learn the Features: Watermarking Deep Neural Networks with Multi-View Data
Delving into Adversarial Robustness on Document Tampering Localization
Adaptive Selection of Sampling-Reconstruction in Fourier Compressed Sensing
Confidence-Based Iterative Generation for Real-World Image Super-Resolution
Learning Scalable Model Soup on a Single GPU: An Efficient Subspace Training Strategy
Correspondences of the Third Kind: Camera Pose Estimation from Object Reflection
Seeing Faces in Things: A Model and Dataset for Pareidolia
Cocktail Universal Adversarial Attack on Deep Neural Networks
Gaussian Frosting: Editable Complex Radiance Fields with Real-Time Rendering
AMD: Automatic Multi-step Distillation of Large-scale Vision Models
FairViT: Fair Vision Transformer via Adaptive Masking
TrojVLM: Backdoor Attack Against Vision Language Models


