
E-book, English, 507 pages

Series: Lecture Notes in Computer Science

Leonardis / Ricci / Roth Computer Vision – ECCV 2024

18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part LXI
Year of publication: 2024
ISBN: 978-3-031-73030-6
Publisher: Springer International Publishing
Format: PDF
Copy protection: PDF watermark




The multi-volume set of LNCS volumes 15059 to 15147 constitutes the refereed proceedings of the 18th European Conference on Computer Vision, ECCV 2024, held in Milan, Italy, from September 29 to October 4, 2024.

The 2387 papers presented in these proceedings were carefully reviewed and selected from 8585 submissions. They cover topics such as computer vision; machine learning; deep neural networks; reinforcement learning; object recognition; image classification; image processing; object detection; semantic segmentation; human pose estimation; 3D reconstruction; stereo vision; computational photography; neural networks; image coding; image reconstruction; and motion estimation.


Target audience


Research

Further information & material


Semantic Residual Prompts for Continual Learning
TransCAD: A Hierarchical Transformer for CAD Sequence Inference from Point Clouds
ViGoR: Improving Visual Grounding of Large Vision Language Models with Fine-Grained Reward Modeling
Mixture of Efficient Diffusion Experts Through Automatic Interval and Sub-Network Selection
Occupancy as Set of Points
UAV First-Person Viewers Are Radiance Field Learners
Rethinking Few-shot Class-incremental Learning: Learning from Yourself
ProSub: Probabilistic Open-Set Semi-Supervised Learning with Subspace-Based Out-of-Distribution Detection
A Fair Ranking and New Model for Panoptic Scene Graph Generation
Pick-a-back: Selective Device-to-Device Knowledge Transfer in Federated Continual Learning
Compensation Sampling for Improved Convergence in Diffusion Models
Situated Instruction Following
Holodepth: Programmable Depth-Varying Projection via Computer-Generated Holography
SceneScript: Reconstructing Scenes With An Autoregressive Structured Language Model
GalLop: Learning global and local prompts for vision-language models
Depth on Demand: Streaming Dense Depth from a Low Frame Rate Active Sensor
Lossy Image Compression with Foundation Diffusion Models
CLIP-DINOiser: Teaching CLIP a few DINO tricks for open-vocabulary semantic segmentation
FMBoost: Boosting Latent Diffusion with Flow Matching
COMPOSE: Comprehensive Portrait Shadow Editing
LNL+K: Enhancing Learning with Noisy Labels Through Noise Source Knowledge Integration
Diffusion Models as Data Mining Tools
Graph Neural Network Causal Explanation via Neural Causal Models
Unsupervised, Online and On-The-Fly Anomaly Detection For Non-Stationary Image Distributions
Photorealistic Object Insertion with Diffusion-Guided Inverse Rendering
GAReT: Cross-view Video Geolocalization with Adapters and Auto-Regressive Transformers
SAMFusion: Sensor-Adaptive Multimodal Fusion for 3D Object Detection in Adverse Weather


