Ishikawa / Shi / Liu

Computer Vision – ACCV 2020

15th Asian Conference on Computer Vision, Kyoto, Japan, November 30 – December 4, 2020, Revised Selected Papers, Part IV

Book, English, Volume 12625, 715 pages, paperback, format (W × H): 155 mm × 235 mm, weight: 1095 g

Series: Image Processing, Computer Vision, Pattern Recognition, and Graphics

ISBN: 978-3-030-69537-8
Publisher: Springer International Publishing


The six-volume set LNCS 12622-12627 constitutes the proceedings of the 15th Asian Conference on Computer Vision, ACCV 2020, held in Kyoto, Japan, in November/December 2020.*
A total of 254 contributions were carefully reviewed and selected from 768 submissions during two rounds of reviewing and improvement. The papers cover the following topics:

Part I: 3D computer vision; segmentation and grouping

Part II: low-level vision, image processing; motion and tracking

Part III: recognition and detection; optimization, statistical methods, and learning; robot vision

Part IV: deep learning for computer vision; generative models for computer vision

Part V: face, pose, action, and gesture; video analysis and event recognition; biomedical image analysis

Part VI: applications of computer vision; vision for X; datasets and performance analysis

*The conference was held virtually.

Target audience


Research

Further information & material


Deep Learning for Computer Vision

In-sample Contrastive Learning and Consistent Attention for Weakly Supervised Object Localization
Exploiting Transferable Knowledge for Fairness-aware Image Classification
Introspective Learning by Distilling Knowledge from Online Self-explanation
Hyperparameter-Free Out-of-Distribution Detection Using Cosine Similarity
Meta-Learning with Context-Agnostic Initialisations
Second Order enhanced Multi-glimpse Attention in Visual Question Answering
Localize to Classify and Classify to Localize: Mutual Guidance in Object Detection
Unified Density-Aware Image Dehazing and Object Detection in Real-World Hazy Scenes
Part-aware Attention Network for Person Re-Identification
Image Captioning through Image Transformer
Feature Variance Ratio-Guided Channel Pruning for Deep Convolutional Network Acceleration
Learn more, forget less: Cues from human brain
Knowledge Transfer Graph for Deep Collaborative Learning
Regularizing Meta-Learning via Gradient Dropout
Vax-a-Net: Training-time Defence Against Adversarial Patch Attacks
Towards Optimal Filter Pruning with Balanced Performance and Pruning Speed
Contrastively Smoothed Class Alignment for Unsupervised Domain Adaptation
Double Targeted Universal Adversarial Perturbations
Adversarially Robust Deep Image Super-Resolution using Entropy Regularization
Online Knowledge Distillation via Multi-branch Diversity Enhancement
Rotation Equivariant Orientation Estimation for Omnidirectional Localization
Contextual Semantic Interpretability
Few-Shot Object Detection by Second-order Pooling
Depth-Adapted CNN for RGB-D cameras

Generative Models for Computer Vision

Over-exposure Correction via Exposure and Scene Information Disentanglement
Novel-View Human Action Synthesis
Augmentation Network for Generalised Zero-Shot Learning
Local Facial Makeup Transfer via Disentangled Representation
OpenGAN: Open Set Generative Adversarial Networks
CPTNet: Cascade Pose Transform Network for Single Image Talking Head Animation
TinyGAN: Distilling BigGAN for Conditional Image Generation
A cost-effective method for improving and re-purposing large, pre-trained GANs by fine-tuning their class-embeddings
RF-GAN: A Light and Reconfigurable Network for Unpaired Image-to-Image Translation
GAN-based Noise Model for Denoising Real Images
Emotional Landscape Image Generation Using Generative Adversarial Networks
Feedback Recurrent Autoencoder for Video Compression
MatchGAN: A Self-Supervised Semi-Supervised Conditional Generative Adversarial Network
DeepSEE: Deep Disentangled Semantic Explorative Extreme Super-Resolution
dpVAEs: Fixing Sample Generation for Regularized VAEs
MagGAN: High-Resolution Face Attribute Editing with Mask-Guided Generative Adversarial Network
EvolGAN: Evolutionary Generative Adversarial Networks
Sequential View Synthesis with Transformer

