Book, English, 424 pages, Format (W × H): 155 mm x 235 mm, Weight: 721 g
Milan, Italy, September 29-October 4, 2024, Proceedings, Part XV
Series: Lecture Notes in Computer Science
ISBN: 978-3-031-91580-2
Publisher: Springer Nature Switzerland
The multi-volume set LNCS 15623 through LNCS 15646 constitutes the proceedings of the workshops held in conjunction with the 18th European Conference on Computer Vision, ECCV 2024, which took place in Milan, Italy, during September 29–October 4, 2024.
These LNCS volumes contain 574 accepted papers from 53 of the 73 workshops. The list of workshops and the distribution of the workshop papers across the LNCS volumes can be found in the preface, which is freely accessible online.
Target audience
Research
Authors/Editors
Subject areas
- Mathematics | Computer Science: Artificial Intelligence, Machine Learning
- Engineering: Electronics | Communications Engineering, Electronics
- Mathematics | Computer Science: Human-Computer Interaction
- Mathematics | Computer Science: Image and Signal Processing
- Mathematics | Computer Science: Artificial Intelligence, Knowledge-Based Systems, Expert Systems
Further information & material
.- MMA-MRNNet: Harnessing Multiple Models of Affect and Dynamic Masked RNN for Precise Facial Expression Intensity Estimation.
.- ToddlerAct: A Toddler Action Recognition Dataset for Gross Motor Development Assessment.
.- 7th ABAW Competition: Multi-Task Learning and Compound Expression Recognition.
.- Are Visual-Language Models Effective in Action Recognition? A Comparative Study.
.- Textualized and Feature-based Models for Compound Multimodal Emotion Recognition in the Wild.
.- Gr-IoU: Ground-Intersection over Union for Robust Multi-Object Tracking with 3D Geometric Constraints.
.- Single Image 3D Human Pose Estimation Using Sequential Joint Group Generation.
.- MVP: Multimodal Emotion Recognition based on Video and Physiological Signals.
.- Facial Expression-Enhanced TTS: Combining Face Representation and Emotion Intensity for Adaptive Speech.
.- Massively Multi-Person 3D Human Motion Forecasting with Scene Context.
.- TalkinNeRF: Animatable Neural Fields for Full-Body Talking Humans.
.- Improving Face Generation Quality and Prompt Following with Synthetic Captions.
.- Tracking Virtual Meetings in the Wild: Re-identification in Multi-Participant Virtual Meetings.
.- rPPG-SysDiaGAN: Systolic-Diastolic Feature Localization in rPPG Using Generative Adversarial Network with Multi-Domain Discriminator.
.- MST-KD: Multiple Specialized Teachers Knowledge Distillation for Fair Face Recognition.
.- Predicting Emotions in Interpersonal Interaction Videos: I Know What You Feel.
.- Multi-Task Affective Behaviour Analysis based on MT-EmotiNet Models.
.- Smoothing Predictions of Multi-Task EmotiNet Models for Compound Facial Expression Recognition.
.- ABAW7 Challenge: A Facial Affect Recognition Approach based on Transformer Encoder and Multilayer Perceptron.
.- Compound Expression Recognition via Curriculum Learning.
.- Boundary Matching and Refinement Network with Cross-modal Contrastive Learning for Temporal Moment Localization.
.- Enhancing Facial Expression Recognition through Dual-Direction Attention Mixed Feature Networks: Application to 7th ABAW Challenge.
.- Introducing Gating and Context into Temporal Action Detection.
.- Better Spanish Emotion Recognition In-the-wild: Bringing Attention to Deep Spectrum Voice Analysis.
.- Monitoring Viewer Attention During Online Ads.
.- Affective Behaviour Analysis via Progressive Learning.
.- Are We Friends? End-to-End Prediction of Child Rapport in Guided Play.
.- Affective Behavior Analysis using Task-adaptive and AU-assisted Graph.
.- Ig3D: Integrating 3D Face Representations in Facial Expression Inference.