Zhou / Liu / Gao | Large Vision-Language Models | Buch | 978-3-031-94968-5 | www.sack.de

Book, English, 429 pages, format (W × H): 160 mm × 241 mm, weight: 832 g

Series: Advances in Computer Vision and Pattern Recognition

Zhou / Liu / Gao

Large Vision-Language Models

Pre-training, Prompting, and Applications
Year of publication: 2025
ISBN: 978-3-031-94968-5
Publisher: Springer



The rapid progress in the field of large multimodal foundation models, especially vision-language models, has dramatically transformed the landscape of machine learning, computer vision, and natural language processing. These powerful models, trained on vast amounts of multimodal data comprising images and text, have demonstrated remarkable capabilities in tasks ranging from image classification and object detection to visual content generation and question answering. This book provides a comprehensive and up-to-date exploration of large vision-language models, covering the key aspects of their pre-training, prompting techniques, and diverse real-world computer vision applications. It is an essential resource for researchers, practitioners, and students in the fields of computer vision, natural language processing, and artificial intelligence.

The book begins by exploring the fundamentals of large vision-language models, covering architectural designs, training techniques, and dataset construction methods. It then examines prompting strategies and other adaptation methods, demonstrating how these models can be effectively fine-tuned to address a wide range of downstream tasks. The final section focuses on the application of vision-language models across various domains, including open-vocabulary object detection, 3D point cloud processing, and text-driven visual content generation and manipulation.
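The zero-shot prompting recipe described above can be sketched in a few lines: class names are wrapped in natural-language prompts, embedded by a text encoder, and compared against the image embedding by cosine similarity. The toy vectors and function name below are illustrative stand-ins, not taken from the book or any specific model; a real system would obtain the embeddings from a pre-trained image/text encoder.

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, prompts):
    """Pick the prompt whose embedding best matches the image embedding."""
    # L2-normalize, as CLIP-style contrastive models do
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    # Cosine similarity reduces to a dot product of unit vectors
    sims = txt @ img
    # Softmax over candidate prompts (temperature scaling omitted)
    probs = np.exp(sims) / np.exp(sims).sum()
    return prompts[int(np.argmax(probs))], probs

# Toy example: the image embedding is closest to the "cat" prompt
image_emb = np.array([1.0, 0.0, 0.0])
text_embs = np.array([[0.0, 1.0, 0.0],   # "a photo of a dog"
                      [0.9, 0.1, 0.0]])  # "a photo of a cat"
label, probs = zero_shot_classify(
    image_emb, text_embs, ["a photo of a dog", "a photo of a cat"])
```

Because the classifier is defined entirely by the text prompts, new categories can be added at inference time without retraining, which is the property the open-vocabulary applications in the final part build on.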

Beyond the technical foundations, the book explores the wide-ranging applications of vision-language models (VLMs), from enhancing image recognition systems to enabling sophisticated visual content generation and facilitating more natural human-machine interactions. It also addresses key challenges in the field, such as feature alignment, scalability, data requirements, and evaluation metrics. By providing a comprehensive roadmap for both newcomers and experts, this book serves as a valuable resource for understanding the current landscape, limitations, and future directions of VLMs, ultimately contributing to the advancement of artificial intelligence.


Target audience


Upper undergraduate

Further information & material


Part 1: Pre-training and Datasets
Chapter 1: LAION-5B: A Massive Open Image-Text Dataset
Chapter 2: Efficient Training of Large-Scale Vision-Language Models
Chapter 3: Scaling Laws for Contrastive Language-Image Learning
Chapter 4: Scaling Up Vision-Language Models for Generic Tasks
Chapter 5: Searching for Next-Gen Multimodal Datasets

Part 2: Prompting and Generalization
Chapter 6: Soft Prompt Learning for Vision-Language Models
Chapter 7: Unified Prompting for Vision and Language
Chapter 8: Zero-Shot Image Classification with Custom Prompts
Chapter 9: Enhancing Vision-Language Models with Feature Adapters
Chapter 10: Automatic Optimization of Prompting Architectures
Chapter 11: Open-Vocabulary Calibration for VL Models

Part 3: Applications
Chapter 12: Open-Vocabulary DETR with Conditional Matching
Chapter 13: Extracting Dense Labels from CLIP
Chapter 14: PointCLIP: Understanding Point Clouds with VL
Chapter 15: Diffusion-Based Relation Inversion from Images
Chapter 16: Text-to-Video Generation
Chapter 17: Text-Driven Human Motion Generation
Chapter 18: Zero-Shot Text-Driven 3D Avatar Generation
Chapter 19: Zero-Shot Text-Driven HDR Panorama Generation


Kaiyang Zhou is an Assistant Professor at the Department of Computer Science, Hong Kong Baptist University, working on computer vision and machine learning. He has published more than 30 technical papers in top-tier journals and conferences in relevant fields, including CVPR, ICCV, ECCV, NeurIPS, ICLR, ICML, AAAI, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), and the International Journal of Computer Vision (IJCV), with over 10,000 citations received in total. He is an Associate Editor of IJCV, the flagship journal in computer vision, and regularly serves as an area chair and senior program committee member for top-tier computer vision and machine learning conferences, such as NeurIPS, CVPR, ECCV, and AAAI.

Ziwei Liu is an Associate Professor at Nanyang Technological University, Singapore. His research interests include computer vision, machine learning, and computer graphics. He has published extensively in top-tier conferences and journals in relevant fields, including CVPR, ICCV, ECCV, NeurIPS, ICLR, ICML, IEEE Transactions on Pattern Analysis and Machine Intelligence, ACM Transactions on Graphics, and Nature Machine Intelligence. He is the recipient of the ICCV Young Researcher Award, the HKSTP Best Paper Award, a CVPR Best Paper Award candidacy, the ICBS Frontiers of Science Award, and MIT Technology Review's Innovators Under 35 Asia Pacific. He serves as an area chair for CVPR, ICCV, ECCV, NeurIPS, and ICLR, as well as an associate editor of the International Journal of Computer Vision.

Peng Gao is a research scientist at Shanghai Artificial Intelligence Laboratory, working on large language models and vision-language models. His research interests include vision-language models, large language models, and diffusion models for content creation. He has published more than 40 papers in top-tier journals and conferences, including the International Journal of Computer Vision (IJCV), ICML, ICLR, NeurIPS, CVPR, ICCV, and ECCV, receiving more than 10,000 citations. He has led several influential open-source projects, including LLaMA-Adapter and the Lumina series, which have received more than 7,000 and 2,000 stars, respectively.


