Yoon / Kim / Han
MPEG-V: Bridging the Virtual and Real World

E-Book, English, 210 pages
1st edition, 2015
ISBN: 978-0-12-420203-0
Publisher: Elsevier Science & Techn.
Format: EPUB
Copy protection: ePub watermark

This book is the first to cover the recently developed MPEG-V standard, explaining the fundamentals of each part of the technology and exploring potential applications. Written by experts in the field who were instrumental in the development of the standard, it goes beyond the scope of the official standard documentation, describing how to use the technology in a practical context and how to combine it with other information such as audio, video, images, and text. Each chapter follows an easy-to-understand format, first examining how each part of the standard is composed, then covering the intended uses and applications of each particular effect.

With this book, you will learn how to:

- Use the MPEG-V standard to develop applications
- Develop systems for various use cases using MPEG-V
- Synchronize the virtual world and the real world
- Create and render sensory effects for media
- Understand and use MPEG-V for research into new types of media-related technology and services

Key features:

- The first book on the new MPEG-V standard, which enables interoperability between virtual worlds and the real world
- Provides the technical foundations for understanding and using MPEG-V for various virtual-world, mirrored-world, and mixed-world use cases
- An accompanying website features schema files for the standard, with example XML files, source code from the reference software, and example applications

Kyoungro Yoon is a professor in the School of Computer Science and Engineering at Konkuk University, Seoul, Korea. He received his Ph.D. in computer and information science in 1999 from Syracuse University, USA. From 1999 to 2003, he was a Chief Research Engineer and Group Leader at the LG Electronics Institute of Technology, in charge of the development of various product-related technologies and standards in the field of image and audio processing. In 2003, he joined Konkuk University as an assistant professor and has been a professor since 2012. He actively participated in the development of standards such as MPEG-7, MPEG-21, MPEG-V, JPSearch, and TV-Anytime, and served as co-chair of the Ad Hoc Group on User Preferences, chair of the Ad Hoc Group on MPEG Query Format, chair of the Ad Hoc Group on MPEG-V, chair of the Ad Hoc Group on JPSearch, and chair of the Metadata Subgroup of ISO/IEC JTC1 SC29 WG1 (a.k.a. JPEG). He also served as an editor of various international standards, such as ISO/IEC 15938-12, ISO/IEC 23005-2/5/6, and ISO/IEC 24800-2/5. He has co-authored over 40 conference and journal publications in the field of multimedia information systems and is an inventor or co-inventor of more than 30 US patents and 70 Korean patents.

Further Information & Material


Front Cover
MPEG-V
Copyright Page
Contents
Acknowledgment
Author Biographies
Preface
1 Introduction to MPEG-V Standards
  1.1 Introduction to Virtual Worlds
  1.2 Advances in Multiple Sensorial Media
    1.2.1 Basic Studies on Multiple Sensorial Media
    1.2.2 Authoring of MulSeMedia
    1.2.3 Quality of Experience of MulSeMedia
      1.2.3.1 Test Setups
      1.2.3.2 Test Procedures
      1.2.3.3 Experimental QoE Results for Sensorial Effects
  1.3 History of MPEG-V
  1.4 Organizations of MPEG-V
  1.5 Conclusion
  References
2 Adding Sensorial Effects to Media Content
  2.1 Introduction
  2.2 Sensory Effect Description Language
    2.2.1 SEDL Structure
    2.2.2 Base Data Types and Elements of SEDL
    2.2.3 Root Element of SEDL
    2.2.4 Description Metadata
    2.2.5 Declarations
    2.2.6 Group of Effects
    2.2.7 Effect
    2.2.8 Reference Effect
    2.2.9 Parameters
  2.3 Sensory Effect Vocabulary: Data Formats for Creating SEs
  2.4 Creating SEs
  2.5 Conclusion
  References
3 Standard Interfacing Format for Actuators and Sensors
  3.1 Introduction
  3.2 Interaction Information Description Language
    3.2.1 IIDL Structure
    3.2.2 DeviceCommand Element
    3.2.3 SensedInfo Element
    3.2.4 InteractionInfo Element
  3.3 DCV: Data Format for Creating Effects Using Actuators
  3.4 SIV: Data Format for Sensing Information Using Sensors
  3.5 Creating Commands and Accepting Sensor Inputs
  3.6 Conclusion
  References
4 Adapting Sensory Effects and Adapted Control of Devices
  4.1 Introduction
  4.2 Control Information Description Language
    4.2.1 CIDL Structure
    4.2.2 SensoryDeviceCapability Element
    4.2.3 SensorDeviceCapability Element
    4.2.4 USPreference Element
    4.2.5 SAPreference Element
  4.3 Device Capability Description Vocabulary
  4.4 Sensor Capability Description Vocabulary
  4.5 User’s Sensory Effect Preference Vocabulary
  4.6 Sensor Adaptation Preference Vocabulary
  4.7 Conclusion
  References
5 Interoperable Virtual World
  5.1 Introduction
  5.2 Virtual-World Object Metadata
    5.2.1 Introduction
    5.2.2 Sound and Scent Types
    5.2.3 Control Type
    5.2.4 Event Type
    5.2.5 Behavior Model Type
    5.2.6 Identification Type
  5.3 Avatar Metadata
    5.3.1 Introduction
    5.3.2 Appearance Type
    5.3.3 Animation Type
    5.3.4 Communication Skills Type
    5.3.5 Personality Type
    5.3.6 Motion Control Type
    5.3.7 Haptic Property Type
  5.4 Virtual Object Metadata
    5.4.1 Introduction
    5.4.2 Appearance Type
    5.4.3 Animation Type
    5.4.4 Virtual-Object Components
  5.5 Conclusion
  References
6 Common Tools for MPEG-V and MPEG-V Reference SW with Conformance
  6.1 Introduction
  6.2 Common Types and Tools
    6.2.1 Mnemonics for Binary Representations
    6.2.2 Common Header for Binary Representations
    6.2.3 Basic Data and Other Common Types
  6.3 Classification Schemes
  6.4 Binary Representations
  6.5 Reference Software
    6.5.1 Reference Software Based on JAXB
    6.5.2 Reference Software for Binary Representation
  6.6 Conformance Test
  6.7 Conclusion
  References
7 Applications of MPEG-V Standard
  7.1 Introduction
  7.2 Information Adaptation from VW to RW
    7.2.1 System Architecture
    7.2.2 Instantiation A: 4D Broadcasting/Theater
    7.2.3 Instantiation B: Haptic Interaction
  7.3 Information Adaptation From the RW into a VW
    7.3.1 System Architecture
    7.3.2 Instantiation C: Full Motion Control and Navigation of Avatar or Object With Multi-Input Sources
    7.3.3 Instantiation D: Facial Expressions and Body Gestures
    7.3.4 Instantiation E: Seamless Interaction Between RW and VWs
  7.4 Information Exchange Between VWs
    7.4.1 System Architecture
    7.4.2 Instantiation F: Interoperable VW
  References
Terms, Definitions, and Abbreviated Terms
Index


Chapter 2 Adding Sensorial Effects to Media Content
The provision of sensory effects in addition to audiovisual media content has recently gained attention, because stimulating more of the senses increases the immersiveness of the user experience. For the successful industrial deployment of multiple sensorial media (MulSeMedia), it is important to provide an easy and efficient means of producing MulSeMedia content. In other words, standard descriptions of sensorial effects (SEs) are one of the key success factors for the MulSeMedia industry. In this chapter, the standard syntax and semantics from MPEG-V, Part 3 for describing such SEs are introduced, along with valid instances.
Keywords: sensorial effects; sensorial effect rendering; sensory effect metadata

2.1 Introduction
MPEG-V, Part 3: Sensory information (ISO/IEC 23005-3) specifies the Sensory Effect Description Language (SEDL) [1], an XML schema-based language for describing sensorial effects (SEs) such as light, wind, fog, and vibration that trigger human senses. The actual SEs are not part of the SEDL but are defined within the Sensory Effect Vocabulary (SEV), for extensibility and flexibility, so that each application domain can define its own SEs. A description conforming to SEDL is referred to as Sensory Effect Metadata (SEM) and may be associated with any type of multimedia content (e.g., movies, music, Web sites, games). The SEM is used to steer actuators such as fans, vibration chairs, and lamps via an appropriate mediation device in order to enhance the user experience. That is, in addition to the audiovisual (AV) content of, for example, a movie, the user will also perceive effects such as those described above, giving the user the sensation of being part of the particular media content, which results in a worthwhile, informative user experience. The concept of receiving SEs in addition to AV content is depicted in Figure 2.1.
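As a first taste of what such a description looks like, the fragment below sketches a complete SEM document containing one light effect and one wind effect. It is a minimal sketch modeled on published SEDL examples; the namespace URIs, timing attributes, and attribute spellings are assumptions that should be verified against ISO/IEC 23005-3 and the schema files on the book's companion website.

    <!-- Minimal SEM sketch, modeled on published SEDL examples;
         namespace URIs and attribute names are assumptions to be
         verified against ISO/IEC 23005-3 and its schema files. -->
    <sedl:SEM xmlns:sedl="urn:mpeg:mpeg-v:2010:01-SEDL-NS"
              xmlns:sev="urn:mpeg:mpeg-v:2010:01-SEV-NS"
              xmlns:si="urn:mpeg:mpeg-21:2003:01-DIA-XSI-NS"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              si:absTimeScheme="mp7t" si:timeScale="1000">
      <!-- A light effect, active from media time 0 at 60% of its range -->
      <sedl:Effect xsi:type="sev:LightType" activate="true"
                   si:pts="0" intensity-value="0.6" intensity-range="0 1"/>
      <!-- A wind effect starting at 5 s (5000 ticks at timeScale 1000) -->
      <sedl:Effect xsi:type="sev:WindType" activate="true"
                   si:pts="5000" intensity-value="0.3" intensity-range="0 1"/>
    </sedl:SEM>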
Figure 2.1 Concept of MPEG-V SEDL [1].

The media and corresponding SEM may be obtained from a Digital Versatile Disc (DVD), Blu-ray Disc (BD), or any type of online service (i.e., download/play or streaming). The media processing engine, also referred to as the adaptation engine, acts as the mediation device and is responsible for playing the actual media resource and its accompanying SEs in a synchronized way, based on the user's setup in terms of both the media content and the rendering of the SEs. The media processing engine may therefore adapt both the media resource and the SEM according to the capabilities of the various rendering devices. The SEV defines a clear set of actual SEs to be used with the SEDL in an extensible and flexible way: it can easily be extended with new effects, or through derivation of existing effects, thanks to the extensibility features of the XML schema. Furthermore, the effects are defined according to the intentions of the authors (i.e., the creators of the SEM), independently of the end user's device setting, as shown in Figure 2.2.
Figure 2.2 Mapping of the author's intentions to SE data and actuator capabilities (ACs) [2].

The SEM elements or data types are mapped to commands that control actuators based on their capabilities. This mapping is usually provided by the virtual-to-real adaptation engine and was deliberately left undefined in the standard, i.e., it is left open to industry competition. It is important to note that there is not necessarily a one-to-one mapping between elements or data types of the SE data and the ACs. For example, a hot/cold wind effect may be rendered by a single device with two capabilities, i.e., a heater or air conditioner combined with a fan or ventilator. As shown in Figure 2.3, the SEs can be adjusted into adapted SEs (defined in MPEG-V, Part 5, as device commands) in accordance with the capabilities of the actuators (ACs, defined in MPEG-V, Part 2) and the actuation preferences (APs, defined in MPEG-V, Part 2, as user sensory preferences).
Figure 2.3 The adapted SEs (actuator commands defined in MPEG-V, Part 5) generated by combining SEs with ACs and the user's APs.

Figure 2.4 shows an example of combining SEs (from MPEG-V, Part 3) with sensed information (SI, from MPEG-V, Part 5) to generate adapted actuator commands (ACmd, from MPEG-V, Part 5). For example, the SE corresponding to a scene might call for cooling the temperature to 5°C and adding a wind effect at 100% intensity. Assume that the current room temperature is 12°C. It would be unwise to deploy the cooling and wind effects exactly as described in the SE data: the temperature inside the room is already low, and users may feel uncomfortable with the generated SEs. Therefore, a sensor measures the room temperature, and the adaptation engine generates adapted SEs (i.e., ACmds), for instance a reduced wind effect (20% intensity) and a heating effect (20°C).
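To make this data flow concrete, the following fragment sketches what the sensed information and the resulting adapted commands for this scenario could look like. It is a hypothetical sketch: the element and type names are merely modeled on the IIDL, DCV, and SIV vocabularies of MPEG-V, Part 5 (introduced in Chapter 3), and must be checked against ISO/IEC 23005-5 before use.

    <!-- Hypothetical sketch only: element/type names modeled on the
         MPEG-V Part 5 vocabularies (IIDL/DCV/SIV); verify every name
         and attribute against ISO/IEC 23005-5. -->
    <iidl:InteractionInfo
        xmlns:iidl="urn:mpeg:mpeg-v:2010:01-IIDL-NS"
        xmlns:dcv="urn:mpeg:mpeg-v:2010:01-DCV-NS"
        xmlns:siv="urn:mpeg:mpeg-v:2010:01-SIV-NS"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <!-- Sensor input: the room is already at 12 degrees Celsius -->
      <iidl:SensedInfo xsi:type="siv:TemperatureSensorType" value="12"/>
      <!-- Adapted commands: wind reduced to 20% intensity, heating toward 20 C -->
      <iidl:DeviceCommand xsi:type="dcv:WindType" intensity="20"/>
      <iidl:DeviceCommand xsi:type="dcv:HeatingType" intensity="20"/>
    </iidl:InteractionInfo>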
Figure 2.4 The adapted SEs (actuator commands defined in MPEG-V, Part 5) generated by combining SEs with SI.

This chapter is organized as follows. Section 2.2 describes the details of the SEDL. Section 2.3 presents the SEV, which specifies the data formats used for creating SEs. Section 2.4 presents XML instances using SEDL and SEV. Finally, Section 2.5 concludes the chapter.

2.2 Sensory Effect Description Language
2.2.1 SEDL Structure
SEDL is an XML-based language that provides the basic building blocks for instantiating the sensory effect metadata defined by the MPEG-V standard; descriptions written in it can be authored by content providers.

2.2.2 Base Data Types and Elements of SEDL
There are two base types in the SEDL. The first is SEMBaseAttributes, which comprises six base attributes and one base attribute group; its schema definition is shown in Table 2.1.

Table 2.1 Schema definition of SEMBaseAttributes

The activate attribute describes whether the SE shall be activated. The duration attribute describes how long the SE shall be rendered. The fade attribute describes the fade time within which the defined intensity shall be reached. The alt attribute describes an alternative effect, identified by a uniform resource identifier (URI); an alternative effect may be chosen, for example, because the originally intended effect cannot be rendered owing to a lack of devices supporting it. The priority attribute describes the priority of an effect with respect to the other effects in the same group of effects sharing the same point in time at which they should become available for consumption. A value of 1 indicates the highest priority, and larger values indicate lower priorities. The location attribute describes the location from which the effect is expected to be received from the user's perspective, along the X, Y, and Z axes, as depicted in Figure 2.5. A classification scheme that may be used for this purpose is the LocationCS defined in Annex A of ISO/IEC 23005-6. For example, urn:mpeg:mpeg-v:01-SI-LocationCS-NS:left:*:midway defines the location as left on the X-axis, any location on the Y-axis, and midway on the Z-axis; that is, it describes all effects on the left-midway side of the user.

The SEMAdaptabilityAttributes group contains two attributes related to the adaptability of the SEs. The adaptType attribute describes the preferred type of adaptation using the following possible instantiations: strict, i.e., an adaptation by approximation may not be performed; under, i.e., an adaptation by approximation may be performed with a smaller effect value than the specified one; over, i.e., an adaptation by approximation may be performed with a greater effect value than the specified one; and both, i.e., an adaptation by approximation may be performed between the upper and lower bounds specified by adaptRange. The adaptRange attribute describes those upper and lower bounds as percentages of the specified effect value.
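To make the base attributes concrete, the following fragment sketches a single wind effect carrying the SEMBaseAttributes and the adaptability attributes discussed above. Again, this is a sketch modeled on published SEDL examples, with namespace declarations omitted (as in the earlier sketch); the attribute spellings and units are assumptions to be verified against ISO/IEC 23005-3, and the alt URN is a made-up example identifier.

    <!-- Sketch only; attribute names and units should be verified
         against ISO/IEC 23005-3. duration/fade are assumed here to be
         expressed in timeScale units; the alt URN is hypothetical. -->
    <sedl:Effect xsi:type="sev:WindType"
                 activate="true"
                 duration="10"
                 fade="2"
                 priority="1"
                 alt="urn:example:gentle-breeze-alternative"
                 location="urn:mpeg:mpeg-v:01-SI-LocationCS-NS:left:*:midway"
                 adaptType="both"
                 adaptRange="20"
                 intensity-value="0.5"
                 intensity-range="0 1"/>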
Figure 2.5 Location model for SEs and the reference coordinate system.

There are five base elements (Table 2.2), i.e., Declaration, GroupOfEffects, Effect, ReferenceEffect, and Parameter, all extended from the abstract SEMBaseType; they are explained in detail in the following sections...
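For instance, a GroupOfEffects can bundle two or more effects that share one point in time and common base attributes, roughly as sketched below (again modeled on published SEDL examples; names and attributes to be verified against ISO/IEC 23005-3):

    <!-- Sketch of a group of effects sharing one presentation time;
         modeled on published SEDL examples, not verbatim from the standard. -->
    <sedl:GroupOfEffects si:pts="12000" duration="4" fade="1"
        location="urn:mpeg:mpeg-v:01-SI-LocationCS-NS:center:*:front">
      <sedl:Effect xsi:type="sev:WindType"
                   intensity-value="0.4" intensity-range="0 1"/>
      <sedl:Effect xsi:type="sev:VibrationType"
                   intensity-value="0.7" intensity-range="0 1"/>
    </sedl:GroupOfEffects>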


