Video Content Analysis Using Multimodal Information : For Movie Content Extraction, Indexing and Representation



  • Binding: Hardcover / 224 p.
  • Language: English
  • Product code: 9781402074905
  • DDC classification: 006.7

Full Description

With the fast growth of multimedia information, content-based video analysis, indexing and representation have attracted increasing attention in recent years. Many applications have emerged in these areas, such as video-on-demand, distributed multimedia systems, digital video libraries, distance learning/education, entertainment, surveillance and geographical information systems. The need for content-based video indexing and retrieval was also recognized by ISO/MPEG, and a new international standard called "Multimedia Content Description Interface" (or, in short, MPEG-7) was initiated in 1998 and finalized in September 2001. In this context, this book presents a systematic and thorough review of existing approaches and state-of-the-art techniques in the areas of video content analysis, indexing and representation. In addition, we specifically elaborate on a system that analyzes, indexes and abstracts movie content based on the integration of multiple media modalities. The content of each part of this book is briefly previewed below.
In the first part, we segment a video sequence into a set of cascaded shots, where a shot consists of one or more continuously recorded image frames. Both raw and compressed video data will be investigated. Moreover, considering that real TV programs always contain non-story units such as commercials, a novel commercial break detection/extraction scheme is developed which exploits both audio and visual cues to achieve robust results. Specifically, we first employ visual cues such as the video data statistics, the camera cut frequency, and the existence of delimiting black frames between commercials and programs to obtain coarse-level detection results.
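
As a concrete illustration of these coarse-level visual cues, below is a minimal sketch of frame-differencing cut detection combined with black-frame detection over a decoded video. This is not the book's implementation: it assumes OpenCV and NumPy are available, and the L1 histogram-difference metric, the function name analyze, and both thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

CUT_THRESHOLD = 0.45       # assumed L1 histogram-difference threshold for a camera cut
BLACK_LUMA_THRESHOLD = 20  # assumed mean-luminance threshold for a "black" frame

def analyze(video_path, bins=64):
    """Return frame indices of candidate camera cuts and black frames."""
    cap = cv2.VideoCapture(video_path)
    cuts, black_frames = [], []
    prev_hist, idx = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Work on the luminance (Y) channel, since raw-domain shot detection
        # in the book operates in the YUV color space.
        y = cv2.cvtColor(frame, cv2.COLOR_BGR2YUV)[:, :, 0]
        if y.mean() < BLACK_LUMA_THRESHOLD:
            black_frames.append(idx)          # candidate commercial delimiter
        hist = cv2.calcHist([y], [0], None, [bins], [0, 256]).ravel()
        hist /= hist.sum() + 1e-9             # normalize so frames of any size compare
        if prev_hist is not None and np.abs(hist - prev_hist).sum() > CUT_THRESHOLD:
            cuts.append(idx)                  # large inter-frame change: candidate cut
        prev_hist, idx = hist, idx + 1
    cap.release()
    return cuts, black_frames
```

Under this sketch, windows with an unusually high cut rate that are also bracketed by black frames would be flagged as coarse commercial-break candidates, to be refined with the audio cues described above.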

Contents

Dedication. List of Figures. List of Tables. Preface. Acknowledgments.

1: Introduction. 1. Audiovisual Content Analysis. 1.1. Audio Content Analysis. 1.2. Visual Content Analysis. 1.3. Audiovisual Content Analysis. 2. Video Indexing, Browsing and Abstraction. 3. MPEG-7 Standard. 4. Roadmap of The Book. 4.1. Video Segmentation. 4.2. Movie Content Analysis. 4.3. Movie Content Abstraction.

2: Background And Previous Work. 1. Visual Content Analysis. 1.1. Video Shot Detection. 1.2. Video Scene and Event Detection. 2. Audio Content Analysis. 2.1. Audio Segmentation and Classification. 2.2. Audio Analysis for Video Indexing. 3. Speaker Identification. 4. Video Abstraction. 4.1. Video Skimming. 4.2. Video Summarization. 5. Video Indexing and Retrieval.

3: Video Content Pre-Processing. 1. Shot Detection in Raw Data Domain. 1.1. YUV Color Space. 1.2. Metrics for Frame Differencing. 1.3. Camera Break Detection. 1.4. Gradual Transition Detection. 1.5. Camera Motion Detection. 1.6. Illumination Change Detection. 1.7. A Review of the Proposed System. 2. Shot Detection in Compressed Domain. 2.1. DC-image and DC-sequence. 3. Audio Feature Analysis. 4. Commercial Break Detection. 4.1. Features of A Commercial Break. 4.2. Feature Extraction. 4.3. The Proposed Detection Scheme. 5. Experimental Results. 5.1. Shot Detection Results. 5.2. Commercial Break Detection Results.

4: Content-Based Movie Scene And Event Extraction. 1. Movie Scene Extraction. 1.1. Sink-based Scene Construction. 1.2. Audiovisual-based Scene Refinement. 1.3. User Interaction. 2. Movie Event Extraction. 2.1. Sink Clustering and Categorization. 2.2. Event Extraction and Classification. 2.3. Integrating Speech and Face Information. 3. Experimental Results. 3.1. Scene Extraction Results. 3.2. Event Extraction Results.

5: Speaker Identification For Movies. 1. Supervised Speaker Identification for Movie Dialogs. 1.1. Feature Selection and Extraction. 1.2. Gaussian Mixture Model. 1.3. Likelihood Calculation and Score Normalization. 1.4. Speech Segment Isolation. 2. Adaptive Speaker Identification. 2.1. Face Detection, Recognition and Mouth Tracking. 2.2. Speech Segmentation and Clustering. 2.3. Initial Speaker Modeling. 2.4. Likelihood-based Speaker Identification. 2.5. Audiovisual Integration for Speaker Identification. 2.6. Unsupervised Speaker Model Adaptation. 3. Experimental Results. 3.1. Supervised Speaker Identification Results. 3.2. Adaptive Speaker Identification Results. 3.3. An Example of Movie Content Annotation.

6: Scene-Based Movie Summarization. 1. An Overview of the Proposed System. 2. Hierarchical Keyframe Extraction. 2.1. Scene Importance Computation. 2.2. Sink Importance Computation. 2.3. Sh