Language Modeling for Information Retrieval (Information Retrieval Book Series, 13)



  • Binding: Hardcover / 258 p.
  • Language: ENG
  • Product code: 9781402012167
  • DDC classification: 025.04

Brief Description

The first collected volume of papers on language modeling for information retrieval. Contents include: Probabilistic Relevance Models Based on Document and Query Generation; A Probabilistic Approach to Term Translation for Cross-Lingual Retrieval; Using Compression-Based Language Models for Text Categorization; and more.

Full Description

A statistical language model, or more simply a language model, is a probabilistic mechanism for generating text. Such a definition is general enough to include an endless variety of schemes. However, a distinction should be made between generative models, which can in principle be used to synthesize artificial text, and discriminative techniques to classify text into predefined categories.

The first statistical language modeler was Claude Shannon. In exploring the application of his newly founded theory of information to human language, Shannon considered language as a statistical source, and measured how well simple n-gram models predicted or, equivalently, compressed natural text. To do this, he estimated the entropy of English through experiments with human subjects, and also estimated the cross-entropy of the n-gram models on natural text. The ability of language models to be quantitatively evaluated in this way is one of their important virtues.

Of course, estimating the true entropy of language is an elusive goal, aiming at many moving targets, since language is so varied and evolves so quickly. Yet fifty years after Shannon's study, language models remain, by all measures, far from the Shannon entropy limit in terms of their predictive power. However, this has not kept them from being useful for a variety of text processing tasks, and moreover can be viewed as encouragement that there is still great room for improvement in statistical language modeling.
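The evaluation the description attributes to Shannon, scoring an n-gram model by its cross-entropy on natural text, can be sketched in a few lines. This is a minimal toy illustration, not code from the book: it trains a character-bigram model with add-one smoothing (an assumed smoothing choice for simplicity) and reports bits per character on held-out text.

```python
import math
from collections import Counter

def bigram_cross_entropy(train_text, test_text):
    """Cross-entropy (bits per character) of a character-bigram model.

    Trains add-one-smoothed bigram probabilities on train_text and
    measures how well they predict test_text -- a toy version of the
    quantitative evaluation described above. Lower is better: a model
    that predicts the text well also compresses it well.
    """
    # Vocabulary over both texts so every test character has nonzero mass.
    vocab_size = len(set(train_text) | set(test_text))
    bigrams = Counter(zip(train_text, train_text[1:]))
    unigrams = Counter(train_text[:-1])
    pairs = list(zip(test_text, test_text[1:]))
    total_bits = 0.0
    for a, b in pairs:
        # Add-one (Laplace) smoothing: unseen bigrams still get probability.
        p = (bigrams[(a, b)] + 1) / (unigrams[a] + vocab_size)
        total_bits += -math.log2(p)
    return total_bits / len(pairs)
```

Under this sketch, text resembling the training data scores fewer bits per character than text the model has never seen, which is exactly the sense in which a better language model is a better predictor (and compressor) of natural text.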

Contents

1. Probabilistic Relevance Models Based on Document and Query Generation
2. Relevance Models in Information Retrieval
3. Language Modeling and Relevance
4. Contributions of Language Modeling to the Theory and Practice of IR
5. Language Models for Topic Tracking
6. A Probabilistic Approach to Term Translation for Cross-Lingual Retrieval
7. Using Compression-Based Language Models for Text Categorization
8. Applications of Score Distributions in Information Retrieval
9. An Unbiased Generative Model for Setting Dissemination Thresholds
10. Language Modeling Experiments in Non-Extractive Summarization