Latent Semantic Mapping: Principles & Applications
Latent semantic mapping (LSM) is a generalization of latent semantic analysis (LSA), a paradigm originally developed to capture hidden word patterns in a text document corpus. In information retrieval, LSA enables retrieval on the basis of conceptual content, instead of merely matching words between queries and documents. It operates under the assumption that there is some latent semantic structure in the data, which is partially obscured by the randomness of word choice with respect to retrieval. Algebraic and/or statistical techniques are brought to bear to estimate this structure and get rid of the obscuring noise. This results in a parsimonious continuous parameter description of words and documents, which then replaces the original parameterization in indexing and retrieval.

This approach exhibits three main characteristics: discrete entities (words and documents) are mapped onto a continuous vector space; this mapping is determined by global correlation patterns; and dimensionality reduction is an integral part of the process. Such fairly generic properties are advantageous in a variety of different contexts, which motivates a broader interpretation of the underlying paradigm. The outcome (LSM) is a data-driven framework for modeling meaningful global relationships implicit in large volumes of (not necessarily textual) data.

This monograph gives a general overview of the framework, and underscores the multifaceted benefits it can bring to a number of problems in natural language understanding and spoken language processing. It concludes with a discussion of the inherent tradeoffs associated with the approach, and some perspectives on its general applicability to data-driven information extraction.
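The three characteristics above (a continuous vector space, global correlation patterns, and dimensionality reduction) are conventionally realized through a truncated singular value decomposition of a term–document co-occurrence matrix. The following is a minimal sketch of that idea, assuming a toy co-occurrence matrix and a hypothetical retained dimensionality `R`; it is illustrative only, not the book's implementation.

```python
import numpy as np

# Toy term-document co-occurrence matrix W (terms x documents).
# Entry W[i, j] counts occurrences of term i in document j.
W = np.array([
    [2, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 2, 0, 1],
    [0, 0, 1, 2],
    [1, 0, 2, 1],
], dtype=float)

# Singular value decomposition: W = U diag(s) Vt.
U, s, Vt = np.linalg.svd(W, full_matrices=False)

# Dimensionality reduction: keep only the top R singular values,
# discarding the "obscuring noise" in the remaining dimensions.
R = 2  # hypothetical retained dimensionality of the latent space
U_r, S_r, Vt_r = U[:, :R], np.diag(s[:R]), Vt[:R, :]

# Discrete entities mapped onto a continuous vector space:
# each term becomes a row of U_r S_r, each document a column of S_r Vt_r.
term_vectors = U_r @ S_r          # shape (n_terms, R)
doc_vectors = (S_r @ Vt_r).T      # shape (n_documents, R)

def cosine(a, b):
    """Conceptual closeness as cosine similarity in the reduced space."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim = cosine(doc_vectors[0], doc_vectors[2])
```

Because the mapping is driven by global co-occurrence patterns across the whole matrix, two documents can end up close in the reduced space even when they share few literal words, which is what enables retrieval by conceptual content rather than word matching.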