
    Vector Space Model

    The basic idea is to represent each document as a vector of certain weighted word frequencies. In order to do so, the following parsing and extraction steps are needed.

    1. Ignoring case, extract all unique words from the entire set of documents.
    2. Eliminate non-content-bearing "stopwords" such as "a", "and", "the", etc. For sample lists of stopwords, see [Frakes and Baeza-Yates, Chapter 7].
    3. For each document, count the number of occurrences of each word.
    4. Using heuristic or information-theoretic criteria, eliminate non-content-bearing "high-frequency" and "low-frequency" words [Salton].
    5. After the above elimination, suppose $w$ unique words remain. Assign a unique identifier between $1$ and $w$ to each remaining word, and a unique identifier between $1$ and $d$ to each document.
    The above steps outline a simple preprocessing scheme. In addition, one may extract word phrases such as "New York", and one may reduce each word to its "root" or "stem", thus eliminating plurals, tenses, prefixes, and suffixes [Frakes and Baeza-Yates, Chapter 8].
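    To make steps 1-5 concrete, here is a minimal Python sketch. The stopword list is a tiny illustrative sample, and the pruning thresholds `min_df` and `max_df_ratio` are hypothetical stand-ins for the heuristic criteria of step 4; none of these names come from the text above.

```python
import re
from collections import Counter

# Tiny illustrative stopword list; real lists are much longer
# (see Frakes and Baeza-Yates, Chapter 7).
STOPWORDS = {"a", "an", "and", "the", "of", "to", "in"}

def preprocess(docs, min_df=2, max_df_ratio=0.9):
    """Steps 1-5: tokenize, drop stopwords, count, prune, assign identifiers."""
    # Steps 1-3: lowercase, extract words, count occurrences f_ji per document.
    counts = [Counter(t for t in re.findall(r"[a-z]+", doc.lower())
                      if t not in STOPWORDS)
              for doc in docs]
    # Document frequency d_j: the number of documents containing word j.
    df_all = Counter(w for c in counts for w in c)
    # Step 4: prune very rare and very common words (hypothetical thresholds).
    d = len(docs)
    df = {w: n for w, n in df_all.items() if min_df <= n <= max_df_ratio * d}
    # Step 5: identifiers for the w surviving words (0-based here; 1..w in the text).
    word_id = {w: j for j, w in enumerate(sorted(df))}
    return counts, df, word_id
```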

    The above preprocessing yields the number of occurrences of word $j$ in document $i$, say $f_{ji}$, and the number of documents which contain word $j$, say $d_j$. Using these counts, we can represent the $i$-th document as a $w$-dimensional vector $\mathbf{x}_i$ as follows. For $1 \leq j \leq w$, set the $j$-th component of $\mathbf{x}_i$ to be the product of three terms

    $$ x_{ji} = t_{ji} \cdot g_j \cdot s_i, $$
    where $t_{ji}$ is the term weighting component and depends only on $f_{ji}$, $g_j$ is the global weighting component and depends on $d_j$, and $s_i$ is the normalization component for $\mathbf{x}_i$. Intuitively, $t_{ji}$ captures the relative importance of a word in a document, while $g_j$ captures the overall importance of a word in the entire set of documents. The objective of such weighting schemes is to enhance discrimination between various document vectors for better retrieval effectiveness [Salton and Buckley].
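    The three-term product above translates directly into code. The following sketch builds a single document vector from pluggable term, global, and normalization components; the sparse dict representation and the argument names (`term_fn`, `global_fn`, `norm_fn`) are illustrative choices, not part of the original formulation.

```python
def document_vector(f_i, df, d, term_fn, global_fn, norm_fn):
    """Compute x_ji = t_ji * g_j * s_i for one document.

    f_i: word -> f_ji counts for document i (zero entries omitted);
    df:  word -> d_j over the retained vocabulary; d: number of documents.
    """
    # t_ji * g_j for each word that survived preprocessing.
    tg = {w: term_fn(f) * global_fn(df[w], d) for w, f in f_i.items() if w in df}
    # s_i is a single scalar computed from the whole vector.
    s = norm_fn(tg.values())
    return {w: v * s for w, v in tg.items()}
```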

    There are many schemes for selecting the term, global, and normalization components; see [Kolda] for various possibilities. In this paper we use the popular $\mathsf{tfn}$ scheme, known as normalized term frequency-inverse document frequency. This scheme uses $t_{ji} = f_{ji}$, $g_j = \log(d / d_j)$, and $s_i = \left( \sum_{j=1}^{w} (t_{ji} g_j)^2 \right)^{-1/2}$. Note that this normalization implies that $\Vert \mathbf{x}_i \Vert = 1$, i.e., each document vector lies on the surface of the unit sphere in $\mathbb{R}^w$. Intuitively, the effect of normalization is to retain only the proportions in which words occur in a document. This ensures that documents dealing with the same subject matter (that is, using similar words) but differing in length lead to similar document vectors.
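    Plugging the $\mathsf{tfn}$ choices into the `document_vector` sketch above gives normalized tf-idf vectors; the helper names carry over from the earlier hypothetical snippets.

```python
import math

def tfn_vector(f_i, df, d):
    """tfn scheme: t_ji = f_ji, g_j = log(d / d_j), s_i = inverse Euclidean norm."""
    return document_vector(
        f_i, df, d,
        term_fn=lambda f: f,
        global_fn=lambda d_j, d: math.log(d / d_j),
        norm_fn=lambda vals: 1.0 / math.sqrt(sum(v * v for v in vals)),
    )

# Usage with the preprocessing sketch above:
#   counts, df, _ = preprocess(docs)
#   x0 = tfn_vector(counts[0], df, len(docs))
#   sum(v * v for v in x0.values())  # ~= 1.0, i.e. ||x_0|| = 1
```

    Note that a word appearing in every document gets $g_j = \log(1) = 0$ and contributes nothing to the vector, which is one more reason step 4 prunes high-frequency words before weighting.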
