  • Topic Extraction with NMF & LDA

    """
    =======================================================================================
    Topic extraction with Non-negative Matrix Factorization and Latent Dirichlet Allocation
    =======================================================================================
    
    This is an example of applying Non-negative Matrix Factorization
    and Latent Dirichlet Allocation to a corpus of documents in order to
    extract additive models of the topic structure of the corpus.
    The output is a list of topics, each represented as a list of terms
    (weights are not shown).
    
    The default parameters (n_samples / n_features / n_topics) should make
    the example runnable in a couple of tens of seconds. You can try to
    increase the dimensions of the problem, but be aware that the time
    complexity is polynomial in NMF. In LDA, the time complexity is
    proportional to (n_samples * iterations).
    """
    
    # Author: Olivier Grisel <olivier.grisel@ensta.org>
    #         Lars Buitinck <L.J.Buitinck@uva.nl>
    #         Chyi-Kwei Yau <chyikwei.yau@gmail.com>
    # License: BSD 3 clause
    
    from __future__ import print_function
    from time import time
    
    from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
    from sklearn.decomposition import NMF, LatentDirichletAllocation
    from sklearn.datasets import fetch_20newsgroups
    
    n_samples = 2000
    n_features = 1000
    n_topics = 10
    n_top_words = 20
    
    
    def print_top_words(model, feature_names, n_top_words):
        for topic_idx, topic in enumerate(model.components_):
            print("Topic #%d:" % topic_idx)
            print(" ".join([feature_names[i]
                            for i in topic.argsort()[:-n_top_words - 1:-1]]))
        print()
    
    
    # Load the 20 newsgroups dataset and vectorize it. We use a few heuristics
    # to filter out useless terms early on: the posts are stripped of headers,
    # footers and quoted replies, and common English words, words occurring in
    # only one document or in at least 95% of the documents are removed.
    
    print("Loading dataset...")
    t0 = time()
    dataset = fetch_20newsgroups(shuffle=True, random_state=1,
                                 remove=('headers', 'footers', 'quotes'))
    data_samples = dataset.data[:n_samples]
    print("done in %0.3fs." % (time() - t0))
    
    # Use tf-idf features for NMF.
    print("Extracting tf-idf features for NMF...")
    tfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2, max_features=n_features,
                                       stop_words='english')
    t0 = time()
    tfidf = tfidf_vectorizer.fit_transform(data_samples)
    print("done in %0.3fs." % (time() - t0))
    
    # Use tf (raw term count) features for LDA.
    print("Extracting tf features for LDA...")
    tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2, max_features=n_features,
                                    stop_words='english')
    t0 = time()
    tf = tf_vectorizer.fit_transform(data_samples)
    print("done in %0.3fs." % (time() - t0))
    
    # Fit the NMF model
    print("Fitting the NMF model with tf-idf features,"
          "n_samples=%d and n_features=%d..."
          % (n_samples, n_features))
    t0 = time()
    nmf = NMF(n_components=n_topics, random_state=1, alpha=.1, l1_ratio=.5).fit(tfidf)
    print("done in %0.3fs." % (time() - t0))
    
    print("
    Topics in NMF model:")
    tfidf_feature_names = tfidf_vectorizer.get_feature_names()
    print_top_words(nmf, tfidf_feature_names, n_top_words)
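    
    # Not part of the original example, a small sketch: NMF's transform()
    # returns the document-topic weight matrix W; each row holds a document's
    # (non-negative) topic activations, so documents can be tagged with their
    # strongest topic.
    W = nmf.transform(tfidf)
    print("Dominant NMF topic of the first document: %d" % W[0].argmax())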
    
    print("Fitting LDA models with tf features, n_samples=%d and n_features=%d..."
          % (n_samples, n_features))
    lda = LatentDirichletAllocation(n_topics=n_topics, max_iter=5,
                                    learning_method='online', learning_offset=50.,
                                    random_state=0)
    t0 = time()
    lda.fit(tf)
    print("done in %0.3fs." % (time() - t0))
    
    print("
    Topics in LDA model:")
    tf_feature_names = tf_vectorizer.get_feature_names()
    print_top_words(lda, tf_feature_names, n_top_words)
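    
    # Not part of the original example, a small sketch: LDA's transform()
    # yields per-document topic distributions, and perplexity() gives a rough
    # goodness-of-fit measure on a bag-of-words matrix (lower is better).
    doc_topic = lda.transform(tf)
    print("Dominant LDA topic of the first document: %d" % doc_topic[0].argmax())
    print("LDA perplexity on the training counts: %.1f" % lda.perplexity(tf))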
    
  • Original article: https://www.cnblogs.com/stevenlk/p/6507043.html