
    Naive Bayes and Applications

    A quick recap of Bayes' theorem

    When we have a large number of samples (with both features and class labels), it is very easy to estimate p(feature|class) by simple counting.

    Everyone is also familiar with the following formula:

    p(x)p(y|x) = p(y)p(x|y)

    So with a small rearrangement:

    p(feature)p(class|feature) = p(class)p(feature|class)
    p(class|feature) = p(class)p(feature|class) / p(feature)
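
    To sanity-check the formula with toy numbers (purely made up for illustration, not from any dataset): suppose p(class) = 0.3, p(feature|class) = 0.4 and p(feature) = 0.15.

    p_class = 0.3                  # p(class), the prior
    p_feature_given_class = 0.4    # p(feature|class), estimated by counting
    p_feature = 0.15               # p(feature), how often the feature occurs overall
    print p_class * p_feature_given_class / p_feature   # p(class|feature) = 0.8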

    The independence assumption

    This looks simple, but in practice your features may be high-dimensional:

    p(features|class) = p(f0, f1, …, fn | c)

    Even with just 2 dimensions, we can simply write:

    p(f0, f1 | c) = p(f1 | c, f0) p(f0 | c)

    At this point we add a really powerful assumption: the features are independent of each other. That gives us:

    p(f0, f1 | c) = p(f1 | c) p(f0 | c)

    Which, in general, is:

    p(f0, f1, …, fn | c) = ∏_{i=0}^{n} p(fi | c)
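
    A minimal sketch of what the assumption buys us (the words and counts below are made up for illustration): each p(fi|c) is estimated on its own by counting, and the joint likelihood is simply their product.

    # hypothetical word counts inside one class c (made-up numbers, not real data)
    counts_in_class = {u'股票': 30, u'上涨': 12}
    total_words_in_class = 200.0
    likelihood = 1.0
    for word in [u'股票', u'上涨']:
        likelihood *= counts_in_class.get(word, 0) / total_words_in_class   # each p(fi|c) on its own
    print likelihood   # p(f0,f1|c) = p(f0|c) * p(f1|c) under the independence assumption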

    The Bayes classifier

    OK, back to machine learning. What we actually do is compute a probability p(ci) for each class and the conditional probabilities p(fj|ci) for all features; then, when classifying, we use Bayes' rule to pick the most likely class:

    p(classi | f0, f1, …, fn) = p(classi) ∏_{j=0}^{n} p(fj | ci) / p(f0, f1, …, fn)
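
    A minimal sketch of that decision rule (the helper and its toy dictionaries are hypothetical, not part of the notebook below): since p(f0, f1, …, fn) is the same for every class it can be dropped, and in practice we sum log-probabilities instead of multiplying, which avoids numerical underflow.

    import math

    def naive_bayes_predict(priors, cond_probs, features):
        # priors: {class: p(class)};  cond_probs: {class: {feature: p(feature|class)}}
        best_class, best_score = None, float('-inf')
        for c in priors:
            score = math.log(priors[c])
            for f in features:
                score += math.log(cond_probs[c].get(f, 1e-6))   # tiny floor in place of proper smoothing
            if score > best_score:
                best_class, best_score = c, score
        return best_class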

    A text classification problem

    Now let's look at a text classification problem, the classic task of classifying news by topic, and how to tackle it with Naive Bayes.

    In [2]:
    #coding: utf-8
    import os
    import time
    import random
    import jieba  # Chinese word segmentation
    #import nltk  # English text processing / the flag='nltk' branch below
    import sklearn
    from sklearn.naive_bayes import MultinomialNB
    import numpy as np
    import pylab as pl
    import matplotlib.pyplot as plt
    In [4]:
    # crude word de-duplication
    def make_word_set(words_file):
        words_set = set()   # use a set to collect every distinct word in the file
        with open(words_file, 'r') as fp:
            for line in fp.readlines():
                word = line.strip().decode("utf-8")
                if len(word)>0 and word not in words_set: # de-duplicate
                    words_set.add(word)
        return words_set
    In [5]:
    # Text processing, i.e. the sample-generation step
    # In the Sogou dataset, the texts of each category are stored in their own sub-folder
    
    def text_processing(folder_path, test_size=0.2):
        folder_list = os.listdir(folder_path)
        data_list = []
        class_list = []
    
        # walk through each class folder and read its contents
        for folder in folder_list:
            new_folder_path = os.path.join(folder_path, folder)
            files = os.listdir(new_folder_path)
            # read the files
            j = 1
            for file in files:
                if j > 100: # only take 100 files per class to avoid running out of memory; comment this out to use them all
                    break
                with open(os.path.join(new_folder_path, file), 'r') as fp:
                   raw = fp.read()
                ## yes, the ubiquitous jieba Chinese word segmentation
                jieba.enable_parallel(4) # enable parallel segmentation with 4 processes (not supported on Windows)
                word_cut = jieba.cut(raw, cut_all=False) # precise-mode segmentation; returns an iterable generator
                word_list = list(word_cut) # turn the generator into a list; each word is a unicode string
                jieba.disable_parallel() # switch parallel segmentation back off
                
                data_list.append(word_list) # the data list (one word list per document)
                class_list.append(folder.decode('utf-8')) # the class label list (folder name = class)
                j += 1
        
        ## crude train/test split after shuffling: 80% of the data becomes the training set, 20% the test set
        data_class_list = zip(data_list, class_list)
        random.shuffle(data_class_list)
        index = int(len(data_class_list)*test_size)+1
        train_list = data_class_list[index:]
        test_list = data_class_list[:index]
        train_data_list, train_class_list = zip(*train_list)
        test_data_list, test_class_list = zip(*test_list)
        
        # this could also be done with sklearn's built-in train_test_split, e.g.:
        #train_data_list, test_data_list, train_class_list, test_class_list = sklearn.cross_validation.train_test_split(data_list, class_list, test_size=test_size)
        
    
        # count word frequencies into all_words_dict
        # done by hand: walk over every word in the training data and bump its count by 1;
        # once counting is finished, the more frequent (more important) words can be put first,
        # so later steps can use only the high-frequency words instead of the whole vocabulary
        all_words_dict = {}
        for word_list in train_data_list:
            for word in word_list:
                if all_words_dict.has_key(word):
                    all_words_dict[word] += 1
                else:
                    all_words_dict[word] = 1
    
        # sort in descending order of frequency via the key function
        all_words_tuple_list = sorted(all_words_dict.items(), key=lambda f:f[1], reverse=True) # the built-in sorted wants a list of (word, count) items
        all_words_list = list(zip(*all_words_tuple_list)[0])
    
        return all_words_list, train_data_list, test_data_list, train_class_list, test_class_list
    In [6]:
    def words_dict(all_words_list, deleteN, stopwords_set=set()):
        # select feature words: skip the first deleteN entries of all_words_list, then take at most 1000
        feature_words = []
        n = 1
        for t in range(deleteN, len(all_words_list), 1):
            if n > 1000: # cap feature_words at 1000 dimensions
                break
                
            if not all_words_list[t].isdigit() and all_words_list[t] not in stopwords_set and 1<len(all_words_list[t])<5:
                feature_words.append(all_words_list[t])
                n += 1
        return feature_words
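
    To make the deleteN parameter concrete (a toy vocabulary, not the Sogou one): all_words_list is sorted by descending frequency, so deleteN simply skips the deleteN most frequent words, which are often uninformative filler, before collecting up to 1000 feature words.

    toy_all_words = [u'我们', u'可以', u'股票', u'比赛', u'电影']   # already sorted by descending frequency
    print words_dict(toy_all_words, 2)               # skips the 2 most frequent words
    print words_dict(toy_all_words, 0, {u'我们'})     # or drop them via the stopword set instead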
    In [7]:
    # Text feature extraction
    def text_features(train_data_list, test_data_list, feature_words, flag='nltk'):
        def text_features(text, feature_words):
            text_words = set(text)
            ## -----------------------------------------------------------------------------------
            if flag == 'nltk':
                ## nltk features: a dict mapping each feature word to 1/0
                features = {word:1 if word in text_words else 0 for word in feature_words}
            elif flag == 'sklearn':
                ## sklearn features: one long 0/1 list per sample, marking for every word
                ## in the vocabulary whether it appears in that sample
                features = [1 if word in text_words else 0 for word in feature_words]
            else:
                features = []
            ## -----------------------------------------------------------------------------------
            return features
        train_feature_list = [text_features(text, feature_words) for text in train_data_list]
        test_feature_list = [text_features(text, feature_words) for text in test_data_list]
        return train_feature_list, test_feature_list
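
    A quick illustration of the sklearn-style 0/1 vectors this produces (toy inputs, not the Sogou data):

    toy_train = [[u'股票', u'上涨'], [u'球队', u'比赛']]
    toy_test  = [[u'比赛', u'股票']]
    toy_vocab = [u'股票', u'上涨', u'比赛']
    train_f, test_f = text_features(toy_train, toy_test, toy_vocab, flag='sklearn')
    print train_f   # [[1, 1, 0], [0, 0, 1]]
    print test_f    # [[1, 0, 1]]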
    In [8]:
    # classify and report accuracy on the test set
    def text_classifier(train_feature_list, test_feature_list, train_class_list, test_class_list, flag='nltk'):
        ## -----------------------------------------------------------------------------------
        if flag == 'nltk':
            ## nltk's Naive Bayes classifier
            train_flist = zip(train_feature_list, train_class_list)
            test_flist = zip(test_feature_list, test_class_list)
            classifier = nltk.classify.NaiveBayesClassifier.train(train_flist)
            test_accuracy = nltk.classify.accuracy(classifier, test_flist)
        elif flag == 'sklearn':
            ## sklearn's multinomial Naive Bayes classifier
            classifier = MultinomialNB().fit(train_feature_list, train_class_list)
            test_accuracy = classifier.score(test_feature_list, test_class_list)
        else:
            test_accuracy = []
        return test_accuracy
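
    The sklearn branch above boils down to sklearn's usual fit / score calls; here is a standalone sketch on the toy 0/1 vectors from the illustration above (the class labels are made up):

    clf = MultinomialNB().fit([[1, 1, 0], [0, 0, 1]], [u'财经', u'体育'])
    print clf.predict([[1, 0, 1]])           # predicted class for the toy test vector
    print clf.score([[1, 0, 1]], [u'财经'])   # fraction of toy test samples classified correctly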
    In [13]:
    print "start"
    
    ## text preprocessing
    folder_path = './Database/SogouC/Sample'   # only a small sample of the corpus, not the full one, so accuracy is modest: roughly 50-70%
    all_words_list, train_data_list, test_data_list, train_class_list, test_class_list = text_processing(folder_path, test_size=0.2)
    
    # build stopwords_set
    stopwords_file = './stopwords_cn.txt'
    stopwords_set = make_word_set(stopwords_file)
    
    ## text feature extraction and classification
    # flag = 'nltk'
    flag = 'sklearn'
    deleteNs = range(0, 1000, 20)
    test_accuracy_list = []
    for deleteN in deleteNs:
        # feature_words = words_dict(all_words_list, deleteN)
        feature_words = words_dict(all_words_list, deleteN, stopwords_set)
        train_feature_list, test_feature_list = text_features(train_data_list, test_data_list, feature_words, flag)
        test_accuracy = text_classifier(train_feature_list, test_feature_list, train_class_list, test_class_list, flag)
        test_accuracy_list.append(test_accuracy)
    print test_accuracy_list
    
    # evaluate the results
    #plt.figure()
    plt.plot(deleteNs, test_accuracy_list)
    plt.title('Relationship of deleteNs and test_accuracy')
    plt.xlabel('deleteNs')
    plt.ylabel('test_accuracy')
    plt.show()
    #plt.savefig('result.png')
    
    print "finished"
    start
    [0.63157894736842102, 0.63157894736842102, 0.63157894736842102, 0.57894736842105265, 0.63157894736842102, 0.57894736842105265, 0.57894736842105265, 0.57894736842105265, 0.57894736842105265, 0.57894736842105265, 0.63157894736842102, 0.63157894736842102, 0.57894736842105265, 0.57894736842105265, 0.57894736842105265, 0.57894736842105265, 0.57894736842105265, 0.57894736842105265, 0.57894736842105265, 0.57894736842105265, 0.57894736842105265, 0.63157894736842102, 0.68421052631578949, 0.63157894736842102, 0.63157894736842102, 0.57894736842105265, 0.52631578947368418, 0.63157894736842102, 0.63157894736842102, 0.57894736842105265, 0.57894736842105265, 0.57894736842105265, 0.57894736842105265, 0.63157894736842102, 0.57894736842105265, 0.68421052631578949, 0.57894736842105265, 0.63157894736842102, 0.63157894736842102, 0.63157894736842102, 0.63157894736842102, 0.63157894736842102, 0.68421052631578949, 0.63157894736842102, 0.57894736842105265, 0.57894736842105265, 0.57894736842105265, 0.63157894736842102, 0.63157894736842102, 0.63157894736842102]
    finished


Original post: https://www.cnblogs.com/Josie-chen/p/9125045.html