  • Classification Based on Probability Theory: Naive Bayes

     

    In the previous two chapters we asked the classifier to make a hard decision: a definite answer to the question "which class does this data instance belong to?"

    A classifier can be wrong, though. In that case we can instead ask it for its best guess of the class, together with a probability estimate for that guess.

    Suppose we have a dataset made up of two classes of data: round points and triangle points.

    Let p1(x,y) be the probability that the data point (x,y) belongs to class 1 (the round points),

    and p2(x,y) the probability that (x,y) belongs to class 2 (the triangle points).

    For a new data point (x,y), we can then decide its class with the following rule:

    if p1(x,y) > p2(x,y), then class 1

    if p2(x,y) > p1(x,y), then class 2

    In other words, choose the class with the higher probability. This is the core idea of Bayesian decision theory: always pick the decision with the highest probability.
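    The rule is easy to express in code. A minimal sketch (p1 and p2 here stand for hypothetical probability functions, not anything defined in this post):

    def classify(x, y, p1, p2):
        # return the class whose probability function scores (x, y) higher
        return 1 if p1(x, y) > p2(x, y) else 2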

     

    This relies on the conditional probability (Bayes) formula:

    p(c|x) = p(x|c) p(c) / p(x)
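    For a quick sense of the formula with made-up numbers: if p(c1) = 0.5, p(x|c1) = 0.08, and p(x) = 0.1, then p(c1|x) = 0.08 × 0.5 / 0.1 = 0.4.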

     

    Naive Bayes is a popular algorithm for document classification. We look at the words appearing in a document and treat the presence or absence of each word as a feature, so the number of features is as large as the vocabulary.

    The "naive" in naive Bayes refers to two simplifying assumptions: first, the features are independent, i.e. the probability of one word appearing has nothing to do with which words appear next to it; second, every feature is equally important.
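    Written out, independence is what makes the computation tractable: for a document vector w = (w0, w1, ..., wN), p(w|ci) factorizes into p(w0|ci)·p(w1|ci)·...·p(wN|ci), so each per-word probability can be estimated on its own. That is exactly what the training function in section 2 computes.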

     

    An example: text classification with Python.

    1. Converting word lists to vectors

    # Return the tokenized documents and their manually labeled class labels
    def loadDataSet():
        postingList = [['my', 'dog', 'has', 'flea', 'problems', 'help', 'please'],
                       ['maybe', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid'],
                       ['my', 'dalmation', 'is', 'so', 'cute', 'I', 'love', 'him'],
                       ['stop', 'posting', 'stupid', 'worthless', 'garbage'],
                       ['mr', 'licks', 'ate', 'my', 'steak', 'how', 'to', 'stop', 'him'],
                       ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']]
        classVec = [0, 1, 0, 1, 0, 1]  # 1 = contains abusive language, 0 = does not
        return postingList, classVec
    
    
    # Build the list of all unique tokens appearing in any document
    def createVocabList(dataSet):
        vocabSet = set([])
        for document in dataSet:
            # take the union of the two sets
            vocabSet = vocabSet | set(document)
        return list(vocabSet)
    
    
    def setOfWords2Vec(vocabList, inputSet):
        # build a document vector whose elements are 1 or 0, indicating
        # whether each vocabulary word appears in the input document
        returnVec = [0] * len(vocabList)
        for word in inputSet:
            if word in vocabList:
                returnVec[vocabList.index(word)] = 1
            else:
                print("the word: %s is not in my Vocabulary!" % word)
        return returnVec

    Test run

     

    2. Training the algorithm: computing probabilities from word vectors

    Pseudocode:

    Count the number of documents in each class

    For each training document:

      For each class:

        If a token appears in the document -> increment that token's count

        Increment the total token count

    For each class:

      For each token:

        Divide the token's count by the total number of tokens to get the conditional probability

    Return the conditional probabilities for each class

    # Naive Bayes classifier training function
    def trainNB0(trainMatrix, trainCategory):
        # number of training documents
        numTrainDocs = len(trainMatrix)
        # length of each word vector
        numWords = len(trainMatrix[0])
        # fraction of all documents that belong to class 1
        pAbusive = sum(trainCategory) / float(numTrainDocs)
        # naive initialization, replaced below to avoid zero probabilities:
        # p0Num=zeros(numWords)
        # p1Num=zeros(numWords)
        # p0Denom=0.0
        # p1Denom=0.0
        p0Num = ones(numWords)
        p1Num = ones(numWords)
        p0Denom = 2.0
        p1Denom = 2.0
        for i in range(numTrainDocs):
            if trainCategory[i] == 1:
                # vector addition: per-token counts over all class-1 documents
                p1Num += trainMatrix[i]
                # total number of tokens seen in class-1 documents
                p1Denom += sum(trainMatrix[i])
            else:
                p0Num += trainMatrix[i]
                p0Denom += sum(trainMatrix[i])
        # compute p(wi|c1) and p(wi|c0) with NumPy array division;
        # take logs to avoid underflow (plain version kept for reference):
        # p1Vect = p1Num / p1Denom
        # p0Vect = p0Num / p0Denom
        p1Vect = log(p1Num / p1Denom)
        p0Vect = log(p0Num / p0Denom)
        return p0Vect, p1Vect, pAbusive

    Test run
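    With the toy data above, pAbusive should come out as 0.5 (three abusive posts out of six), and the largest entry in p1V corresponds to 'stupid', the word most indicative of the abusive class.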

     

    3. Testing the algorithm: modifying the classifier for real-world conditions

     (1) Avoid zero probabilities: if any single p(wi|c) is 0, the whole product is 0, so initialize every token count to 1 and the denominators to 2 (Laplace smoothing):

    p0Num = ones(numWords)
    p1Num = ones(numWords)
    p0Denom = 2.0
    p1Denom = 2.0

    (2) Avoid underflow: replace f(x) with ln(f(x)). Products of many small probabilities become sums of logs, and since ln is monotonically increasing, the comparison between classes is unchanged.
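
    A tiny illustration of why this matters: the product of many small probabilities underflows to 0.0 in floating point, while the sum of their logs stays perfectly usable:

    # illustrative only: product of 1000 small probabilities vs. sum of logs
    from numpy import log
    p, logp = 1.0, 0.0
    for _ in range(1000):
        p *= 0.01          # the probability product underflows to 0.0
        logp += log(0.01)  # the log sum stays finite (about -4605.2)
    print(p, logp)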

    The classification function:

    def classifyNB(vec2Classify, p0Vec, p1Vec, pClass1):
        # the element-wise product picks out the log-probabilities of the
        # words present in the document; adding the log prior gives the
        # log posterior for each class (up to a shared constant)
        p1 = sum(vec2Classify * p1Vec) + log(pClass1)
        p0 = sum(vec2Classify * p0Vec) + log(1.0 - pClass1)
        if p1 > p0:
            return 1
        else:
            return 0

    Test:

    listOPosts, listClasses = loadDataSet()
    myVocabList = createVocabList(listOPosts)
    print(myVocabList)
    print(listOPosts[0])
    print(setOfWords2Vec(myVocabList, listOPosts[0]))
    print(listOPosts[3])
    print(setOfWords2Vec(myVocabList, listOPosts[3]))
    trainMat = []
    for postinDoc in listOPosts:
        trainMat.append(setOfWords2Vec(myVocabList, postinDoc))
    p0V, p1V, pAb = trainNB0(trainMat, listClasses)
    print(p0V)
    print(p1V)
    print(pAb)

    testEntry = ['love', 'my', 'dalmation']
    thisDoc = array(setOfWords2Vec(myVocabList, testEntry))
    print(testEntry, 'classified as:', classifyNB(thisDoc, p0V, p1V, pAb))

    testEntry = ['stupid', 'garbage']
    thisDoc = array(setOfWords2Vec(myVocabList, testEntry))
    print(testEntry, 'classified as:', classifyNB(thisDoc, p0V, p1V, pAb))
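    With the toy data, 'love my dalmation' should come out as class 0 (not abusive) and 'stupid garbage' as class 1 (abusive).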

     

    4. The bag-of-words model. The set-of-words model above only records whether a word appears; the bag-of-words model counts how many times it appears, which can carry extra information:

     

    def bagOfWords2VecMN(vocabList, inputSet):
        returnVec = [0] * len(vocabList)
        for word in inputSet:
            if word in vocabList:
                # += rather than = 1: count occurrences, not just presence
                returnVec[vocabList.index(word)] += 1
        return returnVec
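    A quick comparison of the two models (illustrative; setOfWords2Vec will also warn that 'ate' and 'steak' are not in this toy vocabulary):

    vocab = ['dog', 'stupid', 'my']
    doc = ['my', 'dog', 'ate', 'my', 'steak']
    print(setOfWords2Vec(vocab, doc))    # [1, 0, 1]: presence only
    print(bagOfWords2VecMN(vocab, doc))  # [1, 0, 2]: 'my' counted twice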

     

    5. Filtering spam email with naive Bayes

    def textParse(bigString):
        import re
        # split on runs of non-alphanumeric characters
        listOfTokens = re.split(r'\W+', bigString)
        # keep tokens longer than two characters, lowercased
        return [tok.lower() for tok in listOfTokens if len(tok) > 2]
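    # a quick illustrative check of the tokenizer:
    #   textParse('This book is the best Book on Python!')
    #   -> ['this', 'book', 'the', 'best', 'book', 'python']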
    
    
    def spamTest():
        docList = []
        classList = []
        fullText = []
        # read 25 spam and 25 ham emails, tokenize, and label them
        for i in range(1, 26):
            wordList = textParse(open('email/spam/%d.txt' % i).read())
            docList.append(wordList)
            fullText.extend(wordList)
            classList.append(1)
            wordList = textParse(open('email/ham/%d.txt' % i).read())
            docList.append(wordList)
            fullText.extend(wordList)
            classList.append(0)
        vocabList = createVocabList(docList)
        # hold-out cross-validation: 10 of the 50 documents are chosen
        # at random as the test set; the remaining 40 are used to train
        trainingSet = list(range(50))
        testSet = []
        for i in range(10):
            randIndex = int(random.uniform(0, len(trainingSet)))
            testSet.append(trainingSet[randIndex])
            del trainingSet[randIndex]
        trainMat = []
        trainClasses = []
        for docIndex in trainingSet:
            trainMat.append(setOfWords2Vec(vocabList, docList[docIndex]))
            trainClasses.append(classList[docIndex])
        p0V, p1V, pSpam = trainNB0(array(trainMat), array(trainClasses))
        errorCount = 0
        for docIndex in testSet:
            wordVector = setOfWords2Vec(vocabList, docList[docIndex])
            if classifyNB(array(wordVector), p0V, p1V, pSpam) != classList[docIndex]:
                errorCount += 1
                print('classification error')
        print('the error rate is: ', float(errorCount) / len(testSet))

    Run it; because the test set is chosen at random, the error rate differs from run to run.
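    For a more stable estimate, average over several runs. A minimal sketch, assuming spamTest is modified to end with return float(errorCount) / len(testSet) instead of only printing it:

    numRuns = 10
    total = 0.0
    for _ in range(numRuns):
        total += spamTest()  # assumes spamTest() returns its error rate
    print('average error rate:', total / numRuns)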

    6. Using naive Bayes to reveal regional attitudes from personal ads

     This requires the feedparser package.

    (1) Collect data: import RSS feeds

    RSS feed classifier and a function to remove the most frequent words:

    # Example: using naive Bayes to reveal regional attitudes in personal ads
    # RSS feed classifier and high-frequency word removal
    def calcMostFreq(vocabList, fullText):
        freqDict = {}
        for token in vocabList:
            # count how many times each vocabulary word occurs
            freqDict[token] = fullText.count(token)
        # sort by count, from most to least frequent
        sortedFreq = sorted(freqDict.items(), key=operator.itemgetter(1), reverse=True)
        # return the 30 most frequent words
        return sortedFreq[:30]
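    # note: the standard library offers an essentially equivalent one-liner:
    #   collections.Counter(fullText).most_common(30)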
    
    
    def localWords(feed1, feed0):
        docList = []
        classList = []
        fullText = []
        # use the length of the shorter of the two feeds
        minLen = min(len(feed1['entries']), len(feed0['entries']))
        for i in range(minLen):
            # one RSS entry at a time
            wordList = textParse(feed1['entries'][i]['summary'])
            docList.append(wordList)
            fullText.extend(wordList)
            classList.append(1)
            wordList = textParse(feed0['entries'][i]['summary'])
            docList.append(wordList)
            fullText.extend(wordList)
            classList.append(0)
        vocabList = createVocabList(docList)
        # find the 30 words that occur most often across both feeds
        top30Words = calcMostFreq(vocabList, fullText)
        for pairW in top30Words:
            if pairW[0] in vocabList:
                # remove those 30 high-frequency words from the vocabulary
                vocabList.remove(pairW[0])
        trainingSet = list(range(2 * minLen))
        testSet = []
        # hold out 20 entries at random as the test set
        for i in range(20):
            randIndex = int(random.uniform(0, len(trainingSet)))
            testSet.append(trainingSet[randIndex])
            del trainingSet[randIndex]
        # train on the remaining entries
        trainMat = []
        trainClasses = []
        for docIndex in trainingSet:
            trainMat.append(bagOfWords2VecMN(vocabList, docList[docIndex]))
            trainClasses.append(classList[docIndex])
        p0V, p1V, pSpam = trainNB0(array(trainMat), array(trainClasses))
        # classify the held-out entries and report the error rate
        errorCount = 0
        for docIndex in testSet:
            wordVector = bagOfWords2VecMN(vocabList, docList[docIndex])
            if classifyNB(array(wordVector), p0V, p1V, pSpam) != classList[docIndex]:
                errorCount += 1
        print('the error rate is: ', float(errorCount) / len(testSet))
        return vocabList, p0V, p1V

    (2) Analyze the data: display region-related words

    A function to display the most characteristic words:

    def getTopWords(ny, sf):
        # show every word whose conditional log-probability exceeds -4.5
        # (i.e. p(w|c) > e**-4.5, about 0.011)
        vocabList, p0V, p1V = localWords(ny, sf)
        topNY = []
        topSF = []
        for i in range(len(p0V)):
            if p0V[i] > -4.5:
                topSF.append((vocabList[i], p0V[i]))
            if p1V[i] > -4.5:
                topNY.append((vocabList[i], p1V[i]))
        sortedSF = sorted(topSF, key=lambda pair: pair[1], reverse=True)
        print("SF**SF**SF**SF**SF**SF**SF**SF**SF**SF**SF")
        for item in sortedSF:
            print(item[0])
    
        sortedNY = sorted(topNY, key=lambda pair: pair[1], reverse=True)
        print("NY**NY**NY**NY**NY**NY**NY**NY**NY**NY**NY")
        for item in sortedNY:
            print(item[0])

    Full code (bayes.py):

    from numpy import *
    import feedparser
    import operator
    
    
    # Return the tokenized documents and their manually labeled class labels
    def loadDataSet():
        postingList = [['my', 'dog', 'has', 'flea', 'problems', 'help', 'please'],
                       ['maybe', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid'],
                       ['my', 'dalmation', 'is', 'so', 'cute', 'I', 'love', 'him'],
                       ['stop', 'posting', 'stupid', 'worthless', 'garbage'],
                       ['mr', 'licks', 'ate', 'my', 'steak', 'how', 'to', 'stop', 'him'],
                       ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']]
        classVec = [0, 1, 0, 1, 0, 1]  # 1 = contains abusive language, 0 = does not
        return postingList, classVec
    
    
    # Build the list of all unique tokens appearing in any document
    def createVocabList(dataSet):
        vocabSet = set([])
        for document in dataSet:
            # take the union of the two sets
            vocabSet = vocabSet | set(document)
        return list(vocabSet)
    
    
    def setOfWords2Vec(vocabList, inputSet):
        # build a document vector whose elements are 1 or 0, indicating
        # whether each vocabulary word appears in the input document
        returnVec = [0] * len(vocabList)
        for word in inputSet:
            if word in vocabList:
                returnVec[vocabList.index(word)] = 1
            else:
                print("the word: %s is not in my Vocabulary!" % word)
        return returnVec
    
    
    def bagOfWords2VecMN(vocabList, inputSet):
        returnVec = [0] * len(vocabList)
        for word in inputSet:
            if word in vocabList:
                # += rather than = 1: count occurrences, not just presence
                returnVec[vocabList.index(word)] += 1
        return returnVec
    
    
    # Naive Bayes classifier training function
    def trainNB0(trainMatrix, trainCategory):
        # number of training documents
        numTrainDocs = len(trainMatrix)
        # length of each word vector
        numWords = len(trainMatrix[0])
        # fraction of all documents that belong to class 1
        pAbusive = sum(trainCategory) / float(numTrainDocs)
        # naive initialization, replaced below to avoid zero probabilities:
        # p0Num=zeros(numWords)
        # p1Num=zeros(numWords)
        # p0Denom=0.0
        # p1Denom=0.0
        p0Num = ones(numWords)
        p1Num = ones(numWords)
        p0Denom = 2.0
        p1Denom = 2.0
        for i in range(numTrainDocs):
            if trainCategory[i] == 1:
                # vector addition: per-token counts over all class-1 documents
                p1Num += trainMatrix[i]
                # total number of tokens seen in class-1 documents
                p1Denom += sum(trainMatrix[i])
            else:
                p0Num += trainMatrix[i]
                p0Denom += sum(trainMatrix[i])
        # compute p(wi|c1) and p(wi|c0) with NumPy array division;
        # take logs to avoid underflow (plain version kept for reference):
        # p1Vect = p1Num / p1Denom
        # p0Vect = p0Num / p0Denom
        p1Vect = log(p1Num / p1Denom)
        p0Vect = log(p0Num / p0Denom)
        return p0Vect, p1Vect, pAbusive
    
    
    def classifyNB(vec2Classify, p0Vec, p1Vec, pClass1):
        # the element-wise product picks out the log-probabilities of the
        # words present in the document; adding the log prior gives the
        # log posterior for each class (up to a shared constant)
        p1 = sum(vec2Classify * p1Vec) + log(pClass1)
        p0 = sum(vec2Classify * p0Vec) + log(1.0 - pClass1)
        if p1 > p0:
            return 1
        else:
            return 0
    
    
    def textParse(bigString):
        import re
        # split on runs of non-alphanumeric characters
        listOfTokens = re.split(r'\W+', bigString)
        # keep tokens longer than two characters, lowercased
        return [tok.lower() for tok in listOfTokens if len(tok) > 2]
    
    
    def spamTest():
        docList = []
        classList = []
        fullText = []
        # read 25 spam and 25 ham emails, tokenize, and label them
        for i in range(1, 26):
            wordList = textParse(open('email/spam/%d.txt' % i).read())
            docList.append(wordList)
            fullText.extend(wordList)
            classList.append(1)
            wordList = textParse(open('email/ham/%d.txt' % i).read())
            docList.append(wordList)
            fullText.extend(wordList)
            classList.append(0)
        vocabList = createVocabList(docList)
        # hold-out cross-validation: 10 of the 50 documents are chosen
        # at random as the test set; the remaining 40 are used to train
        trainingSet = list(range(50))
        testSet = []
        for i in range(10):
            randIndex = int(random.uniform(0, len(trainingSet)))
            testSet.append(trainingSet[randIndex])
            del trainingSet[randIndex]
        trainMat = []
        trainClasses = []
        for docIndex in trainingSet:
            trainMat.append(setOfWords2Vec(vocabList, docList[docIndex]))
            trainClasses.append(classList[docIndex])
        p0V, p1V, pSpam = trainNB0(array(trainMat), array(trainClasses))
        errorCount = 0
        for docIndex in testSet:
            wordVector = setOfWords2Vec(vocabList, docList[docIndex])
            if classifyNB(array(wordVector), p0V, p1V, pSpam) != classList[docIndex]:
                errorCount += 1
                print('classification error')
        print('the error rate is: ', float(errorCount) / len(testSet))
    
    
    # Example: using naive Bayes to reveal regional attitudes in personal ads
    # RSS feed classifier and high-frequency word removal
    def calcMostFreq(vocabList, fullText):
        freqDict = {}
        for token in vocabList:
            # count how many times each vocabulary word occurs
            freqDict[token] = fullText.count(token)
        # sort by count, from most to least frequent
        sortedFreq = sorted(freqDict.items(), key=operator.itemgetter(1), reverse=True)
        # return the 30 most frequent words
        return sortedFreq[:30]
    
    
    def localWords(feed1, feed0):
        docList = []
        classList = []
        fullText = []
        # use the length of the shorter of the two feeds
        minLen = min(len(feed1['entries']), len(feed0['entries']))
        for i in range(minLen):
            # one RSS entry at a time
            wordList = textParse(feed1['entries'][i]['summary'])
            docList.append(wordList)
            fullText.extend(wordList)
            classList.append(1)
            wordList = textParse(feed0['entries'][i]['summary'])
            docList.append(wordList)
            fullText.extend(wordList)
            classList.append(0)
        vocabList = createVocabList(docList)
        # find the 30 words that occur most often across both feeds
        top30Words = calcMostFreq(vocabList, fullText)
        for pairW in top30Words:
            if pairW[0] in vocabList:
                # remove those 30 high-frequency words from the vocabulary
                vocabList.remove(pairW[0])
        trainingSet = list(range(2 * minLen))
        testSet = []
        # hold out 20 entries at random as the test set
        for i in range(20):
            randIndex = int(random.uniform(0, len(trainingSet)))
            testSet.append(trainingSet[randIndex])
            del trainingSet[randIndex]
        # train on the remaining entries
        trainMat = []
        trainClasses = []
        for docIndex in trainingSet:
            trainMat.append(bagOfWords2VecMN(vocabList, docList[docIndex]))
            trainClasses.append(classList[docIndex])
        p0V, p1V, pSpam = trainNB0(array(trainMat), array(trainClasses))
        # classify the held-out entries and report the error rate
        errorCount = 0
        for docIndex in testSet:
            wordVector = bagOfWords2VecMN(vocabList, docList[docIndex])
            if classifyNB(array(wordVector), p0V, p1V, pSpam) != classList[docIndex]:
                errorCount += 1
        print('the error rate is: ', float(errorCount) / len(testSet))
        return vocabList, p0V, p1V
    
    
    def getTopWords(ny, sf):
        # show every word whose conditional log-probability exceeds -4.5
        # (i.e. p(w|c) > e**-4.5, about 0.011)
        vocabList, p0V, p1V = localWords(ny, sf)
        topNY = []
        topSF = []
        for i in range(len(p0V)):
            if p0V[i] > -4.5:
                topSF.append((vocabList[i], p0V[i]))
            if p1V[i] > -4.5:
                topNY.append((vocabList[i], p1V[i]))
        sortedSF = sorted(topSF, key=lambda pair: pair[1], reverse=True)
        print("SF**SF**SF**SF**SF**SF**SF**SF**SF**SF**SF")
        for item in sortedSF:
            print(item[0])
    
        sortedNY = sorted(topNY, key=lambda pair: pair[1], reverse=True)
        print("NY**NY**NY**NY**NY**NY**NY**NY**NY**NY**NY")
        for item in sortedNY:
            print(item[0])
    
    
    if __name__ == '__main__':
        listOPosts, listClasses = loadDataSet()
        myVocabList = createVocabList(listOPosts)
        print(myVocabList)
        print(listOPosts[0])
        print(setOfWords2Vec(myVocabList, listOPosts[0]))
        print(listOPosts[3])
        print(setOfWords2Vec(myVocabList, listOPosts[3]))
        trainMat = []
        for postinDoc in listOPosts:
            trainMat.append(setOfWords2Vec(myVocabList, postinDoc))
        p0V, p1V, pAb = trainNB0(trainMat, listClasses)
        print(p0V)
        print(p1V)
        print(pAb)
    
        testEntry = ['love', 'my', 'dalmation']
        thisDoc = array(setOfWords2Vec(myVocabList, testEntry))
        print(testEntry, 'classified as:', classifyNB(thisDoc, p0V, p1V, pAb))
    
        testEntry = ['stupid', 'garbage']
        thisDoc = array(setOfWords2Vec(myVocabList, testEntry))
        print(testEntry, 'classified as:', classifyNB(thisDoc, p0V, p1V, pAb))
    
        spamTest()
        spamTest()

        # note: these Craigslist RSS feeds may no longer be served;
        # any two region-tagged RSS feeds can be substituted here
        ny = feedparser.parse('http://newyork.craigslist.org/stp/index.rss')
        sf = feedparser.parse('http://sfbay.craigslist.org/stp/index.rss')
        getTopWords(ny, sf)

     

  • Original post: https://www.cnblogs.com/wangkaipeng/p/7891917.html