Chinese Word Segmentation
- Download a full-length Chinese novel and convert it to UTF-8 encoding (see the re-encoding sketch after this list).
- Use the jieba library to count Chinese word frequencies and print the TOP 20 words with their counts.
- Exclude meaningless words and merge variants of the same word (a sketch for the merging step closes this section).
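Both snippets below assume test.txt is already UTF-8. If the downloaded novel is in GBK/GB2312, which is common for Chinese e-texts, a one-off re-encoding along these lines could be run first (the filenames and the GBK guess are assumptions, not part of the original):

```python
# Re-encode a GBK novel as UTF-8 once, before any counting.
# 'novel_gbk.txt' is an assumed source filename.
with open('novel_gbk.txt', 'r', encoding='gbk', errors='ignore') as src:
    text = src.read()
with open('test.txt', 'w', encoding='utf-8') as dst:
    dst.write(text)
```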
```python
import jieba

txt = open('test.txt', 'r', encoding='UTF-8').read()

# Strip punctuation and whitespace (\u3000 is the full-width space)
for i in '''.,- \u3000'()":“”,。‘’?''':
    txt = txt.replace(i, '')

words = list(jieba.cut(txt))

# Function words and pronouns that carry no analytical meaning
exc = {'了', '么', '吗', '他', '我', '你', '她', '是', '在', '也', '都', '的', '但', '很', '着'}

dic = {}
keys = set(words) - exc
for w in keys:
    dic[w] = words.count(w)

wc = list(dic.items())
wc.sort(key=lambda x: x[1], reverse=True)

# Print the top 20 words and their counts
for w in range(20):
    print(wc[w])
```
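As a side note, the tally above calls words.count once per distinct word, which rescans the whole list each time. A one-pass alternative (not part of the original) uses collections.Counter from the standard library:

```python
import jieba
from collections import Counter

txt = open('test.txt', 'r', encoding='UTF-8').read()
for i in '''.,- \u3000'()":“”,。‘’?''':
    txt = txt.replace(i, '')

exc = {'了', '么', '吗', '他', '我', '你', '她', '是', '在', '也', '都', '的', '但', '很', '着'}

# Counter tallies every token in a single pass over the generator
counts = Counter(w for w in jieba.cut(txt) if w not in exc)
for word, n in counts.most_common(20):
    print((word, n))
```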
Using a loop to remove single-character words instead of a fixed exclusion set:
```python
import jieba

txt = open('test.txt', 'r', encoding='UTF-8').read()

# Strip punctuation and whitespace (\u3000 is the full-width space)
for i in '''.,- \u3000'()":“”,。‘’?''':
    txt = txt.replace(i, '')

words = list(jieba.cut(txt))

# Count only words of two or more characters; single characters
# in Chinese are mostly particles and pronouns
dic = {}
for w in words:
    if len(w) == 1:
        continue
    dic[w] = dic.get(w, 0) + 1

wc = list(dic.items())
wc.sort(key=lambda x: x[1], reverse=True)

# Print the top 20 words and their counts
for w in range(20):
    print(wc[w])
```
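Neither version actually performs the "merge variants of the same word" step from the task list. A minimal sketch, assuming the novel calls one character by several names (the alias entries below are placeholders, not taken from the original), maps each variant to a canonical form before counting:

```python
import jieba
from collections import Counter

txt = open('test.txt', 'r', encoding='UTF-8').read()

# Hypothetical alias table: variant -> canonical name.
# Replace these placeholder entries with names from the actual novel.
aliases = {'孔明': '诸葛亮', '卧龙': '诸葛亮'}

# Drop single-character tokens, then fold aliases into one key
words = (aliases.get(w, w) for w in jieba.cut(txt) if len(w) > 1)
for word, n in Counter(words).most_common(20):
    print((word, n))
```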