1. English word frequency analysis
Download the lyrics of an English song, or an English article.
f = open("F:/song.txt", "r")   # use a forward slash (or a raw string) so "\s" is not read as an escape
str1 = f.read()
f.close()
Replace all separator characters such as , . ? ! ' : with spaces.
c="',./'" for w in c: str1.replace(w,' ')
Convert all uppercase letters to lowercase.
Generate the word list.
wordList = str1.lower().split()
Generate the word frequency counts.
wordDict = {}
wordSet = set(wordList)   # unique words
for w in wordSet:
    wordDict[w] = wordList.count(w)
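The count() loop above rescans the whole word list once per unique word. For longer texts, collections.Counter from the standard library builds the same word-to-frequency mapping in a single pass (an alternative sketch, not the original approach):

from collections import Counter

# Counter produces the same mapping as the loop above, in one pass.
wordDict = Counter(wordList)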
Sort by frequency.
dictList = list(wordDict.items())
dictList.sort(key=lambda x: x[1], reverse=True)
Exclude function words: pronouns, articles, conjunctions.
pron = {'for', 'the', 'of', 'to', 'that'}
# Subtracting pron from wordSet at this point would not change dictList,
# which was already built and sorted, so filter the sorted list instead.
dictList = [item for item in dictList if item[0] not in pron]
Output the TOP 20 most frequent words.
Save the text to be analyzed as a UTF-8 encoded file, and obtain the content for the frequency analysis by reading from that file.
f=open("F:\song1.txt",'w') for i in range(20): f.write(dictList[i][0]+" "+str(dictList[i][1] )+'\n') f.close()
2. Chinese word frequency analysis
Download a long Chinese article.
Read the text to be analyzed from the file.
f = open('gzccnews.txt', 'r', encoding='utf-8')
news = f.read()   # read the text, not just the file object, so jieba can segment it
f.close()
Install and use jieba for Chinese word segmentation.
pip install jieba
import jieba
words = jieba.lcut(news)   # lcut already returns a list of tokens
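As a quick sanity check, jieba's default (precise) mode on the sample sentence from jieba's own documentation:

print(jieba.lcut("我来到北京清华大学"))
# expected output per jieba's documented example: ['我', '来到', '北京', '清华大学']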
Generate the word frequency counts.
Sort by frequency.
Exclude function words: pronouns, articles, conjunctions.
Output the TOP 20 most frequent words (or save the results to a file).
import jieba

f = open('s.txt', 'r', encoding="UTF-8")
str1 = f.read()
f.close()

str2 = list(jieba.cut(str1))

# Punctuation, whitespace and the BOM character to discard.
delset = {",", "。", ":", "“", "”", "?", " ", ";", "!", "、", "\ufeff", "\n"}
stringset = set(str2) - delset

countdict = {}
for i in stringset:
    countdict[i] = str2.count(i)

dictList = list(countdict.items())
dictList.sort(key=lambda x: x[1], reverse=True)

f = open("E:/结果.txt", "a", encoding="UTF-8")   # specify the encoding so the Chinese output writes correctly
for i in range(20):
    f.write('\n' + dictList[i][0] + " " + str(dictList[i][1]))
f.close()
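The same pipeline compresses nicely with collections.Counter, which also avoids rescanning the token list for every unique word (a sketch under the same file name and stop set as above):

import jieba
from collections import Counter

with open('s.txt', 'r', encoding='UTF-8') as f:
    tokens = jieba.lcut(f.read())

delset = {",", "。", ":", "“", "”", "?", " ", ";", "!", "、", "\ufeff", "\n"}
# Count every token that is not punctuation/whitespace, then take the top 20.
counts = Counter(t for t in tokens if t not in delset)
for word, freq in counts.most_common(20):
    print(word, freq)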