Two posts back, we crawled the news articles in each category of the rumor-encyclopedia site and saved them as text files.
In the last post we ran Chinese word segmentation on those saved files; now we want to compute word-frequency statistics over the segmented text.
Here is the code:
# -*- coding: utf-8 -*-
"""
Created on Thu Mar  8 14:21:05 2018
@author: Administrator
"""
# silei
# Build the word-frequency list and inspect the results
# 1209 data files in total:
# baby, car, food, health, legend, life, love, news, science, sexual
# 130, 130, 130, 130, 130, 130, 130, 130, 130, 39

# Category name -> number of text files in that category
# (renamed from `dir`, which shadows the built-in dir())
category_counts = {'baby': 130, 'car': 130, 'food': 130, 'health': 130,
                   'legend': 130, 'life': 130, 'love': 130, 'news': 130,
                   'science': 130, 'sexual': 39}

def MakeAllWordsList(train_datasseg):
    # Count how often each word occurs across all segmented documents
    all_words = {}
    for train_dataseg in train_datasseg:
        for word in train_dataseg:
            if word in all_words:
                all_words[word] += 1
            else:
                all_words[word] = 1
    # print("all_words length in all the train datas: ", len(all_words.keys()))  # number of distinct words
    # sorted() takes any iterable and returns a list; the key function
    # sorts by frequency in descending order
    all_words_reverse = sorted(all_words.items(), key=lambda f: f[1], reverse=True)
    for word, count in all_words_reverse:
        print(word, " ", count)
    # Keep only words longer than one character
    all_words_list = [word for word, count in all_words_reverse if len(word) > 1]
    return all_words_list

if __name__ == "__main__":
    for world_data_name, world_data_number in category_counts.items():
        train_datasseg = []
        # A for-loop over range() replaces the original while loop, which never
        # incremented data_file_number and never reset it between categories
        for data_file_number in range(world_data_number):
            # Forward slashes avoid the original 'F:\test\' bug, where the
            # trailing \' escaped the closing quote and broke the string literal
            path = 'F:/test/' + world_data_name + '/' + str(data_file_number) + '.txt'
            with open(path, 'r', encoding='UTF-8') as file:
                # The segmented files hold space-separated words, so split the
                # whole file into a word list; the original passed the raw file
                # object in, which made the counter tally single characters
                train_datasseg.append(file.read().split())
        MakeAllWordsList(train_datasseg)
Run it, and the word-frequency count is done~
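As a side note, the counting loop above can be written more compactly with the standard library's `collections.Counter`, which does the same tallying and frequency-descending sort. This is a minimal sketch with made-up sample input, not part of the original script:

```python
from collections import Counter

def count_words(segmented_docs):
    """segmented_docs: a list of word lists, one per document."""
    counter = Counter()
    for words in segmented_docs:
        counter.update(words)  # add this document's word counts
    # most_common() returns (word, count) pairs sorted by frequency,
    # descending; keep only words longer than one character, as above
    return [w for w, c in counter.most_common() if len(w) > 1]

# Tiny made-up example of segmented documents
docs = [["谣言", "传播", "谣言"], ["传播", "速度"]]
print(count_words(docs))  # ['谣言', '传播', '速度']
```

`Counter.update()` accepts any iterable of words, so you could also feed it `file.read().split()` per file directly.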