Part 1

Use the get() function of the requests library to access the 360 Search homepage 20 times, print the returned status code and the text content, and compute the lengths of the page content returned by the text attribute and the content attribute.

Crawling the 360 Search homepage:

The requests library's get() function is used to access https://www.so.com/ 20 times. The code is:
import requests

wan = "https://www.so.com/"

def pac(wan, i):
    # i is passed in explicitly instead of being read as a global from the loop below
    print("Visit", i + 1)
    r = requests.get(wan, timeout=30)
    r.raise_for_status()
    print("text encoding:", r.encoding)
    print("HTTP status code:", r.status_code)
    print("text attribute:", r.text)
    print("content attribute:", r.content)
    return r.text

for i in range(20):
    print(pac(wan, i))
Because the full output is very long, the code is changed to print only the lengths of the text and content attributes, and only the result of the last visit is shown. The changed lines:

print("Length of text attribute:", len(r.text))
print("Length of content attribute:", len(r.content))
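The two lengths generally differ because r.content is the raw response body as bytes, while r.text is that same body decoded to a str using r.encoding; multi-byte UTF-8 characters make the byte count larger than the character count. A minimal sketch of the idea (the sample string below is my own illustration, not output from the site):

# Why len(r.text) and len(r.content) differ: str counts characters, bytes counts bytes
s = "360搜索"              # str of 5 characters: three ASCII digits + two Chinese characters
b = s.encode("utf-8")      # each Chinese character takes 3 bytes in UTF-8
print(len(s))              # 5 -- analogous to len(r.text)
print(len(b))              # 9 -- analogous to len(r.content)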
Part 2
Below is a simple HTML page. Keep it as a string and complete the following tasks:
a. Print the content of the head tag and the last two digits of my student ID.
b. Get the content of the body tag.
c. Get the tag object whose id is "first".
d. Extract and print the Chinese characters in the HTML page.
The HTML is:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>菜鸟教程(runoob.com)</title>
</head>
<body>
<h1>我的第一个标题学号25</h1>
<p id="first">我的第一个段落。</p>
<table border="1">
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>
</body>
</html>
Rendered result in the runoob.com online editor (screenshot omitted).
The code for the tasks:
from bs4 import BeautifulSoup
import re

soup = BeautifulSoup('''<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>菜鸟教程(runoob.com)</title>
</head>
<body>
<h1>我的第一个标题学号25</h1>
<p id="first">我的第一个段落。</p>
<table border="1">
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>
</body>
</html>''', "html.parser")

print("head tag and my student ID")
print(soup.head, "my student ID: 07")
print("body tag content:", soup.body)
print("tag object with id='first':", soup.find_all(id="first"))
st = soup.text
# Match runs of Chinese characters (CJK Unified Ideographs range)
pp = re.findall(u'[\u4e00-\u9fa5]+', st)
print("Chinese characters in the HTML page:")
print(pp)
Output (screenshot omitted).
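One detail worth noting: find_all(id="first") returns a list of matching tags, so for a unique id, find(id="first") is more direct. A minimal sketch of the difference, reusing the soup object from the code above:

# find_all() returns a ResultSet (list-like), even when the id is unique
print(soup.find_all(id="first"))   # [<p id="first">我的第一个段落。</p>]
# find() returns the first matching Tag itself, or None if nothing matches
print(soup.find(id="first"))       # <p id="first">我的第一个段落。</p>
print(soup.find(id="first").text)  # 我的第一个段落。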
Part 3: Scraping the China university rankings site (http://www.zuihaodaxue.cn/zuihaodaxuepaiming2018.html)

Save the scraped data to a CSV file.

Code:
import csv
import os
import requests
from bs4 import BeautifulSoup

allUniv = []

def getHTMLText(url):
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        r.encoding = 'utf-8'
        return r.text
    except requests.RequestException:
        return ""

def fillUnivList(soup):
    # Each ranked university is one <tr>; header rows have no <td> and are skipped
    data = soup.find_all('tr')
    for tr in data:
        ltd = tr.find_all('td')
        if len(ltd) == 0:
            continue
        singleUniv = []
        for td in ltd:
            singleUniv.append(td.string)
        allUniv.append(singleUniv)

def writercsv(save_road, num, title):
    # Append if the file already exists; otherwise create it and write the header row first
    if os.path.isfile(save_road):
        with open(save_road, 'a', newline='') as f:
            csv_write = csv.writer(f, dialect='excel')
            for i in range(num):
                csv_write.writerow(allUniv[i])
    else:
        with open(save_road, 'w', newline='') as f:
            csv_write = csv.writer(f, dialect='excel')
            csv_write.writerow(title)
            for i in range(num):
                csv_write.writerow(allUniv[i])

title = ["排名", "学校名称", "省市", "总分", "生源质量", "培养结果", "科研规模",
         "科研质量", "顶尖成果", "顶尖人才", "科技服务", "产学研究合作", "成果转化", "学生国际化"]
save_road = r"D:\pm.csv"   # raw string so the backslash is not treated as an escape

def main():
    url = 'http://www.zuihaodaxue.com/zuihaodaxuepaiming2018.html'
    html = getHTMLText(url)
    soup = BeautifulSoup(html, "html.parser")
    fillUnivList(soup)
    writercsv(save_road, 20, title)

main()
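To confirm the file was written correctly, the CSV can be read back and its first few rows printed. A minimal verification sketch, assuming the script above has already produced D:\pm.csv:

import csv

# Read back the generated file (platform default encoding, matching how it was written)
# and show the header plus the first three data rows
with open(r"D:\pm.csv", newline='') as f:
    for row_number, row in enumerate(csv.reader(f)):
        print(row)
        if row_number >= 3:
            break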