Preface
The text and images in this article are taken from the internet and are for learning and exchange purposes only, with no commercial use. Copyright remains with the original authors; if there is any problem, please contact us promptly so we can handle it.
The anti-scraping measures on the big sites have by now reached an almost absurd level: Dianping encrypts its characters, Weibo forces login verification, and so on. By comparison, news sites are noticeably less defended. So today we take Sina News as an example and look at how a Python crawler can fetch news articles matching a keyword.
First, if you search from the news channel directly, you will find that at most 20 pages of results are shown. We therefore search from the Sina homepage instead, where there is no page limit.
Analyzing the page structure
<div class="pagebox" id="_function_code_page"> <b><span class="pagebox_cur_page">1</span></b> <a href="javascript:;" onclick="getNewsData('https://interface.sina.cn/homepage/search.d.json?t=&q=%E6%97%85%E6%B8%B8&pf=0&ps=0&page=2')" title="第2页">2</a> <a href="javascript:;" onclick="getNewsData('https://interface.sina.cn/homepage/search.d.json?t=&q=%E6%97%85%E6%B8%B8&pf=0&ps=0&page=3')" title="第3页">3</a> <a href="javascript:;" onclick="getNewsData('https://interface.sina.cn/homepage/search.d.json?t=&q=%E6%97%85%E6%B8%B8&pf=0&ps=0&page=4')" title="第4页">4</a> <a href="javascript:;" onclick="getNewsData('https://interface.sina.cn/homepage/search.d.json?t=&q=%E6%97%85%E6%B8%B8&pf=0&ps=0&page=5')" title="第5页">5</a> <a href="javascript:;" onclick="getNewsData('https://interface.sina.cn/homepage/search.d.json?t=&q=%E6%97%85%E6%B8%B8&pf=0&ps=0&page=6')" title="第6页">6</a> <a href="javascript:;" onclick="getNewsData('https://interface.sina.cn/homepage/search.d.json?t=&q=%E6%97%85%E6%B8%B8&pf=0&ps=0&page=7')" title="第7页">7</a> <a href="javascript:;" onclick="getNewsData('https://interface.sina.cn/homepage/search.d.json?t=&q=%E6%97%85%E6%B8%B8&pf=0&ps=0&page=8')" title="第8页">8</a> <a href="javascript:;" onclick="getNewsData('https://interface.sina.cn/homepage/search.d.json?t=&q=%E6%97%85%E6%B8%B8&pf=0&ps=0&page=9')" title="第9页">9</a> <a href="javascript:;" onclick="getNewsData('https://interface.sina.cn/homepage/search.d.json?t=&q=%E6%97%85%E6%B8%B8&pf=0&ps=0&page=10')" title="第10页">10</a> <a href="javascript:;" onclick="getNewsData('https://interface.sina.cn/homepage/search.d.json?t=&q=%E6%97%85%E6%B8%B8&pf=0&ps=0&page=2');" title="下一页">下一页</a> </div>
After opening the Sina site and searching for a keyword, I noticed that no matter how I paged through the results, the URL never changed, yet the page content kept updating. Experience says this is done with Ajax, so I pulled down Sina's page source and had a look.
Clearly, each page turn is a click on an <a> tag that sends a request to a fixed address. If you paste that address straight into the browser's address bar and hit Enter:
then congratulations, you get an error.
Looking closely at the onclick in the HTML, it calls a function named getNewsData, so I searched the related js files for that function. It builds the request URL before each Ajax call, sends a GET request, and expects the response in jsonp format (for the cross-domain call).
So all we need to do is imitate its request format to get the data.
var loopnum = 0;
function getNewsData(url){
    var oldurl = url;
    if(!key){
        $("#result").html("<span>无搜索热词</span>");
        return false;
    }
    if(!url){
        url = 'https://interface.sina.cn/homepage/search.d.json?q='+encodeURIComponent(key);
    }
    var stime = getStartDay();
    var etime = getEndDay();
    url += '&stime='+stime+'&etime='+etime+'&sort=rel&highlight=1&num=10&ie=utf-8';
    //'&from=sina_index_hot_words&sort=time&highlight=1&num=10&ie=utf-8';
    $.ajax({
        type: 'GET',
        dataType: 'jsonp',
        cache: false,
        url: url,
        success: // the success callback is too long to reproduce here
    })
}
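The getStartDay() and getEndDay() helpers simply supply the date window that ends up in the stime and etime parameters. If you would rather compute that window in Python instead of hard-coding it (the code later in this article just hard-codes 2019-03-30 to 2020-03-31), a minimal sketch could look like this; the one-year span is my own assumption, not something taken from Sina's js:

from datetime import date, timedelta

# Rough Python equivalent of getStartDay()/getEndDay(): pick the end date and
# the start date that will become the etime/stime query parameters.
etime = date.today()
stime = etime - timedelta(days=365)  # assumed one-year window
print(stime.strftime("%Y-%m-%d"), etime.strftime("%Y-%m-%d"))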
Sending the request
import requests

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/74.0",
}
params = {
    "t": "",
    "q": "旅游",
    "pf": "0",
    "ps": "0",
    "page": "1",
    "stime": "2019-03-30",
    "etime": "2020-03-31",
    "sort": "rel",
    "highlight": "1",
    "num": "10",
    "ie": "utf-8"
}
response = requests.get("https://interface.sina.cn/homepage/search.d.json?", params=params, headers=headers)
print(response)
Here we use the requests library to build the same URL and send the request. What comes back, however, is a cold 403 Forbidden:
So back to the website to see where exactly things went wrong.
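The print(response) in the snippet above already shows <Response [403]>; if you want to inspect the status code and whatever body the server returned, something like this works:

print(response.status_code)   # 403 when the interface rejects the request
print(response.text[:200])    # first part of whatever body came back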
In the developer tools, find the returned json file and look at its request headers: the request carries a Cookie. So when building our headers we simply copy the browser's request headers wholesale. Run it again and the response is 200! The rest is easy; we just have to parse the returned data and write it to Excel.
Complete code
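Concretely, the only change from the earlier snippet is the headers dict. A minimal sketch is below; the Cookie value is a placeholder that you would replace with the cookie string copied from your own browser session, and the other values also come straight from the developer tools:

import requests

# Headers copied from the browser's developer tools. The Cookie below is a
# placeholder; paste the value from your own session.
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/74.0",
    "Referer": "http://www.sina.com.cn/mid/search.shtml?range=all&c=news&q=%E6%97%85%E6%B8%B8&from=home&ie=utf-8",
    "Cookie": "<your cookie string here>",
}
params = {
    "t": "", "q": "旅游", "pf": "0", "ps": "0", "page": "1",
    "stime": "2019-03-30", "etime": "2020-03-31",
    "sort": "rel", "highlight": "1", "num": "10", "ie": "utf-8",
}
response = requests.get("https://interface.sina.cn/homepage/search.d.json",
                        params=params, headers=headers)
print(response.status_code)  # 200 once the headers are accepted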
import requests
import json
import xlwt

def getData(page, news):
    # Request one page of search results from the Sina search interface and
    # append the article entries to the running list `news`.
    headers = {
        "Host": "interface.sina.cn",
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/74.0",
        "Accept": "*/*",
        "Accept-Language": "zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2",
        "Accept-Encoding": "gzip, deflate, br",
        "Connection": "keep-alive",
        "Referer": r"http://www.sina.com.cn/mid/search.shtml?range=all&c=news&q=%E6%97%85%E6%B8%B8&from=home&ie=utf-8",
        "Cookie": "ustat=__172.16.93.31_1580710312_0.68442000; genTime=1580710312; vt=99; Apache=9855012519393.69.1585552043971; SINAGLOBAL=9855012519393.69.1585552043971; ULV=1585552043972:1:1:1:9855012519393.69.1585552043971:; historyRecord={'href':'https://news.sina.cn/','refer':'https://sina.cn/'}; SMART=0; dfz_loc=gd-default",
        "TE": "Trailers"
    }
    params = {
        "t": "",
        "q": "旅游",
        "pf": "0",
        "ps": "0",
        "page": page,
        "stime": "2019-03-30",
        "etime": "2020-03-31",
        "sort": "rel",
        "highlight": "1",
        "num": "10",
        "ie": "utf-8"
    }
    response = requests.get("https://interface.sina.cn/homepage/search.d.json?", params=params, headers=headers)
    dic = json.loads(response.text)
    news += dic["result"]["list"]
    return news

def writeData(news):
    # Write the collected entries to an .xls file with xlwt.
    workbook = xlwt.Workbook(encoding='utf-8')
    worksheet = workbook.add_sheet('MySheet')
    worksheet.write(0, 0, "标题")   # column headers: title, time, media, URL
    worksheet.write(0, 1, "时间")
    worksheet.write(0, 2, "媒体")
    worksheet.write(0, 3, "网址")
    for i in range(len(news)):
        print(news[i])
        worksheet.write(i+1, 0, news[i]["origin_title"])
        worksheet.write(i+1, 1, news[i]["datetime"])
        worksheet.write(i+1, 2, news[i]["media"])
        worksheet.write(i+1, 3, news[i]["url"])
    workbook.save('data.xls')

def main():
    news = []
    for i in range(1, 501):   # pages 1..500
        news = getData(i, news)
    writeData(news)

if __name__ == '__main__':
    main()
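One last remark: main() fires 500 requests back to back. If you want to go a bit easier on the interface and survive the occasional bad response, you could wrap getData with a short delay and simple retry logic. This is only a sketch of one possible variation (it assumes the imports from the complete code above), not part of the original workflow:

import time

def getDataSafe(page, news, retries=3):
    # Illustrative wrapper around getData() above: retry a page a few times if
    # the request or the JSON parsing fails, and pause between pages.
    for attempt in range(retries):
        try:
            news = getData(page, news)
            break
        except (requests.RequestException, KeyError, json.JSONDecodeError):
            time.sleep(2 * (attempt + 1))  # back off a little before retrying
    time.sleep(1)  # short pause between pages to stay polite
    return news

In main() you would then call getDataSafe(i, news) instead of getData(i, news).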