Introduction to Middleware
1. What middleware does
Middleware hooks into specific steps of Scrapy's run cycle, letting you adapt the framework's behavior to your own project.
For example, Scrapy's built-in HttpErrorMiddleware filters out unsuccessful (non-2xx) HTTP responses so spiders do not have to deal with them.
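For instance, a spider can opt out of that filtering for specific status codes; handle_httpstatus_list is the standard attribute HttpErrorMiddleware consults. A minimal sketch (the spider name and URL are made-up placeholders):

import scrapy

class StatusAwareSpider(scrapy.Spider):
    name = 'status_aware'                              # hypothetical name
    start_urls = ['https://example.com/missing-page']  # hypothetical URL
    handle_httpstatus_list = [404]  # let 404 responses reach the callback

    def parse(self, response):
        if response.status == 404:
            self.logger.info('got a 404 for %s', response.url)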
2. How to use middleware
Enable and configure it in settings.py; see the Scrapy documentation at https://doc.scrapy.org for details.
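A minimal settings.py sketch, assuming a hypothetical project path; the number is the middleware's priority, and assigning None disables a middleware that Scrapy enables by default:

DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.MyDownloaderMiddleware': 543,  # hypothetical path
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}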
Types of Middleware
Scrapy in theory has three kinds of middleware (Scheduler Middleware, Spider Middleware, Downloader Middleware); in practice you generally work with the following two:
1. Spider Middleware
Its main job is to process objects flowing into and out of the spider while the crawl runs.
2. Downloader Middleware
Its main job is to process requests on their way to the downloader and responses as pages are downloaded.
Middleware Methods
1. Spider Middleware hooks the following methods:
- process_spider_input: receives a response object and processes it;
its position is Downloader-->process_spider_input-->Spiders (Downloader and Spiders are components in Scrapy's official architecture diagram)
- process_spider_exception: called when the spider raises an exception
- process_spider_output: called with the result the Spider returns after processing a response (see the sketch after this list)
- process_start_requests: called with the requests the spider starts with;
its position is Spiders-->process_start_requests-->Scrapy Engine (Scrapy Engine is a component in Scrapy's official architecture diagram)
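A minimal sketch of the output hook, assuming dict-like items and a hypothetical 'title' field: process_spider_output can filter what the spider yields before it reaches the engine.

from scrapy.http import Request

class DropUntitledMiddleware(object):
    # Hypothetical middleware: filters the spider's yielded results.
    def process_spider_output(self, response, result, spider):
        for obj in result:
            if isinstance(obj, Request):
                yield obj            # requests pass through untouched
            elif obj.get('title'):   # assumes dict-like items
                yield obj            # keep only items that have a title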
2. Downloader Middleware hooks the following methods:
- process_request: called as each request passes through the downloader middleware (see the sketch after this list)
- process_response: called as each download result passes back through the middleware
- process_exception: called when an exception occurs during the download
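A minimal sketch of process_request, with a made-up class name and user-agent strings: rewriting headers before the download happens is a typical use of this hook.

import random

class RandomUserAgentMiddleware(object):
    # Hypothetical middleware: picks a random User-Agent per request.
    USER_AGENTS = [
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
        'Mozilla/5.0 (X11; Linux x86_64)',
    ]

    def process_request(self, request, spider):
        request.headers['User-Agent'] = random.choice(self.USER_AGENTS)
        return None  # returning None lets the request continue downstream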
When writing middleware, think about which of these stages best fits the feature you want to implement, and write the corresponding method.
Middleware can be used to process requests, process results, or coordinate certain behavior via signals. It can also bolt project-specific features onto an existing spider; the same goal can be reached by writing an extension, and since extensions are actually more decoupled, they are the recommended approach.
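As a hedged sketch of that extension route (the class name and log message are made up; only Scrapy's public signals API is assumed), an extension can react to crawl events without touching spider or middleware code:

from scrapy import signals

class SpiderOpenLogger(object):
    # Hypothetical extension: connects a handler to the spider_opened signal.
    @classmethod
    def from_crawler(cls, crawler):
        ext = cls()
        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        return ext

    def spider_opened(self, spider):
        spider.logger.info('spider %s opened (seen by extension)', spider.name)

It would be enabled through an EXTENSIONS entry in settings.py, e.g. EXTENSIONS = {'xdb.ext.SpiderOpenLogger': 500} (the dotted path here is a project-specific assumption).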
Code Examples
Downloader middleware example
from scrapy.http import HtmlResponse
from scrapy.http import Request


class Md1(object):
    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.

        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        print('md1.process_request', request)

        # 1. Return a Response
        # import requests
        # result = requests.get(request.url)
        # return HtmlResponse(url=request.url, status=200, headers=None,
        #                     body=result.content)

        # 2. Return a Request
        # return Request('https://dig.chouti.com/r/tec/hot/1')

        # 3. Raise an exception
        # from scrapy.exceptions import IgnoreRequest
        # raise IgnoreRequest

        # 4. Modify the request in place (*)
        # request.headers['user-agent'] = "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.

        # Must either:
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        print('md1.process_response', request, response)
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.

        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass
Configuration
DOWNLOADER_MIDDLEWARES = {
    # 'xdb.middlewares.XdbDownloaderMiddleware': 543,
    # 'xdb.proxy.XdbProxyMiddleware': 751,
    'xdb.md.Md1': 666,
    'xdb.md.Md2': 667,
}
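The numbers are priorities: for DOWNLOADER_MIDDLEWARES, a lower number sits closer to the engine and a higher number closer to the downloader. With the configuration above, process_request runs on Md1 (666) before Md2 (667), and process_response runs in the reverse order on the way back.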
Spider middleware example
Define the class
class Sd1(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.

        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.

        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.

        # Should return either None or an iterable of Response, dict
        # or Item objects.
        pass

    # Runs only once, when the spider starts.
    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.

        # Must return only requests (not items).
        for r in start_requests:
            yield r
Configuration
SPIDER_MIDDLEWARES = {
    # 'xdb.middlewares.XdbSpiderMiddleware': 543,
    'xdb.sd.Sd1': 666,
    'xdb.sd.Sd2': 667,
}
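Ordering follows the same rule as for downloader middleware: lower numbers sit closer to the engine, so process_spider_input runs in ascending priority order and process_spider_output in descending order.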