  • Scrapy

    Scrapy is an application framework written for crawling websites and extracting structured data. It can be used in a wide range of programs for data mining, information processing, or archiving historical data.
    It was originally designed for page scraping (more precisely, web scraping), but it can also be used to fetch data returned by APIs (such as Amazon Associates Web Services) or as a general-purpose web crawler. Scrapy is broadly useful for data mining, monitoring, and automated testing.

    Scrapy uses the Twisted asynchronous networking library to handle network communication. The overall architecture is outlined below.

    Scrapy consists mainly of the following components:

    • Engine (Scrapy)
      Handles the data flow across the whole system and triggers events (the core of the framework)
    • Scheduler
      Accepts requests from the engine, pushes them onto a queue, and hands them back when the engine asks again. Think of it as a priority queue of URLs (the addresses to crawl); it decides which URL to fetch next and removes duplicate URLs
    • Downloader
      Downloads page content and returns it to the spiders (the downloader is built on Twisted, an efficient asynchronous model)
    • Spiders
      Spiders do the actual work: they extract the data you need from specific pages, i.e. the so-called items (Item). They can also extract links so that Scrapy keeps crawling the next page
    • Item Pipeline
      Processes the items extracted by the spiders; its main jobs are persisting items, validating them, and dropping unwanted data. After a page has been parsed by a spider, its items are sent to the item pipeline and processed by several components in a fixed order.
    • Downloader Middlewares
      Hooks between the Scrapy engine and the downloader; they mainly process the requests and responses exchanged between the two.
    • Spider Middlewares
      Hooks between the Scrapy engine and the spiders; they mainly process the spiders' response input and request output.
    • Scheduler Middlewares
      Middleware between the Scrapy engine and the scheduler; it processes the requests and responses sent from the engine to the scheduler.

    The Scrapy run flow is roughly as follows (a minimal sketch follows the list):

    1. The engine takes a URL from the scheduler for the next crawl
    2. The engine wraps the URL in a Request and passes it to the downloader
    3. The downloader fetches the resource and wraps it in a Response
    4. The spider parses the Response
    5. Any items parsed out are handed to the item pipeline for further processing
    6. Any URLs parsed out are handed back to the scheduler, waiting to be crawled
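    The flow maps onto a single spider quite directly. Below is a minimal sketch; the spider name, the quotes.toscrape.com URL, and the XPath expressions are illustrative assumptions, not part of the examples that follow.

    import scrapy
    from scrapy.http import Request

    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        start_urls = ["http://quotes.toscrape.com/"]   # seeded into the scheduler (step 1)

        def parse(self, response):                     # step 4: parse the Response
            for quote in response.xpath('//div[@class="quote"]'):
                # step 5: yielded dicts/items are handed to the item pipeline
                yield {"text": quote.xpath('./span[@class="text"]/text()').extract_first()}
            next_page = response.xpath('//li[@class="next"]/a/@href').extract_first()
            if next_page:
                # step 6: new Requests go back to the scheduler
                yield Request(response.urljoin(next_page), callback=self.parse)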

    I. Installation

    Linux
          pip install scrapy


    Windows
          a. pip install wheel
          b. Download a Twisted wheel from http://www.lfd.uci.edu/~gohlke/pythonlibs/#twisted
          c. cd into the download directory and run: pip install Twisted-17.1.0-cp35-cp35m-win_amd64.whl (pick the wheel matching your Python version)
          d. pip install scrapy
          e. Download and install pywin32: https://sourceforge.net/projects/pywin32/files/
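    To confirm the installation, either run scrapy version in a shell or check from Python (the version shown is just an example):

    import scrapy
    print(scrapy.__version__)   # e.g. 1.4.0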

    II. Basic usage

    1. Basic commands

    1. scrapy startproject <project_name>
       - Creates a project skeleton in the current directory (similar to Django)

    2. scrapy genspider [-t template] <name> <domain>
       - Creates a spider
       e.g.:
          scrapy genspider -t basic oldboy oldboy.com
          scrapy genspider -t xmlfeed autohome autohome.com.cn
       PS:
          List the available templates: scrapy genspider -l
          Show a template:             scrapy genspider -d <template_name>

    3. scrapy list
       - Lists the spiders in the current project

    4. scrapy crawl <spider_name>
       - Runs a single spider
     

    2. Project layout and spider basics

    project_name/
       scrapy.cfg
       project_name/
           __init__.py
           items.py
           pipelines.py
           settings.py
           spiders/
               __init__.py
            spider1.py
            spider2.py
            spider3.py

    File overview:

    • scrapy.cfg    The project's main configuration file (the real crawler settings live in settings.py)
    • items.py      Data templates for structured data, similar to Django's Model
    • pipelines.py  Item-processing behaviour, e.g. persisting the structured data
    • settings.py   Configuration, e.g. recursion depth, concurrency, download delay
    • spiders/      Spider directory: create files here and write the crawl rules
    import scrapy

    class XiaoHuarSpider(scrapy.spiders.Spider):
        name = "xiaohuar"                   # spider name *****
        allowed_domains = ["xiaohuar.com"]  # allowed domains
        start_urls = [
            "http://www.xiaohuar.com/hua/",  # start URLs
        ]

        def parse(self, response):
            # callback invoked once the start URLs have been fetched
            pass

    spiders/spider1.py
    # Windows consoles may choke on some characters; rewrap stdout with a GBK-compatible encoding
    import io
    import sys
    sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='gb18030')

    import scrapy
    from scrapy.selector import HtmlXPathSelector
    from scrapy.http.request import Request


    class DigSpider(scrapy.Spider):
        # Spider name, used to start the crawl command
        name = "dig"

        # Allowed domains
        allowed_domains = ["chouti.com"]

        # Start URLs
        start_urls = [
            'http://dig.chouti.com/',
        ]

        has_request_set = {}

        def parse(self, response):
            print(response.url)

            hxs = HtmlXPathSelector(response)
            page_list = hxs.select('//div[@id="dig_lcpage"]//a[re:test(@href, "/all/hot/recent/\d+")]/@href').extract()
            for page in page_list:
                page_url = 'http://dig.chouti.com%s' % page
                key = self.md5(page_url)
                if key in self.has_request_set:
                    pass
                else:
                    self.has_request_set[key] = page_url
                    obj = Request(url=page_url, method='GET', callback=self.parse)
                    yield obj

        @staticmethod
        def md5(val):
            import hashlib
            ha = hashlib.md5()
            ha.update(bytes(val, encoding='utf-8'))
            key = ha.hexdigest()
            return key

    To run this spider, open a terminal in the project directory and execute:

    scrapy crawl dig --nolog

    The important points in the code above are:

    • Request is a class that wraps a user request; yielding it from a callback tells Scrapy to keep crawling that URL (a small sketch of its parameters follows)
    • HtmlXPathSelector structures the HTML and provides selector functionality
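    A small sketch of the Request parameters used throughout these examples; the URL and values are placeholders, not from the original code:

    from scrapy.http import Request

    req = Request(
        url='http://dig.chouti.com/all/hot/recent/2',
        method='GET',
        callback=None,         # normally a spider method such as self.parse
        dont_filter=False,     # True bypasses the duplicate filter
    )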
    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
    from scrapy.selector import Selector, HtmlXPathSelector
    from scrapy.http import HtmlResponse
    html = """<!DOCTYPE html>
    <html>
        <head lang="en">
            <meta charset="UTF-8">
            <title></title>
        </head>
        <body>
            <ul>
                <li class="item-"><a id='i1' href="link.html">first item</a></li>
                <li class="item-0"><a id='i2' href="llink.html">first item</a></li>
                <li class="item-1"><a href="llink2.html">second item<span>vv</span></a></li>
            </ul>
            <div><a href="llink2.html">second item</a></div>
        </body>
    </html>
    """
    response = HtmlResponse(url='http://example.com', body=html, encoding='utf-8')
    # hxs = HtmlXPathSelector(response)
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a')
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a[2]')
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a[@id]')
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a[@id="i1"]')
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a[@href="link.html"][@id="i1"]')
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a[contains(@href, "link")]')
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a[starts-with(@href, "link")]')
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]')
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/text()').extract()
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/@href').extract()
    # print(hxs)
    # hxs = Selector(response=response).xpath('/html/body/ul/li/a/@href').extract()
    # print(hxs)
    # hxs = Selector(response=response).xpath('//body/ul/li/a/@href').extract_first()
    # print(hxs)

    # ul_list = Selector(response=response).xpath('//body/ul/li')
    # for item in ul_list:
    #     v = item.xpath('./a/span')
    #     # or
    #     # v = item.xpath('a/span')
    #     # or
    #     # v = item.xpath('*/a/span')
    #     print(v)
    Selector examples
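    The same selectors also work on raw text, without building an HtmlResponse first; a small sketch:

    from scrapy.selector import Selector

    html = '<ul><li><a id="i1" href="link.html">first item</a></li></ul>'
    sel = Selector(text=html)
    print(sel.xpath('//a[@id="i1"]/@href').extract_first())   # link.html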
    # -*- coding: utf-8 -*-
    import scrapy
    from scrapy.selector import HtmlXPathSelector
    from scrapy.http.request import Request
    from scrapy.http.cookies import CookieJar
    from scrapy import FormRequest


    class ChouTiSpider(scrapy.Spider):
        # Spider name, used to start the crawl command
        name = "chouti"
        # Allowed domains
        allowed_domains = ["chouti.com"]

        cookie_dict = {}
        has_request_set = {}

        def start_requests(self):
            url = 'http://dig.chouti.com/'
            # return [Request(url=url, callback=self.login)]
            yield Request(url=url, callback=self.login)

        def login(self, response):
            cookie_jar = CookieJar()
            cookie_jar.extract_cookies(response, response.request)
            for k, v in cookie_jar._cookies.items():
                for i, j in v.items():
                    for m, n in j.items():
                        self.cookie_dict[m] = n.value

            req = Request(
                url='http://dig.chouti.com/login',
                method='POST',
                headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'},
                body='phone=8615131255089&password=pppppppp&oneMonth=1',
                cookies=self.cookie_dict,
                callback=self.check_login
            )
            yield req

        def check_login(self, response):
            req = Request(
                url='http://dig.chouti.com/',
                method='GET',
                callback=self.show,
                cookies=self.cookie_dict,
                dont_filter=True
            )
            yield req

        def show(self, response):
            # print(response)
            hxs = HtmlXPathSelector(response)
            news_list = hxs.select('//div[@id="content-list"]/div[@class="item"]')
            for new in news_list:
                # temp = new.xpath('div/div[@class="part2"]/@share-linkid').extract()
                link_id = new.xpath('*/div[@class="part2"]/@share-linkid').extract_first()
                yield Request(
                    url='http://dig.chouti.com/link/vote?linksId=%s' % (link_id,),
                    method='POST',
                    cookies=self.cookie_dict,
                    callback=self.do_favor
                )

            page_list = hxs.select('//div[@id="dig_lcpage"]//a[re:test(@href, "/all/hot/recent/\d+")]/@href').extract()
            for page in page_list:

                page_url = 'http://dig.chouti.com%s' % page
                import hashlib
                hash = hashlib.md5()
                hash.update(bytes(page_url, encoding='utf-8'))
                key = hash.hexdigest()
                if key in self.has_request_set:
                    pass
                else:
                    self.has_request_set[key] = page_url
                    yield Request(
                        url=page_url,
                        method='GET',
                        callback=self.show
                    )

        def do_favor(self, response):
            print(response.text)

    Example: log in to Chouti automatically and upvote posts
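    FormRequest is imported above but never used; for form-based logins it can replace the hand-built POST request. A hedged sketch of the login callback, keeping the field names and URL from the example above:

    from scrapy import FormRequest

    def login(self, response):
        # Inside the spider class: let Scrapy encode the form body for us
        yield FormRequest(
            url='http://dig.chouti.com/login',
            formdata={'phone': '8615131255089', 'password': 'pppppppp', 'oneMonth': '1'},
            callback=self.check_login,
        )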

     Structured (item) processing

    Rather than persisting data directly inside parse(), define Items and let pipelines handle validation and persistence:

    import scrapy
    from scrapy.selector import HtmlXPathSelector
    from scrapy.http.request import Request
    from scrapy.http.cookies import CookieJar
    from scrapy import FormRequest


    class XiaoHuarSpider(scrapy.Spider):
        # Spider name, used to start the crawl command
        name = "xiaohuar"
        # Allowed domains
        allowed_domains = ["xiaohuar.com"]

        start_urls = [
            "http://www.xiaohuar.com/list-1-1.html",
        ]
        # custom_settings = {
        #     'ITEM_PIPELINES':{
        #         'spider1.pipelines.JsonPipeline': 100
        #     }
        # }
        has_request_set = {}

        def parse(self, response):
            # Analyse the page:
            # find the content that matches our rules (the pictures) and save it,
            # then follow the matching links and keep going, level by level

            hxs = HtmlXPathSelector(response)

            items = hxs.select('//div[@class="item_list infinite_scroll"]/div')
            for item in items:
                src = item.select('.//div[@class="img"]/a/img/@src').extract_first()
                name = item.select('.//div[@class="img"]/span/text()').extract_first()
                school = item.select('.//div[@class="img"]/div[@class="btns"]/a/text()').extract_first()
                url = "http://www.xiaohuar.com%s" % src
                from ..items import XiaoHuarItem
                obj = XiaoHuarItem(name=name, school=school, url=url)
                yield obj

            urls = hxs.select('//a[re:test(@href, "http://www.xiaohuar.com/list-1-\d+.html")]/@href').extract()
            for url in urls:
                key = self.md5(url)
                if key in self.has_request_set:
                    pass
                else:
                    self.has_request_set[key] = url
                    req = Request(url=url, method='GET', callback=self.parse)
                    yield req

        @staticmethod
        def md5(val):
            import hashlib
            ha = hashlib.md5()
            ha.update(bytes(val, encoding='utf-8'))
            key = ha.hexdigest()
            return key

    spiders/xiaohuar.py
    import scrapy


    class XiaoHuarItem(scrapy.Item):
        name = scrapy.Field()
        school = scrapy.Field()
        url = scrapy.Field()
    items.py
    import json
    import os
    import requests


    class JsonPipeline(object):
        def __init__(self):
            self.file = open('xiaohua.txt', 'w')

        def process_item(self, item, spider):
            v = json.dumps(dict(item), ensure_ascii=False)
            self.file.write(v)
            self.file.write('\n')
            self.file.flush()
            return item


    class FilePipeline(object):
        def __init__(self):
            if not os.path.exists('imgs'):
                os.makedirs('imgs')

        def process_item(self, item, spider):
            response = requests.get(item['url'], stream=True)
            file_name = '%s_%s.jpg' % (item['name'], item['school'])
            with open(os.path.join('imgs', file_name), mode='wb') as f:
                f.write(response.content)
            return item
    pipelines.py
    ITEM_PIPELINES = {
       'spider1.pipelines.JsonPipeline': 100,
       'spider1.pipelines.FilePipeline': 300,
    }
    # The integer attached to each entry determines the order in which the pipelines run: items pass through them from the lowest number to the highest. By convention these numbers are kept in the 0-1000 range.
    

      Pipelines can do more than simple persistence, as shown below:

    from scrapy.exceptions import DropItem
    
    class CustomPipeline(object):
        def __init__(self,v):
            self.value = v
    
        def process_item(self, item, spider):
            # process and persist the item

            # returning the item lets later pipelines keep processing it
            return item

            # raising DropItem discards the item so that later pipelines never see it
            # raise DropItem()


        @classmethod
        def from_crawler(cls, crawler):
            """
            Called once at start-up to create the pipeline object
            :param crawler:
            :return:
            """
            val = crawler.settings.getint('MMMM')
            return cls(val)

        def open_spider(self, spider):
            """
            Called when the spider starts running
            :param spider:
            :return:
            """
            print('000000')

        def close_spider(self, spider):
            """
            Called when the spider is closed
            :param spider:
            :return:
            """
            print('111111')

    Custom pipeline
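    To activate the pipeline above, register it in settings.py. A minimal sketch; the module path spider1.pipelines and the value of MMMM are assumptions:

    ITEM_PIPELINES = {
        'spider1.pipelines.CustomPipeline': 200,
    }
    MMMM = 10   # read by from_crawler via crawler.settings.getint('MMMM')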
    

      

    Middleware

    class SpiderMiddleware(object):

        def process_spider_input(self, response, spider):
            """
            Called for each response after download, before it is handed to parse
            :param response:
            :param spider:
            :return:
            """
            pass

        def process_spider_output(self, response, result, spider):
            """
            Called with the results the spider returns after processing a response
            :param response:
            :param result:
            :param spider:
            :return: must return an iterable containing Request or Item objects
            """
            return result

        def process_spider_exception(self, response, exception, spider):
            """
            Called on exceptions
            :param response:
            :param exception:
            :param spider:
            :return: None to let later middleware keep handling the exception; or an iterable of Response or Item objects, handed to the scheduler or the pipeline
            """
            return None


        def process_start_requests(self, start_requests, spider):
            """
            Called when the spider starts
            :param start_requests:
            :param spider:
            :return: an iterable containing Request objects
            """
            return start_requests
    Spider middleware
    class DownMiddleware1(object):
        def process_request(self, request, spider):
            """
            Called for every request that needs downloading, through each downloader middleware
            :param request:
            :param spider:
            :return:
                None: continue with the remaining middleware and download as usual
                Response object: stop calling process_request and start calling process_response
                Request object: stop the middleware chain and hand the Request back to the scheduler
                raise IgnoreRequest: stop calling process_request and start calling process_exception
            """
            pass



        def process_response(self, request, response, spider):
            """
            Called with the downloaded response on its way back
            :param response:
            :param result:
            :param spider:
            :return:
                Response object: passed on to the other middlewares' process_response
                Request object: stop the middleware chain; the request is rescheduled for download
                raise IgnoreRequest: Request.errback is called
            """
            print('response1')
            return response

        def process_exception(self, request, exception, spider):
            """
            Called when the download handler or a downloader middleware's process_request() raises an exception
            :param response:
            :param exception:
            :param spider:
            :return:
                None: let later middleware keep handling the exception
                Response object: stop later process_exception methods
                Request object: stop the middleware chain; the request will be rescheduled for download
            """
            return None
    Downloader middleware
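    To enable these middlewares, register them in settings.py with an order number. A minimal sketch, assuming they live in step8_king/middlewares.py (the project name used in the annotated settings below):

    SPIDER_MIDDLEWARES = {
        'step8_king.middlewares.SpiderMiddleware': 543,
    }
    DOWNLOADER_MIDDLEWARES = {
        'step8_king.middlewares.DownMiddleware1': 100,
    }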

    Custom commands

    • Create a directory (any name, e.g. commands) at the same level as spiders
    • Create a crawlall.py file inside it (the file name becomes the command name)
    from scrapy.commands import ScrapyCommand
    from scrapy.utils.project import get_project_settings


    class Command(ScrapyCommand):

        requires_project = True

        def syntax(self):
            return '[options]'

        def short_desc(self):
            return 'Runs all of the spiders'

        def run(self, args, opts):
            spider_list = self.crawler_process.spiders.list()
            for name in spider_list:
                self.crawler_process.crawl(name, **opts.__dict__)
            self.crawler_process.start()
    

      

    • Add COMMANDS_MODULE = '<project_name>.<directory_name>' to settings.py
    • Run the new command from the project directory: scrapy crawlall

    Custom extensions

    A custom extension uses signals to register a specific operation at a specific point in the crawl:

    from scrapy import signals
    
    
    class MyExtension(object):
        def __init__(self, value):
            self.value = value
    
        @classmethod
        def from_crawler(cls, crawler):
            val = crawler.settings.getint('MMMM')
            ext = cls(val)
    
            crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
            crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
    
            return ext
    
        def spider_opened(self, spider):
            print('open')
    
        def spider_closed(self, spider):
            print('close')
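    To enable the extension, register it in settings.py. A minimal sketch; the module path step8_king.extensions and the MMMM value are assumptions:

    EXTENSIONS = {
        'step8_king.extensions.MyExtension': 500,
    }
    MMMM = 10   # read by from_crawler via crawler.settings.getint('MMMM')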
    

      

    Avoiding duplicate requests

    By default Scrapy deduplicates requests with scrapy.dupefilter.RFPDupeFilter. The related settings are shown below, followed by a custom replacement filter:

    DUPEFILTER_CLASS = 'scrapy.dupefilter.RFPDupeFilter'
    DUPEFILTER_DEBUG = False
    JOBDIR = "directory where the seen-request records are saved, e.g. /root/"  # the final path is /root/requests.seen
    class RepeatUrl:
        def __init__(self):
            self.visited_url = set()
    
        @classmethod
        def from_settings(cls, settings):
            """
            Called at initialisation time
            :param settings:
            :return:
            """
            return cls()

        def request_seen(self, request):
            """
            Check whether the current request has already been visited
            :param request:
            :return: True if it has been visited; False if not
            """
            if request.url in self.visited_url:
                return True
            self.visited_url.add(request.url)
            return False

        def open(self):
            """
            Called when crawling starts
            :return:
            """
            print('open replication')

        def close(self, reason):
            """
            Called when the crawl finishes
            :param reason:
            :return:
            """
            print('close replication')

        def log(self, request, spider):
            """
            Log duplicate requests
            :param request:
            :param spider:
            :return:
            """
            print('repeat', request.url)

    Custom URL dedup filter
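    To switch to the custom filter, point DUPEFILTER_CLASS at it in settings.py. A one-line sketch, assuming the class lives in step8_king/duplication.py (the path also used in the annotated settings below):

    DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'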
    

  Other settings

    The annotated settings.py below walks through the remaining common options:

    # -*- coding: utf-8 -*-
    
    # Scrapy settings for step8_king project
    #
    # For simplicity, this file contains only settings considered important or
    # commonly used. You can find more settings consulting the documentation:
    #
    #     http://doc.scrapy.org/en/latest/topics/settings.html
    #     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
    #     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
    
    # 1. Bot (crawler project) name
    BOT_NAME = 'step8_king'

    # 2. Modules where the spiders live
    SPIDER_MODULES = ['step8_king.spiders']
    NEWSPIDER_MODULE = 'step8_king.spiders'

    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    # 3. Client User-Agent header
    # USER_AGENT = 'step8_king (+http://www.yourdomain.com)'

    # Obey robots.txt rules
    # 4. Whether to obey robots.txt (set to False to ignore it)
    # ROBOTSTXT_OBEY = False

    # Configure maximum concurrent requests performed by Scrapy (default: 16)
    # 5. Number of concurrent requests
    # CONCURRENT_REQUESTS = 4

    # Configure a delay for requests for the same website (default: 0)
    # See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
    # See also autothrottle settings and docs
    # 6. Download delay in seconds
    # DOWNLOAD_DELAY = 2


    # The download delay setting will honor only one of:
    # 7. Concurrent requests per domain; the download delay is also applied per domain
    # CONCURRENT_REQUESTS_PER_DOMAIN = 2
    # Concurrent requests per IP; if set, CONCURRENT_REQUESTS_PER_DOMAIN is ignored and the download delay is applied per IP
    # CONCURRENT_REQUESTS_PER_IP = 3

    # Disable cookies (enabled by default)
    # 8. Whether cookies are enabled; a cookiejar handles them
    # COOKIES_ENABLED = True
    # COOKIES_DEBUG = True

    # Disable Telnet Console (enabled by default)
    # 9. The Telnet console lets you inspect and control the running crawler...
    #    connect with: telnet <ip> <port>, then issue commands
    # TELNETCONSOLE_ENABLED = True
    # TELNETCONSOLE_HOST = '127.0.0.1'
    # TELNETCONSOLE_PORT = [6023,]


    # 10. Default request headers
    # Override the default request headers:
    # DEFAULT_REQUEST_HEADERS = {
    #     'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    #     'Accept-Language': 'en',
    # }
    
    
    # Configure item pipelines
    # See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
    # 11. Item pipelines for processing scraped items
    # ITEM_PIPELINES = {
    #    'step8_king.pipelines.JsonPipeline': 700,
    #    'step8_king.pipelines.FilePipeline': 500,
    # }
    
    
    
    # 12. Custom extensions, invoked via signals
    # Enable or disable extensions
    # See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
    # EXTENSIONS = {
    #     # 'step8_king.extensions.MyExtension': 500,
    # }
    
    
    # 13. Maximum crawl depth; the current depth is available via request meta; 0 means unlimited
    # DEPTH_LIMIT = 3

    # 14. Crawl order: 0 means depth-first (LIFO, the default); 1 means breadth-first (FIFO)

    # Last in, first out: depth-first
    # DEPTH_PRIORITY = 0
    # SCHEDULER_DISK_QUEUE = 'scrapy.squeues.PickleLifoDiskQueue'
    # SCHEDULER_MEMORY_QUEUE = 'scrapy.squeues.LifoMemoryQueue'
    # First in, first out: breadth-first

    # DEPTH_PRIORITY = 1
    # SCHEDULER_DISK_QUEUE = 'scrapy.squeues.PickleFifoDiskQueue'
    # SCHEDULER_MEMORY_QUEUE = 'scrapy.squeues.FifoMemoryQueue'

    # 15. Scheduler queue
    # SCHEDULER = 'scrapy.core.scheduler.Scheduler'
    # from scrapy.core.scheduler import Scheduler
    
    
    # 16. URL dedup filter class
    # DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'
    
    
    # Enable and configure the AutoThrottle extension (disabled by default)
    # See http://doc.scrapy.org/en/latest/topics/autothrottle.html
    
    """
    17. 自动限速算法
        from scrapy.contrib.throttle import AutoThrottle
        自动限速设置
        1. 获取最小延迟 DOWNLOAD_DELAY
        2. 获取最大延迟 AUTOTHROTTLE_MAX_DELAY
        3. 设置初始下载延迟 AUTOTHROTTLE_START_DELAY
        4. 当请求下载完成后,获取其"连接"时间 latency,即:请求连接到接受到响应头之间的时间
        5. 用于计算的... AUTOTHROTTLE_TARGET_CONCURRENCY
        target_delay = latency / self.target_concurrency
        new_delay = (slot.delay + target_delay) / 2.0 # 表示上一次的延迟时间
        new_delay = max(target_delay, new_delay)
        new_delay = min(max(self.mindelay, new_delay), self.maxdelay)
        slot.delay = new_delay
    """
    
    # Enable AutoThrottle
    # AUTOTHROTTLE_ENABLED = True
    # The initial download delay
    # AUTOTHROTTLE_START_DELAY = 5
    # The maximum download delay to be set in case of high latencies
    # AUTOTHROTTLE_MAX_DELAY = 10
    # The average number of requests Scrapy should be sending in parallel to each remote server
    # AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0

    # Enable showing throttling stats for every response received:
    # AUTOTHROTTLE_DEBUG = True
    
    # Enable and configure HTTP caching (disabled by default)
    # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
    
    
    """
    18. 启用缓存
        目的用于将已经发送的请求或相应缓存下来,以便以后使用
        
        from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddleware
        from scrapy.extensions.httpcache import DummyPolicy
        from scrapy.extensions.httpcache import FilesystemCacheStorage
    """
    # 是否启用缓存策略
    # HTTPCACHE_ENABLED = True
    
    # 缓存策略:所有请求均缓存,下次在请求直接访问原来的缓存即可
    # HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"
    # 缓存策略:根据Http响应头:Cache-Control、Last-Modified 等进行缓存的策略
    # HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy"
    
    # 缓存超时时间
    # HTTPCACHE_EXPIRATION_SECS = 0
    
    # 缓存保存路径
    # HTTPCACHE_DIR = 'httpcache'
    
    # 缓存忽略的Http状态码
    # HTTPCACHE_IGNORE_HTTP_CODES = []
    
    # 缓存存储的插件
    # HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
    
    
    """
    19. Proxy support; by default configured through environment variables
        from scrapy.contrib.downloadermiddleware.httpproxy import HttpProxyMiddleware
        
        Option 1: use the default HttpProxyMiddleware
            os.environ
            {
                http_proxy:http://root:woshiniba@192.168.11.11:9999/
                https_proxy:http://192.168.11.11:9999/
            }
        Option 2: use a custom downloader middleware (the example below assumes import base64, random, six at module level)
        
        def to_bytes(text, encoding=None, errors='strict'):
            if isinstance(text, bytes):
                return text
            if not isinstance(text, six.string_types):
                raise TypeError('to_bytes must receive a unicode, str or bytes '
                                'object, got %s' % type(text).__name__)
            if encoding is None:
                encoding = 'utf-8'
            return text.encode(encoding, errors)
            
        class ProxyMiddleware(object):
            def process_request(self, request, spider):
                PROXIES = [
                    {'ip_port': '111.11.228.75:80', 'user_pass': ''},
                    {'ip_port': '120.198.243.22:80', 'user_pass': ''},
                    {'ip_port': '111.8.60.9:8123', 'user_pass': ''},
                    {'ip_port': '101.71.27.120:80', 'user_pass': ''},
                    {'ip_port': '122.96.59.104:80', 'user_pass': ''},
                    {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
                ]
                proxy = random.choice(PROXIES)
                if proxy['user_pass'] is not None:
                    request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
                    encoded_user_pass = base64.b64encode(to_bytes(proxy['user_pass']))
                    request.headers['Proxy-Authorization'] = to_bytes('Basic ') + encoded_user_pass
                    print("**************ProxyMiddleware have pass************" + proxy['ip_port'])
                else:
                    print("**************ProxyMiddleware no pass************" + proxy['ip_port'])
                    request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
        
        DOWNLOADER_MIDDLEWARES = {
           'step8_king.middlewares.ProxyMiddleware': 500,
        }
        
    """
    
    """
    20. HTTPS access
        There are two cases when crawling HTTPS sites:
        1. The target site uses a trusted certificate (supported by default)
            DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
            DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy.core.downloader.contextfactory.ScrapyClientContextFactory"

        2. The target site uses a custom (e.g. self-signed) certificate
            DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
            DOWNLOADER_CLIENTCONTEXTFACTORY = "step8_king.https.MySSLFactory"
            
            # https.py
            from scrapy.core.downloader.contextfactory import ScrapyClientContextFactory
            from twisted.internet.ssl import (optionsForClientTLS, CertificateOptions, PrivateCertificate)
            
            class MySSLFactory(ScrapyClientContextFactory):
                def getCertificateOptions(self):
                    from OpenSSL import crypto
                    v1 = crypto.load_privatekey(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.key.unsecure', mode='r').read())
                    v2 = crypto.load_certificate(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.pem', mode='r').read())
                    return CertificateOptions(
                        privateKey=v1,  # PKey object
                        certificate=v2,  # X509 object
                        verify=False,
                        method=getattr(self, 'method', getattr(self, '_ssl_method', None))
                    )
        Other:
            Related classes
                scrapy.core.downloader.handlers.http.HttpDownloadHandler
                scrapy.core.downloader.webclient.ScrapyHTTPClientFactory
                scrapy.core.downloader.contextfactory.ScrapyClientContextFactory
            Related settings
                DOWNLOADER_HTTPCLIENTFACTORY
                DOWNLOADER_CLIENTCONTEXTFACTORY
    
    """
    
    
    
    """
    21. Spider middleware
        class SpiderMiddleware(object):

            def process_spider_input(self, response, spider):
                '''
                Called for each response after download, before it is handed to parse
                :param response:
                :param spider:
                :return:
                '''
                pass

            def process_spider_output(self, response, result, spider):
                '''
                Called with the results the spider returns after processing a response
                :param response:
                :param result:
                :param spider:
                :return: must return an iterable containing Request or Item objects
                '''
                return result

            def process_spider_exception(self, response, exception, spider):
                '''
                Called on exceptions
                :param response:
                :param exception:
                :param spider:
                :return: None to let later middleware keep handling the exception; or an iterable of Response or Item objects, handed to the scheduler or the pipeline
                '''
                return None


            def process_start_requests(self, start_requests, spider):
                '''
                Called when the spider starts
                :param start_requests:
                :param spider:
                :return: an iterable containing Request objects
                '''
                return start_requests

        Built-in spider middleware:
            'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,
            'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,
            'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,
            'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,
            'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,
    
    """
    # from scrapy.contrib.spidermiddleware.referer import RefererMiddleware
    # Enable or disable spider middlewares
    # See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
    SPIDER_MIDDLEWARES = {
       # 'step8_king.middlewares.SpiderMiddleware': 543,
    }
    
    
    """
    22. Downloader middleware
        class DownMiddleware1(object):
            def process_request(self, request, spider):
                '''
                Called for every request that needs downloading, through each downloader middleware
                :param request:
                :param spider:
                :return:
                    None: continue with the remaining middleware and download as usual
                    Response object: stop calling process_request and start calling process_response
                    Request object: stop the middleware chain and hand the Request back to the scheduler
                    raise IgnoreRequest: stop calling process_request and start calling process_exception
                '''
                pass



            def process_response(self, request, response, spider):
                '''
                Called with the downloaded response on its way back
                :param response:
                :param result:
                :param spider:
                :return:
                    Response object: passed on to the other middlewares' process_response
                    Request object: stop the middleware chain; the request is rescheduled for download
                    raise IgnoreRequest: Request.errback is called
                '''
                print('response1')
                return response

            def process_exception(self, request, exception, spider):
                '''
                Called when the download handler or a downloader middleware's process_request() raises an exception
                :param response:
                :param exception:
                :param spider:
                :return:
                    None: let later middleware keep handling the exception
                    Response object: stop later process_exception methods
                    Request object: stop the middleware chain; the request will be rescheduled for download
                '''
                return None


        Default downloader middlewares
        {
            'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,
            'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,
            'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350,
            'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,
            'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,
            'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,
            'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580,
            'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590,
            'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,
            'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,
            'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,
            'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830,
            'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,
            'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,
        }
    
    """
    # from scrapy.contrib.downloadermiddleware.httpauth import HttpAuthMiddleware
    # Enable or disable downloader middlewares
    # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
    # DOWNLOADER_MIDDLEWARES = {
    #    'step8_king.middlewares.DownMiddleware1': 100,
    #    'step8_king.middlewares.DownMiddleware2': 500,
    # }
    
    settings.py
    

  TinyScrapy

    A miniature re-implementation of Scrapy's engine loop on top of Twisted, illustrating how the request queue, the downloader, and spider callbacks cooperate:

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
    import types
    from twisted.internet import defer
    from twisted.web.client import getPage
    from twisted.internet import reactor
    
    
    
    class Request(object):
        def __init__(self, url, callback):
            self.url = url
            self.callback = callback
            self.priority = 0
    
    
    class HttpResponse(object):
        def __init__(self, content, request):
            self.content = content
            self.request = request
    
    
    class ChouTiSpider(object):
    
        def start_requests(self):
            url_list = ['http://www.cnblogs.com/', 'http://www.bing.com']
            for url in url_list:
                yield Request(url=url, callback=self.parse)
    
        def parse(self, response):
            print(response.request.url)
            # yield Request(url="http://www.baidu.com", callback=self.parse)
    
    
    
    
    from queue import Queue
    Q = Queue()
    
    
    class CallLaterOnce(object):
        def __init__(self, func, *a, **kw):
            self._func = func
            self._a = a
            self._kw = kw
            self._call = None
    
        def schedule(self, delay=0):
            if self._call is None:
                self._call = reactor.callLater(delay, self)
    
        def cancel(self):
            if self._call:
                self._call.cancel()
    
        def __call__(self):
            self._call = None
            return self._func(*self._a, **self._kw)
    
    
    class Engine(object):
        def __init__(self):
            self.nextcall = None
            self.crawlling = []
            self.max = 5
            self._closewait = None
    
        def get_response(self, content, request):
            # Wrap the downloaded content and invoke the spider callback;
            # any Requests the callback yields go back onto the queue
            response = HttpResponse(content, request)
            gen = request.callback(response)
            if isinstance(gen, types.GeneratorType):
                for req in gen:
                    req.priority = request.priority + 1
                    Q.put(req)


        def rm_crawlling(self, response, d):
            self.crawlling.remove(d)

        def _next_request(self, spider):
            # Finish the crawl once the queue is empty and nothing is in flight
            if Q.qsize() == 0 and len(self.crawlling) == 0:
                self._closewait.callback(None)

            # Keep at most 5 downloads in flight at a time
            if len(self.crawlling) >= 5:
                return
            while len(self.crawlling) < 5:
                try:
                    req = Q.get(block=False)
                except Exception as e:
                    req = None
                if not req:
                    return
                d = getPage(req.url.encode('utf-8'))
                self.crawlling.append(d)
                d.addCallback(self.get_response, req)
                d.addCallback(self.rm_crawlling, d)
                d.addCallback(lambda _: self.nextcall.schedule())
    
    
        @defer.inlineCallbacks
        def crawl(self):
            spider = ChouTiSpider()
            start_requests = iter(spider.start_requests())
            flag = True
            while flag:
                try:
                    req = next(start_requests)
                    Q.put(req)
                except StopIteration as e:
                    flag = False
    
            self.nextcall = CallLaterOnce(self._next_request,spider)
            self.nextcall.schedule()
    
            self._closewait = defer.Deferred()
            yield self._closewait
    
        @defer.inlineCallbacks
        def pp(self):
            yield self.crawl()
    
    _active = set()
    obj = Engine()
    d = obj.crawl()          # Deferred that fires when the crawl finishes
    _active.add(d)

    li = defer.DeferredList(_active)
    li.addBoth(lambda _, *a, **kw: reactor.stop())   # stop the reactor once every crawl is done

    reactor.run()
    
    
    

      
