
    I. Learning the Scrapy Crawler Framework

    Scrapy uses the Twisted asynchronous networking library to handle network communication. The overall architecture is roughly as follows:

    Scrapy consists mainly of the following components:

    • Engine (Scrapy)

        Handles the data flow for the whole system and triggers events (the core of the framework).

    • Scheduler

        Accepts requests sent over by the engine, pushes them onto a queue, and hands them back when the engine asks again. Think of it as a priority queue of URLs (the addresses of the pages to crawl): it decides which URL to crawl next and filters out duplicate URLs.

    • Downloader

        Downloads page content and returns it to the spiders. (The downloader is built on Twisted, an efficient asynchronous model.)

    • Spiders

        Spiders do the real work: they extract the information you need from specific pages, the so-called Items. You can also extract links in them and let Scrapy go on to crawl the next page.

    • Item Pipeline

        Processes the items the spiders extract from pages. Its main jobs are persisting items, validating them, and stripping out unwanted data. After a spider parses a page, the items are sent to the pipeline and pass through several processing steps in a fixed order.

    • Downloader Middlewares

        Hooks that sit between the Scrapy engine and the downloader and process the requests and responses passing between them.

    • Spider Middlewares

        Hooks that sit between the Scrapy engine and the spiders and process the spiders' response input and request output.

    • Scheduler Middlewares

        Hooks that sit between the Scrapy engine and the scheduler and process the requests and responses sent between them.

      The Scrapy run flow is roughly:

        1. The engine takes a URL from the scheduler for the next page to crawl.

        2. The engine wraps the URL in a Request and hands it to the downloader.

        3. The downloader fetches the resource and wraps it in a Response.

        4. The spider parses the Response.

        5. If an Item is parsed out, it is handed to the item pipeline for further processing.

        6. If a URL is parsed out, it is handed to the scheduler, waiting to be crawled.
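
      The sketch below shows how steps 4 through 6 look from inside a spider: parse() receives the Response (step 4), yields items to the pipeline (step 5), and yields Requests back to the scheduler (step 6). The target site and XPaths are illustrative stand-ins, not part of the original examples.

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = 'quotes'  # hypothetical example spider
        start_urls = ['http://quotes.toscrape.com/']

        def parse(self, response):
            # step 4: parse the Response
            for quote in response.xpath('//div[@class="quote"]'):
                # step 5: yielding a dict/Item sends it to the item pipeline
                yield {'text': quote.xpath('./span[@class="text"]/text()').extract_first()}
            next_page = response.xpath('//li[@class="next"]/a/@href').extract_first()
            if next_page:
                # step 6: yielding a Request hands the URL back to the scheduler
                yield scrapy.Request(response.urljoin(next_page), callback=self.parse)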

    1. Installation
    Linux
        pip3 install scrapy
     
    Windows
      1. pip3 install wheel
      2. Download Twisted from http://www.lfd.uci.edu/~gohlke/pythonlibs/ # look for the Twisted entries
      3. Note: in the wheel filename, cp35 means Python 3.5 and win_amd64 means 64-bit Python (download the one matching your setup)
      4. cd into the download directory and run: pip3 install Twisted-17.1.0-cp35-cp35m-win_amd64.whl
      5. pip3 install scrapy
      6. In cmd, open a Python shell and try: import win32com; if it errors, install the pywin32 exe below
      7. Download and install pywin32: https://sourceforge.net/projects/pywin32/files/
      8. If you get a "service_identity is not installed" error, download the file from https://pypi.python.org/pypi/service_identity
      9. cd to the download path, open cmd there, and run: pip3 install service_identity-17.0.0-py2.py3-none-any.whl
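
    To verify the install, either of the following should print a Scrapy version:
      scrapy version
      python3 -c "import scrapy; print(scrapy.__version__)"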
    2. Basic usage
    1. Create a project sp1
      scrapy startproject sp1
    
        sp1
          - scrapy.cfg          project deploy configuration
          - sp1
            - spiders/          spiders directory
            - middlewares.py    middlewares
            - items.py          structured-data (Item) definitions
            - pipelines.py      persistence
            - settings.py       settings
    
    2. Create a spider
      cd sp1
      scrapy genspider example example.com # template example
      scrapy genspider baidu   baidu.com   # specify the spider name and domain
    
    3. In settings.py, change ROBOTSTXT_OBEY = True to False, i.e. do not obey the site's robots.txt.
    4. From inside the project, run the spider
      scrapy crawl baidu
      scrapy crawl baidu --nolog
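
      You can also run a spider from a script instead of the command line. A minimal sketch, assuming a run.py placed in the project root next to scrapy.cfg:

    # run.py
    from scrapy.crawler import CrawlerProcess
    from scrapy.utils.project import get_project_settings
    
    process = CrawlerProcess(get_project_settings())  # loads the project's settings.py
    process.crawl('baidu')                            # spider name, as in `scrapy crawl baidu`
    process.start()                                   # blocks until the crawl finishes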
    3. Directory layout
    project_name/
       scrapy.cfg
       project_name/
           __init__.py
           items.py
           pipelines.py
           settings.py
           spiders/
               __init__.py
               spider1.py
               spider2.py
               spider3.py

    File descriptions:

    • scrapy.cfg  The project's main configuration. (The real crawler-related settings live in settings.py.)
    • items.py    Data templates for structured data, similar to Django's Model.
    • pipelines   Data-processing behaviors, e.g. persisting structured data.
    • settings.py Settings such as recursion depth, concurrency, download delay, and so on.
    • spiders     The spiders directory: create spider files here and write the crawl rules.

    Note: spider files are usually named after the target site's domain.

    # -*- coding: utf-8 -*-
    import scrapy
    # import sys, io
    # sys.stdout=io.TextIOWrapper(sys.stdout.buffer,encoding='gb18030')
    
    class BaiduSpider(scrapy.Spider):
        name = 'baidu'
        allowed_domains = ['baidu.com']
        start_urls = ['http://baidu.com/']
        # callback
        def parse(self, response):
            print(response.text)
    baidu.py (Baidu spider example)

    Windows encoding issue: add the code below to the spider

    import sys, io
    sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='gb18030')
    Windows encoding fix
    4. Selector examples

    # -*- coding: utf-8 -*-
    import scrapy
    from scrapy.selector import HtmlXPathSelector,Selector
    from scrapy.http import Request
    
    class CnblogsSpider(scrapy.Spider):
        name = 'cnblogs'
        allowed_domains = ['cnblogs.com']
        start_urls = ['https://www.cnblogs.com/']
    
        def parse(self, response):
            hxs = Selector(response=response)
            user_list = hxs.xpath('//div[@class="post_item"]')
            for item in user_list:
                msg = item.xpath('div[@class="post_item_body"]/h3/a/text()').extract_first()
                print(msg)
    cnblogs.py (cnblogs post titles)
    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
    from scrapy.selector import Selector, HtmlXPathSelector
    from scrapy.http import HtmlResponse
    
    html = """<!DOCTYPE html>
    <html>
        <head lang="en">
            <meta charset="UTF-8">
            <title></title>
        </head>
        <body>
            <ul>
                <li class="item-"><a id='i1' href="link.html">first item</a></li>
                <li class="item-0"><a id='i2' href="llink.html">first item</a></li>
                <li class="item-1"><a href="llink2.html">second item<span>vv</span></a></li>
            </ul>
            <div><a href="llink2.html">second item</a></div>
        </body>
    </html>
    """
    response = HtmlResponse(url='http://example.com', body=html, encoding='utf-8')
    # hxs = HtmlXPathSelector(response)
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a')
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a[2]')
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a[@id]')
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a[@id="i1"]')
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a[@href="link.html"][@id="i1"]')
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a[contains(@href, "link")]')
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a[starts-with(@href, "link")]')
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]')
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/text()').extract()
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/@href').extract()
    # print(hxs)
    # hxs = Selector(response=response).xpath('/html/body/ul/li/a/@href').extract()
    # print(hxs)
    # hxs = Selector(response=response).xpath('//body/ul/li/a/@href').extract_first()
    # print(hxs)
    
    # ul_list = Selector(response=response).xpath('//body/ul/li')
    # for item in ul_list:
    #     v = item.xpath('./a/span')
    #     # or
    #     v = item.xpath('a/span')
    #     # or (this one is wrong: it will not find the span tag)
    #     v = item.xpath('*/a/span')
    #     print(v)
    XPath matching rules
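
      The same queries can also be written with CSS selectors; a short sketch against the response built above (extract()/extract_first() behave exactly as with XPath):

    hxs = Selector(response=response).css('a::attr(href)').extract()     # all href values
    print(hxs)
    hxs = Selector(response=response).css('a#i1::text').extract_first()  # text of the a with id="i1"
    print(hxs)
    hxs = Selector(response=response).css('li.item-0 > a').extract()     # a tags directly under li.item-0
    print(hxs)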
    5. Customizing the start URLs
    # -*- coding: utf-8 -*-
    import scrapy
    from scrapy.http import Request
    
    class ChoutiSpider(scrapy.Spider):
        name = 'chouti'
        allowed_domains = ['chouti.com']
        start_urls = ['http://chouti.com/']
    
        # Override scrapy.Spider's start_requests; when the response comes back, the custom my_parse is called
        def start_requests(self):
            for url in self.start_urls:
                yield Request(url,dont_filter=True,callback=self.my_parse)
    
        def my_parse(self, response):
            print(response.text)
    chouti.py
    6. GET and POST requests
    For comparison, the requests library signatures look like:
        requests.get(params={}, headers={}, cookies={})
        requests.post(params={}, headers={}, cookies={}, data={}, json={})
        
    Scrapy's Request takes:
        url, 
        method='GET', 
        headers=None, 
        body=None,
        cookies=None,
        
        
    GET request:
        url, 
        method='GET', 
        headers={}, 
        cookies={},  # or a cookiejar
        
    POST request:
        url, 
        method='POST', 
        headers={}, 
        cookies={},  # or a cookiejar
        body=None,
            application/x-www-form-urlencoded; charset=UTF-8
                form_data = {
                    'user':'alex',
                    'pwd': 123
                }
                # URL-encode the form data
                import urllib.parse
                data = urllib.parse.urlencode({'k1':'v1','k2':'v2'}) # 'k1=v1&k2=v2'
                
                "phone=86155fa&password=asdf&oneMonth=1"   
                
            application/json; charset=UTF-8
                json.dumps()
                
                '{"k1": "v1", "k2": "v2"}'   
        
    Example:
         Request(
            url='http://dig.chouti.com/login',
            method='POST',
            headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'},
            body='phone=8615131255089&password=pppppppp&oneMonth=1',
            callback=self.check_login
        )
    
    With cookies:
            Request(
                url='http://dig.chouti.com/login',
                method='POST',
                cookies=self.cookie_dict,  # pass the cookies captured earlier
                headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'},
                body='phone=8615131255089&password=pppppppp&oneMonth=1',
                callback=self.check_login
            )
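
      Instead of urlencoding the body and setting the Content-Type header by hand, Scrapy's FormRequest can build the form POST for you. A sketch with placeholder credentials:

    from scrapy.http import FormRequest
    
    # FormRequest urlencodes formdata and sets the
    # application/x-www-form-urlencoded Content-Type header itself
    yield FormRequest(
        url='http://dig.chouti.com/login',
        formdata={'phone': '86155xxxxxxxx', 'password': 'xxxx', 'oneMonth': '1'},
        callback=self.check_login,
    )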
    7. Example: logging in to Chouti and upvoting posts
    # -*- coding: utf-8 -*-
    import scrapy
    from scrapy.http import Request
    from scrapy.selector import Selector
    
    class ChoutiSpider(scrapy.Spider):
        name = 'chouti'
        allowed_domains = ['chouti.com']
        start_urls = ['http://chouti.com/']
        cookie_dict = {}
        # Override scrapy.Spider's start_requests; when the response comes back, the custom my_parse is called
        def start_requests(self):
            for url in self.start_urls:
                yield Request(url,dont_filter=True,callback=self.my_parse)
    
        def my_parse(self,response):
            """
            Log in using the homepage and its pre-auth cookie; on success, parse2 is called
            :param response: response.text is the full content of the Chouti homepage
            :return: 
            """
            from scrapy.http.cookies import CookieJar
            cookie_jar = CookieJar() # a CookieJar object holding the cookies
            cookie_jar.extract_cookies(response, response.request) # pull the cookies out of the response
    
            for k, v in cookie_jar._cookies.items():
                for i, j in v.items():
                    for m, n in j.items():
                        self.cookie_dict[m] = n.value
            post_dict = {
                'phone': '8615156755089',
                'password': 'xxxxxwwwww',
                'oneMonth': 1,
            }
            import urllib.parse
    
            # goal: send a POST to log in
            yield Request(
                url="http://dig.chouti.com/login",
                method='POST',
                cookies=self.cookie_dict,
                body=urllib.parse.urlencode(post_dict),
                headers={'Content-Type':'application/x-www-form-urlencoded; charset=UTF-8'},
                callback=self.parse2
            )
    
        def parse2(self,response):
            """
            Send a GET for the homepage now that we are logged in; on success, parse3 is called
            :param response: the homepage content after a successful login
            :return: 
            """
            print(response.text)
            # fetch the news list
            yield Request(url='http://dig.chouti.com/',cookies=self.cookie_dict,callback=self.parse3)
    
        def parse3(self,response):
            """
            1. Walk every upvote id on the first page, build the full URL, POST to it, and let parse4 print the result
            2. Collect all the page links, build the full URLs, and recurse into this parse3 to collect each page's upvote ids and vote on them
            :param response: 
            :return: 
            """
            # find the divs with class=part2 and read their share-linkid
            hxs = Selector(response)
            link_id_list = hxs.xpath('//div[@class="part2"]/@share-linkid').extract()
            print(link_id_list)
            for link_id in link_id_list:
                # upvote each id
                base_url = "http://dig.chouti.com/link/vote?linksId=%s" %(link_id,)
                yield Request(url=base_url,method="POST",cookies=self.cookie_dict,callback=self.parse4)
    
            page_list = hxs.xpath('//div[@id="dig_lcpage"]//a/@href').extract()
            for page in page_list:
                #http://dig.chouti.com/ /all/hot/recent/2
                page_url = "http://dig.chouti.com%s" %(page,)
                yield Request(url=page_url,method='GET',callback=self.parse3)
                
        def parse4(self, response):
            print(response.text)

      Note: set DEPTH_LIMIT = 1 in settings.py to limit the "recursion" depth.
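
      A quick way to watch that depth: the built-in DepthMiddleware records the current depth in response.meta, so a one-line sketch inside any callback shows which level a response came from:

    def parse3(self, response):
        print(response.request.url, 'depth =', response.meta.get('depth'))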

    8. Item formatting

      The examples above do all the work directly in the parse method. When you want richer data handling, you can use Scrapy's items to structure the data and then hand everything to the pipelines for unified processing.

    # -*- coding: utf-8 -*-
    import scrapy
    from scrapy.http import Request
    from scrapy.selector import Selector
    
    class JianDanSpider(scrapy.Spider):
        name = 'jiandan'
        allowed_domains = ['jandan.net']
        start_urls = ['http://jandan.net/']
    
        def start_requests(self):
            for url in self.start_urls:
                yield Request(url, dont_filter=True,callback=self.parse1)
        def parse1(self,response):
        # response.text is the full homepage content
            hxs = Selector(response)
            a_list = hxs.xpath('//div[@class="indexs"]/h2')
            for tag in a_list:
                url = tag.xpath('./a/@href').extract_first()
                text = tag.xpath('./a/text()').extract_first()
                from ..items import Sp2jiandanItem
                yield Sp2jiandanItem(url=url,text=text)
    jiandan.py

       After writing the spider jiandan.py, we pass its data into items.py:

    from ..items import Sp2jiandanItem
    yield Sp2jiandanItem(url=url,text=text)
    

      In items.py, define the class Sp2jiandanItem, inheriting from scrapy.Item, like this:

    import scrapy
    class Sp2jiandanItem(scrapy.Item):
        # define the fields for your item here like:
        # name = scrapy.Field()
        url = scrapy.Field()
        text = scrapy.Field()
    items.py

      Then we register our pipeline class JiandanPipeline (defined in pipelines.py) in the settings file:

    # When registering several classes, a smaller number means higher priority; it fixes the execution order
    ITEM_PIPELINES = {
        'sp2.pipelines.JiandanPipeline': 100,
    }
    settings.py

      After that we can work inside the JiandanPipeline class in pipelines.py and operate on the fields carried by the item.

    class JiandanPipeline(object):
        def __init__(self):
            self.f = None
    
        def process_item(self, item, spider):
            """
    
            :param item:  the object yielded back from the spider
            :param spider: the spider object, e.g. obj = JianDanSpider()
            :return:
            """
            if spider.name == 'jiandan':
                pass
            print(item)
            self.f.write('....')
            # pass the item along to the next pipeline's process_item
            return item
            # from scrapy.exceptions import DropItem
            # raise DropItem()  # later pipelines' process_item will no longer run
    
        @classmethod
        def from_crawler(cls, crawler):
            """
            Called at initialization to create the pipeline object
            :param crawler:
            :return:
            """
            # val = crawler.settings.get('MMMM')
            print('from_crawler on the pipeline: creating the instance')
            return cls()
    
        def open_spider(self,spider):
            """
            Called when the spider starts
            :param spider:
            :return:
            """
            print('spider opened')
            self.f = open('a.log','a+')
    
        def close_spider(self,spider):
            """
            Called when the spider closes
            :param spider:
            :return:
            """
            self.f.close()
    pipelines.py

       All the processing logic lives in the relevant classes in pipelines.py.
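
      As a concrete sketch (assuming the Sp2jiandanItem above), a pipeline that persists every item as one JSON line is a common pattern; register it in ITEM_PIPELINES just like JiandanPipeline:

    import json
    
    class JsonLinesPipeline(object):
        def open_spider(self, spider):
            self.f = open('items.jl', 'a', encoding='utf-8')
    
        def process_item(self, item, spider):
            # one JSON object per line
            self.f.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
            return item  # pass the item on to the next pipeline
    
        def close_spider(self, spider):
            self.f.close()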

    9. Avoiding duplicate visits

       By default scrapy deduplicates requests with scrapy.dupefilter.RFPDupeFilter (scrapy.dupefilters.RFPDupeFilter in newer versions). The related settings:

    DUPEFILTER_CLASS = 'scrapy.dupefilter.RFPDupeFilter'
    DUPEFILTER_DEBUG = False
    JOBDIR = "directory for the record of seen requests, e.g.: /root/"  # the final path is /root/requests.seen
    class RepeatUrl:
        def __init__(self):
            self.visited_url = set()
    
        @classmethod
        def from_settings(cls, settings):
            """
            Called at initialization
            :param settings: 
            :return: 
            """
            return cls()
    
        def request_seen(self, request):
            """
            Check whether the current request has been visited before
            :param request: 
            :return: True if it has been visited; False if not
            """
            if request.url in self.visited_url:
                return True
            self.visited_url.add(request.url)
            return False
    
        def open(self):
            """
            Called when crawling starts
            :return: 
            """
            print('open replication')
    
        def close(self, reason):
            """
            Called when crawling ends
            :param reason: 
            :return: 
            """
            print('close replication')
    
        def log(self, request, spider):
            """
            记录日志
            :param request: 
            :param spider: 
            :return: 
            """
            print('repeat', request.url)
    rep.py

       Then just set DUPEFILTER_CLASS = 'sp2.rep.RepeatUrl' in settings.py.
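
      A sturdier request_seen for the RepeatUrl class above, as a sketch: it assumes scrapy.utils.request.request_fingerprint, the helper shipped with the Scrapy versions this post targets, which treats URLs differing only in query-string order as the same request (the raw-URL set does not):

    from scrapy.utils.request import request_fingerprint
    
    def request_seen(self, request):
        # the fingerprint normalizes method + URL (+ body), so
        # http://x.com/?a=1&b=2 and http://x.com/?b=2&a=1 count once
        fp = request_fingerprint(request)
        if fp in self.visited_url:
            return True
        self.visited_url.add(fp)
        return False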

    10. Custom extensions

      A custom extension uses signals to hook a chosen operation into a chosen point.

    from scrapy import signals
    
    class MyExtension(object):
        def __init__(self, value):
            self.value = value
    
        @classmethod
        def from_crawler(cls, crawler):
            val = crawler.settings.getint('MMMM')
            ext = cls(val)
    
            # register with the spider_opened signal in scrapy
            crawler.signals.connect(ext.opened, signal=signals.spider_opened)
            # register with the spider_closed signal in scrapy
            crawler.signals.connect(ext.closed, signal=signals.spider_closed)
    
            return ext
    
        def opened(self, spider):
            print('open')
    
        def closed(self, spider):
            print('close')
    extends.py

      In the settings.py configuration file:

    EXTENSIONS = {
        'sp2.extends.MyExtension': 500,  # the dotted path to your own module (not scrapy.extensions)
    }
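
      A slightly richer sketch along the same lines: an extension that counts scraped items through the item_scraped signal (the signal names come from scrapy.signals):

    from scrapy import signals
    
    class ItemCounterExtension(object):
        def __init__(self):
            self.count = 0
    
        @classmethod
        def from_crawler(cls, crawler):
            ext = cls()
            crawler.signals.connect(ext.item_scraped, signal=signals.item_scraped)
            crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
            return ext
    
        def item_scraped(self, item, spider):
            self.count += 1
    
        def spider_closed(self, spider):
            print('%s scraped %d items' % (spider.name, self.count))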
    
    11. Middlewares
    class SpiderMiddleware(object):
    
        def process_spider_input(self,response, spider):
            """
            Called for every downloaded response before it is handed to parse
            :param response: 
            :param spider: 
            :return: 
            """
            pass
    
        def process_spider_output(self,response, result, spider):
            """
            Called with the results once the spider has processed a response
            :param response:
            :param result:
            :param spider:
            :return: must return an iterable containing Request or Item objects
            """
            return result
    
        def process_spider_exception(self,response, exception, spider):
            """
            Called when the spider raises an exception
            :param response:
            :param exception:
            :param spider:
            :return: None to pass the exception on to the remaining middlewares; or an iterable containing Request or Item objects, handed to the scheduler or the pipelines
            """
            return None
    
        def process_start_requests(self,start_requests, spider):
            """
            Called when the spider starts
            :param start_requests:
            :param spider:
            :return: an iterable containing Request objects
            """
            return start_requests
    
    """        
    Built-in spider middlewares:
        'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,
        'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,
        'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,
        'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,
        'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,
    
    
    from scrapy.contrib.spidermiddleware.referer import RefererMiddleware
    Enable or disable spider middlewares
    See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
    SPIDER_MIDDLEWARES = {
        'step8_king.middlewares.SpiderMiddleware': 543,
    }        
    """        
    Spider middlewares (middlewares.py)
    Downloader middlewares
    class DownMiddleware1(object):
        def process_request(self, request, spider):
            '''
            Called when a request needs to be downloaded; every downloader middleware's process_request runs
            :param request:
            :param spider:
            :return:
                None: continue through the remaining middlewares and download
                a Response object: stop running process_request and start running process_response
                a Request object: stop the middleware chain and hand the Request back to the scheduler
                raise IgnoreRequest: stop running process_request and start running process_exception
            '''
            pass
    
        def process_response(self, request, response, spider):
            '''
            Called with the response after the download, before it reaches the spider
            :param response:
            :param result:
            :param spider:
            :return:
                a Response object: passed on to the other middlewares' process_response
                a Request object: the middleware chain stops and the request is rescheduled for download
                raise IgnoreRequest: Request.errback is called
            '''
            print('response1')
            return response
    
        def process_exception(self, request, exception, spider):
            '''
            Called when the download handler or a downloader middleware's process_request() raises an exception
            :param response:
            :param exception:
            :param spider:
            :return:
                None: pass the exception on to the remaining middlewares
                a Response object: stop running the remaining process_exception methods
                a Request object: the middleware chain stops and the request is rescheduled for download
            '''
            return None
        
    Default downloader middlewares
        {
            'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,
            'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,
            'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350,
            'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,
            'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,
            'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,
            'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580,
            'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590,
            'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,
            'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,
            'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,
            'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830,
            'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,
            'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,
        }
    
    """
    from scrapy.contrib.downloadermiddleware.httpauth import HttpAuthMiddleware
    Enable or disable downloader middlewares
    See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
    DOWNLOADER_MIDDLEWARES = {
       'step8_king.middlewares.DownMiddleware1': 100,
       'step8_king.middlewares.DownMiddleware2': 500,
    }        
    """        
    Downloader middlewares (middlewares.py)
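
      For something concrete, here is a sketch of a downloader middleware that rotates the User-Agent header per request (the UA strings are made-up placeholders); enable it through DOWNLOADER_MIDDLEWARES as shown above:

    import random
    
    class RandomUserAgentMiddleware(object):
        USER_AGENTS = [
            'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
            'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12)',
        ]
    
        def process_request(self, request, spider):
            request.headers['User-Agent'] = random.choice(self.USER_AGENTS)
            return None  # None: continue through the remaining middlewares and download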
    12. A guided tour of the settings file
    # -*- coding: utf-8 -*-
    
    # Scrapy settings for step8_king project
    #
    # For simplicity, this file contains only settings considered important or
    # commonly used. You can find more settings consulting the documentation:
    #
    #     http://doc.scrapy.org/en/latest/topics/settings.html
    #     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
    #     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
    
    # 1. Bot (crawler) name
    BOT_NAME = 'step8_king'
    
    # 2. Paths of the spider modules
    SPIDER_MODULES = ['step8_king.spiders']
    NEWSPIDER_MODULE = 'step8_king.spiders'
    
    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    # 3. Client User-Agent request header
    # USER_AGENT = 'step8_king (+http://www.yourdomain.com)'
    
    # Obey robots.txt rules
    # 4. Obey the site's robots.txt or not
    # ROBOTSTXT_OBEY = False
    
    # Configure maximum concurrent requests performed by Scrapy (default: 16)
    # 5. Number of concurrent requests
    # CONCURRENT_REQUESTS = 4
    
    # Configure a delay for requests for the same website (default: 0)
    # See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
    # See also autothrottle settings and docs
    # 6. Download delay, in seconds
    # DOWNLOAD_DELAY = 2
    
    
    # The download delay setting will honor only one of:
    # 7. Concurrency per domain; the download delay is likewise applied per domain
    # CONCURRENT_REQUESTS_PER_DOMAIN = 2
    # Concurrency per IP; if set, CONCURRENT_REQUESTS_PER_DOMAIN is ignored and the download delay is applied per IP
    # CONCURRENT_REQUESTS_PER_IP = 3
    
    # Disable cookies (enabled by default)
    # 8. Whether cookies are supported (handled through a cookiejar)
    # COOKIES_ENABLED = True
    # COOKIES_DEBUG = True
    
    # Disable Telnet Console (enabled by default)
    # 9. The Telnet console lets you inspect and control the running crawler...
    #    connect with `telnet ip port`, then drive it with commands
    # TELNETCONSOLE_ENABLED = True
    # TELNETCONSOLE_HOST = '127.0.0.1'
    # TELNETCONSOLE_PORT = [6023,]
    
    
    # 10. Default request headers
    # Override the default request headers:
    # DEFAULT_REQUEST_HEADERS = {
    #     'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    #     'Accept-Language': 'en',
    # }
    
    
    # Configure item pipelines
    # See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
    # 11. Item pipelines that process the scraped items
    # ITEM_PIPELINES = {
    #    'step8_king.pipelines.JsonPipeline': 700,
    #    'step8_king.pipelines.FilePipeline': 500,
    # }
    
    
    
    # 12. Custom extensions, invoked through signals
    # Enable or disable extensions
    # See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
    # EXTENSIONS = {
    #     # 'step8_king.extensions.MyExtension': 500,
    # }
    
    
    # 13. Maximum crawl depth; the current depth can be read from meta; 0 means unlimited
    # DEPTH_LIMIT = 3
    
    # 14. Crawl order: 0 means depth-first, LIFO (the default); 1 means breadth-first, FIFO
    
    # LIFO, depth-first
    # DEPTH_PRIORITY = 0
    # SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleLifoDiskQueue'
    # SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.LifoMemoryQueue'
    # FIFO, breadth-first
    
    # DEPTH_PRIORITY = 1
    # SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleFifoDiskQueue'
    # SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.FifoMemoryQueue'
    
    # 15. Scheduler queue
    # SCHEDULER = 'scrapy.core.scheduler.Scheduler'
    # from scrapy.core.scheduler import Scheduler
    
    
    # 16. URL dedup
    # DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'
    
    
    # Enable and configure the AutoThrottle extension (disabled by default)
    # See http://doc.scrapy.org/en/latest/topics/autothrottle.html
    
    """
    17. AutoThrottle algorithm
        from scrapy.contrib.throttle import AutoThrottle
        Auto-throttle settings:
        1. minimum delay: DOWNLOAD_DELAY
        2. maximum delay: AUTOTHROTTLE_MAX_DELAY
        3. initial download delay: AUTOTHROTTLE_START_DELAY
        4. when a request finishes, take its latency: the time from opening the connection to receiving the response headers
        5. target concurrency used in the computation: AUTOTHROTTLE_TARGET_CONCURRENCY
        target_delay = latency / self.target_concurrency
        new_delay = (slot.delay + target_delay) / 2.0 # slot.delay is the previous delay
        new_delay = max(target_delay, new_delay)
        new_delay = min(max(self.mindelay, new_delay), self.maxdelay)
        slot.delay = new_delay
    """
    
    # enable auto-throttling
    # AUTOTHROTTLE_ENABLED = True
    # The initial download delay
    # initial download delay
    # AUTOTHROTTLE_START_DELAY = 5
    # The maximum download delay to be set in case of high latencies
    # maximum download delay
    # AUTOTHROTTLE_MAX_DELAY = 10
    # The average number of requests Scrapy should be sending in parallel to each remote server
    # average concurrency per remote server
    # AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
    
    # Enable showing throttling stats for every response received:
    # whether to show throttling stats for every response
    # AUTOTHROTTLE_DEBUG = True
    
    # Enable and configure HTTP caching (disabled by default)
    # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
    
    
    """
    18. HTTP cache
        Caches the requests/responses that have already been sent, so they can be reused later
        
        from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddleware
        from scrapy.extensions.httpcache import DummyPolicy
        from scrapy.extensions.httpcache import FilesystemCacheStorage
    """
    # enable the HTTP cache
    # HTTPCACHE_ENABLED = True
    
    # cache policy: every request is cached; the next identical request is served straight from the cache
    # HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"
    # cache policy: cache according to HTTP response headers such as Cache-Control and Last-Modified
    # HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy"
    
    # cache expiry, in seconds
    # HTTPCACHE_EXPIRATION_SECS = 0
    
    # cache directory
    # HTTPCACHE_DIR = 'httpcache'
    
    # HTTP status codes that are never cached
    # HTTPCACHE_IGNORE_HTTP_CODES = []
    
    # cache storage backend
    # HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
    
    
    """
    19. Proxies; with the default middleware they are read from environment variables
        from scrapy.contrib.downloadermiddleware.httpproxy import HttpProxyMiddleware
        
        Option 1: use the default middleware
            os.environ
            {
                http_proxy:http://root:woshiniba@192.168.11.11:9999/
                https_proxy:http://192.168.11.11:9999/
            }
        Option 2: a custom downloader middleware
        
        import base64, random, six
        
        def to_bytes(text, encoding=None, errors='strict'):
            if isinstance(text, bytes):
                return text
            if not isinstance(text, six.string_types):
                raise TypeError('to_bytes must receive a unicode, str or bytes '
                                'object, got %s' % type(text).__name__)
            if encoding is None:
                encoding = 'utf-8'
            return text.encode(encoding, errors)
            
        class ProxyMiddleware(object):
            def process_request(self, request, spider):
                PROXIES = [
                    {'ip_port': '111.11.228.75:80', 'user_pass': ''},
                    {'ip_port': '120.198.243.22:80', 'user_pass': ''},
                    {'ip_port': '111.8.60.9:8123', 'user_pass': ''},
                    {'ip_port': '101.71.27.120:80', 'user_pass': ''},
                    {'ip_port': '122.96.59.104:80', 'user_pass': ''},
                    {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
                ]
                proxy = random.choice(PROXIES)
                if proxy['user_pass'] is not None:
                    request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
                    encoded_user_pass = base64.b64encode(to_bytes(proxy['user_pass']))
                    request.headers['Proxy-Authorization'] = to_bytes('Basic ') + encoded_user_pass
                    print("**************ProxyMiddleware have pass************" + proxy['ip_port'])
                else:
                    print("**************ProxyMiddleware no pass************" + proxy['ip_port'])
                    request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
        
        DOWNLOADER_MIDDLEWARES = {
           'step8_king.middlewares.ProxyMiddleware': 500,
        }
        
    """
    
    """
    20. HTTPS access
        There are two cases when crawling over HTTPS:
        1. the target site uses a trusted certificate (supported by default)
            DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
            DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy.core.downloader.contextfactory.ScrapyClientContextFactory"
            
        2. the target site uses a custom (e.g. self-signed) certificate
            DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
            DOWNLOADER_CLIENTCONTEXTFACTORY = "step8_king.https.MySSLFactory"
            
            # https.py
            from scrapy.core.downloader.contextfactory import ScrapyClientContextFactory
            from twisted.internet.ssl import (optionsForClientTLS, CertificateOptions, PrivateCertificate)
            
            class MySSLFactory(ScrapyClientContextFactory):
                def getCertificateOptions(self):
                    from OpenSSL import crypto
                    v1 = crypto.load_privatekey(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.key.unsecure', mode='r').read())
                    v2 = crypto.load_certificate(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.pem', mode='r').read())
                    return CertificateOptions(
                        privateKey=v1,  # PKey object
                        certificate=v2,  # X509 object
                        verify=False,
                        method=getattr(self, 'method', getattr(self, '_ssl_method', None))
                    )
        Other:
            related classes
                scrapy.core.downloader.handlers.http.HttpDownloadHandler
                scrapy.core.downloader.webclient.ScrapyHTTPClientFactory
                scrapy.core.downloader.contextfactory.ScrapyClientContextFactory
            related settings
                DOWNLOADER_HTTPCLIENTFACTORY
                DOWNLOADER_CLIENTCONTEXTFACTORY
    
    """
    
    
    
    """
    21. Spider middlewares
        class SpiderMiddleware(object):
    
            def process_spider_input(self,response, spider):
                '''
                Called for every downloaded response before it is handed to parse
                :param response: 
                :param spider: 
                :return: 
                '''
                pass
        
            def process_spider_output(self,response, result, spider):
                '''
                Called with the results once the spider has processed a response
                :param response:
                :param result:
                :param spider:
                :return: must return an iterable containing Request or Item objects
                '''
                return result
        
            def process_spider_exception(self,response, exception, spider):
                '''
                Called when the spider raises an exception
                :param response:
                :param exception:
                :param spider:
                :return: None to pass the exception on to the remaining middlewares; or an iterable containing Request or Item objects, handed to the scheduler or the pipelines
                '''
                return None
        
        
            def process_start_requests(self,start_requests, spider):
                '''
                Called when the spider starts
                :param start_requests:
                :param spider:
                :return: an iterable containing Request objects
                '''
                return start_requests
        
        Built-in spider middlewares:
            'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,
            'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,
            'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,
            'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,
            'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,
    
    """
    # from scrapy.contrib.spidermiddleware.referer import RefererMiddleware
    # Enable or disable spider middlewares
    # See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
    SPIDER_MIDDLEWARES = {
       # 'step8_king.middlewares.SpiderMiddleware': 543,
    }
    
    
    """
    22. Downloader middlewares
        class DownMiddleware1(object):
            def process_request(self, request, spider):
                '''
                Called when a request needs to be downloaded; every downloader middleware's process_request runs
                :param request:
                :param spider:
                :return:
                    None: continue through the remaining middlewares and download
                    a Response object: stop running process_request and start running process_response
                    a Request object: stop the middleware chain and hand the Request back to the scheduler
                    raise IgnoreRequest: stop running process_request and start running process_exception
                '''
                pass
        
        
        
            def process_response(self, request, response, spider):
                '''
                Called with the response after the download, before it reaches the spider
                :param response:
                :param result:
                :param spider:
                :return:
                    a Response object: passed on to the other middlewares' process_response
                    a Request object: the middleware chain stops and the request is rescheduled for download
                    raise IgnoreRequest: Request.errback is called
                '''
                print('response1')
                return response
        
            def process_exception(self, request, exception, spider):
                '''
                Called when the download handler or a downloader middleware's process_request() raises an exception
                :param response:
                :param exception:
                :param spider:
                :return:
                    None: pass the exception on to the remaining middlewares
                    a Response object: stop running the remaining process_exception methods
                    a Request object: the middleware chain stops and the request is rescheduled for download
                '''
                return None
    
        
        Default downloader middlewares
        {
            'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,
            'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,
            'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350,
            'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,
            'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,
            'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,
            'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580,
            'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590,
            'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,
            'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,
            'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,
            'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830,
            'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,
            'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,
        }
    
    """
    # from scrapy.contrib.downloadermiddleware.httpauth import HttpAuthMiddleware
    # Enable or disable downloader middlewares
    # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
    # DOWNLOADER_MIDDLEWARES = {
    #    'step8_king.middlewares.DownMiddleware1': 100,
    #    'step8_king.middlewares.DownMiddleware2': 500,
    # }
    
    settings.py
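
      One more settings trick worth knowing: any of the options above can be overridden per spider through the custom_settings class attribute, which takes precedence over the project-wide settings.py. A sketch with a hypothetical spider:

    import scrapy
    
    class GentleSpider(scrapy.Spider):
        name = 'gentle'  # hypothetical spider
        custom_settings = {
            'DOWNLOAD_DELAY': 2,
            'CONCURRENT_REQUESTS': 4,
            'DEPTH_LIMIT': 1,
        }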

     Repost: Scrapy framework

    Original article: https://www.cnblogs.com/guotianbao/p/7966524.html