  • 10 Distributed crawlers based on scrapy-redis

    1. 安装

      pip install scrapy_redis

    2. 爬虫文件

      scrapy-redis provides two spider base classes. The first, RedisSpider, reads its start urls from a redis list (the redis_key below) instead of start_urls:

      

    from scrapy_redis.spiders import RedisSpider
    
    
    class MySpider(RedisSpider):
        """Spider that reads urls from redis queue (myspider:start_urls)."""
        name = 'myspider_redis'
        redis_key = 'myspider:start_urls'
    
        def __init__(self, *args, **kwargs):
            # Dynamically define the allowed domains list.
            domain = kwargs.pop('domain', '')
            self.allowed_domains = list(filter(None, domain.split(',')))  # wrap in list(): filter() returns a lazy iterator on Python 3
            super(MySpider, self).__init__(*args, **kwargs)
    
        def parse(self, response):
            return {
                'name': response.css('title::text').extract_first(),
                'url': response.url,
            }
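
      The second, RedisCrawlSpider, works the same way but additionally supports CrawlSpider-style link-extraction rules:
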
    from scrapy.spiders import Rule
    from scrapy.linkextractors import LinkExtractor
    
    from scrapy_redis.spiders import RedisCrawlSpider
    
    
    class MyCrawler(RedisCrawlSpider):
        """Spider that reads urls from redis queue (myspider:start_urls)."""
        name = 'mycrawler_redis'
        redis_key = 'mycrawler:start_urls'
    
        rules = (
            # follow all links
            Rule(LinkExtractor(), callback='parse_page', follow=True),
        )
    
        def __init__(self, *args, **kwargs):
            # Dynamically define the allowed domains list.
            domain = kwargs.pop('domain', '')
            self.allowed_domains = list(filter(None, domain.split(',')))  # wrap in list(): filter() returns a lazy iterator on Python 3
            super(MyCrawler, self).__init__(*args, **kwargs)
    
        def parse_page(self, response):
            return {
                'name': response.css('title::text').extract_first(),
                'url': response.url,
            }

    3. settings configuration

      # Use the item pipeline shipped with scrapy-redis (stores scraped items in redis)
      ITEM_PIPELINES = {
          'scrapy_redis.pipelines.RedisPipeline': 400,
      }

      # Use the scrapy-redis duplicate filter (request fingerprints shared via redis)
      DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

      # Use the scheduler provided by scrapy-redis
      SCHEDULER = "scrapy_redis.scheduler.Scheduler"

      # Keep the queue and dupefilter in redis, so a crawl can be paused and resumed
      SCHEDULER_PERSIST = True

      # Connection to the shared redis server
      REDIS_HOST = 'ip address of the redis server'
      REDIS_PORT = 6379
      REDIS_ENCODING = 'utf-8'
      REDIS_PARAMS = {'password': '123456'}
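
      REDIS_PARAMS is passed through as keyword arguments to the redis-py client, so other connection options (socket_timeout, for example) can be supplied the same way.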

    4. Start redis-server and redis-cli
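
      For example, on a typical install (the config path here is an assumption and may differ on your system):

      redis-server /etc/redis/redis.conf   # start the server
      redis-cli -h 127.0.0.1 -p 6379       # connect a client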

    5. Run scrapy runspider myspider.py on each node to start the distributed crawl
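
      Since the spiders above accept a domain argument, each node can also be started with it (example.com is a placeholder):

      scrapy runspider myspider.py -a domain=example.com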

    6. Push a starting url into the scheduler's queue: lpush redis_key url
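
      With the RedisSpider example above (redis_key = 'myspider:start_urls'; the url is a placeholder), that is:

      redis-cli lpush myspider:start_urls https://example.com

      Once the crawl is running, items stored by RedisPipeline land by default under the key <spider name>:items and can be inspected with:

      redis-cli lrange myspider_redis:items 0 -1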

      

  • Original post: https://www.cnblogs.com/zhangjian0092/p/11966398.html