  • 16 - Incremental Crawlers

    Incremental Crawlers

      An incremental crawler, as the name suggests, stores only the new content when a website updates, instead of storing data it has already collected. For example, a movie site adds new films every so often, and a novel site publishes new chapters every day.

      An incremental crawler therefore needs to decide, before sending a request, whether the URL has already been crawled, and, after parsing, whether the data has already been stored (a minimal sketch of this flow follows the list):

              1. Store every URL that has been crawled

              2. Store every record that has been crawled (store a hash of each record to save space)

              3. Before fetching or persisting, check whether the URL or record already exists in that store
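      A minimal sketch of this check-before-store flow, using plain in-memory sets in place of a real store (the sections below swap these for Redis sets; the function names here are illustrative only):

    import hashlib

    seen_urls = set()
    seen_fingerprints = set()

    def should_fetch(url):
        # step 1: skip urls that were already crawled
        if url in seen_urls:
            return False
        seen_urls.add(url)
        return True

    def should_store(record):
        # step 2: hash the record so only a short, fixed-size fingerprint is kept
        fp = hashlib.sha256(record.encode()).hexdigest()
        # step 3: only store records whose fingerprint is unseen
        if fp in seen_fingerprints:
            return False
        seen_fingerprints.add(fp)
        return True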

    Deduplication methods:

        1) Store each crawled URL in a Redis set. On the next run, check every URL against that set before sending the request: if it is already there, skip the request; otherwise send it. (Unlike Scrapy's built-in request dupefilter, which normally lasts only for a single run, a Redis set survives across runs, which is exactly what incremental crawling needs.)

        2) Compute a unique fingerprint for every scraped record and store it in a Redis set. On the next run, before persisting a record, check whether its fingerprint is already in the set: if it is, skip the record; if not, store both the fingerprint and the data.
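    Both methods hinge on the return value of Redis's SADD command: it returns the number of elements actually added, i.e. 1 for a new member and 0 for one that is already in the set. A quick demonstration with redis-py (assumes a Redis server on localhost):

    from redis import Redis

    conn = Redis(host="127.0.0.1", port=6379)
    print(conn.sadd("urls", "http://example.com/a"))  # 1: newly inserted
    print(conn.sadd("urls", "http://example.com/a"))  # 0: already present, skip it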

    1. URL deduplication

    spiders/Movie.py

    # -*- coding: utf-8 -*-
    import scrapy
    from scrapy.linkextractors import LinkExtractor
    from scrapy.spiders import CrawlSpider, Rule
    # import Redis for storing data
    from redis import Redis
    from Addcrawl.items import AddcrawlItem
    
    class MovieSpider(CrawlSpider):
        name = 'Movie'
        # allowed_domains is commented out
        # allowed_domains = ['www.xxx.com']
        start_urls = ['https://www.4567tv.tv/frim/index7.html']
    
        rules = (
            Rule(LinkExtractor(allow=r'/frim/index7-\d+\.html'), callback='parse_item', follow=True),
    
        )
        # create the Redis connection once, as a class attribute: putting it inside
        # parse_item would open a new connection on every call
        conn = Redis(host="127.0.0.1", port=6379)
        def parse_item(self, response):
            li_list = response.xpath('/html/body/div[1]/div/div/div/div[2]/ul/li')
    
            for li in li_list:
                # build the absolute url of the detail page
                detail_url = "http://www.4567tv.tv" + li.xpath('./div/a/@href').extract_first()
                # add the detail-page url to the "urls" set in redis
                ret = self.conn.sadd("urls", detail_url)
                # sadd returns 1 when the value was newly inserted, 0 when it already existed
                if ret == 1:
                    print("This url has not been crawled yet; fetching it.")
                    yield scrapy.Request(url=detail_url, callback=self.parse_detail)
                else:
                    print("Already crawled; skipping.")
        def parse_detail(self, response):
            page = response.xpath('/html/body/div[1]/div/div/div/div[2]')
            item = AddcrawlItem()
            item["title"] = page.xpath('./h1/text()').extract_first()
            # extract() returns a list (possibly empty), so the join cannot crash on a missing node
            content = page.xpath('./p[5]/span[2]/text()').extract()
            item["content"] = "".join(content)
            yield item

    items.py

    # -*- coding: utf-8 -*-
    
    # Define here the models for your scraped items
    #
    # See documentation in:
    # https://docs.scrapy.org/en/latest/topics/items.html
    
    import scrapy
    
    
    class AddcrawlItem(scrapy.Item):
        # define the fields for your item here like:
        title = scrapy.Field()
        content = scrapy.Field()

    pipelines.py

    # -*- coding: utf-8 -*-
    
    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
    
    
    import json


    class AddcrawlPipeline(object):
        def process_item(self, item, spider):
            dic = {
                "title": item["title"],
                "content": item["content"],
            }
            print(dic)
            # redis list values must be bytes/str/numbers, so serialize the dict first
            spider.conn.lpush("moviedatas", json.dumps(dic))
            return item
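    Since Redis list values must be bytes, strings, or numbers, the pipeline serializes each dict with json.dumps before pushing. Reading the stored items back is the reverse; a small verification sketch:

    import json
    from redis import Redis

    conn = Redis(host="127.0.0.1", port=6379)
    # lpush puts the newest item at the head of the list
    for raw in conn.lrange("moviedatas", 0, -1):
        print(json.loads(raw))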

    settings.py

    # -*- coding: utf-8 -*-
    
    # Scrapy settings for Addcrawl project
    #
    # For simplicity, this file contains only settings considered important or
    # commonly used. You can find more settings consulting the documentation:
    #
    #     https://docs.scrapy.org/en/latest/topics/settings.html
    #     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
    #     https://docs.scrapy.org/en/latest/topics/spider-middleware.html
    
    BOT_NAME = 'Addcrawl'
    
    SPIDER_MODULES = ['Addcrawl.spiders']
    NEWSPIDER_MODULE = 'Addcrawl.spiders'
    
    
    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    #USER_AGENT = 'Addcrawl (+http://www.yourdomain.com)'
    USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36'
    # Obey robots.txt rules
    ROBOTSTXT_OBEY = False
    LOG_LEVEL = "ERROR"
    # Configure maximum concurrent requests performed by Scrapy (default: 16)
    #CONCURRENT_REQUESTS = 32
    
    # Configure a delay for requests for the same website (default: 0)
    # See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
    # See also autothrottle settings and docs
    #DOWNLOAD_DELAY = 3
    # The download delay setting will honor only one of:
    #CONCURRENT_REQUESTS_PER_DOMAIN = 16
    #CONCURRENT_REQUESTS_PER_IP = 16
    
    # Disable cookies (enabled by default)
    #COOKIES_ENABLED = False
    
    # Disable Telnet Console (enabled by default)
    #TELNETCONSOLE_ENABLED = False
    
    # Override the default request headers:
    #DEFAULT_REQUEST_HEADERS = {
    #   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    #   'Accept-Language': 'en',
    #}
    
    # Enable or disable spider middlewares
    # See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
    #SPIDER_MIDDLEWARES = {
    #    'Addcrawl.middlewares.AddcrawlSpiderMiddleware': 543,
    #}
    
    # Enable or disable downloader middlewares
    # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
    #DOWNLOADER_MIDDLEWARES = {
    #    'Addcrawl.middlewares.AddcrawlDownloaderMiddleware': 543,
    #}
    
    # Enable or disable extensions
    # See https://docs.scrapy.org/en/latest/topics/extensions.html
    #EXTENSIONS = {
    #    'scrapy.extensions.telnet.TelnetConsole': None,
    #}
    
    # Configure item pipelines
    # See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
    ITEM_PIPELINES = {
       'Addcrawl.pipelines.AddcrawlPipeline': 300,
    }
    
    # Enable and configure the AutoThrottle extension (disabled by default)
    # See https://docs.scrapy.org/en/latest/topics/autothrottle.html
    #AUTOTHROTTLE_ENABLED = True
    # The initial download delay
    #AUTOTHROTTLE_START_DELAY = 5
    # The maximum download delay to be set in case of high latencies
    #AUTOTHROTTLE_MAX_DELAY = 60
    # The average number of requests Scrapy should be sending in parallel to
    # each remote server
    #AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
    # Enable showing throttling stats for every response received:
    #AUTOTHROTTLE_DEBUG = False
    
    # Enable and configure HTTP caching (disabled by default)
    # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
    #HTTPCACHE_ENABLED = True
    #HTTPCACHE_EXPIRATION_SECS = 0
    #HTTPCACHE_DIR = 'httpcache'
    #HTTPCACHE_IGNORE_HTTP_CODES = []
    #HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
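    With the four files in place, the crawl is started from the project root with `scrapy crawl Movie` (a local Redis server must be running). Re-running the spider should then fetch only detail pages whose URLs are not yet in the set. The deduplication state can be inspected directly, for example:

    from redis import Redis

    conn = Redis(host="127.0.0.1", port=6379)
    print(conn.scard("urls"))        # number of distinct detail-page urls recorded
    print(conn.llen("moviedatas"))   # number of items persisted by the pipeline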

    2. Data deduplication

    spiders/Movie.py

    # -*- coding: utf-8 -*-
    import scrapy
    from scrapy.linkextractors import LinkExtractor
    from scrapy.spiders import CrawlSpider, Rule
    # import Redis for storing data
    from redis import Redis
    from Addcrawl.items import AddcrawlItem
    # hashlib for building data fingerprints
    import hashlib
    class MovieSpider(CrawlSpider):
        name = 'Movie'
        # allowed_domains is commented out
        # allowed_domains = ['www.xxx.com']
        start_urls = ['https://www.4567tv.tv/frim/index7.html']
    
        rules = (
            Rule(LinkExtractor(allow=r'/frim/index7-\d+\.html'), callback='parse_item', follow=True),
    
        )
        # create the Redis connection once, as a class attribute: putting it inside
        # parse_item would open a new connection on every call
        conn = Redis(host="127.0.0.1", port=6379)
        def parse_item(self, response):
            li_list = response.xpath('/html/body/div[1]/div/div/div/div[2]/ul/li')
            for li in li_list:
                # build the absolute url of the detail page
                detail_url = "http://www.4567tv.tv" + li.xpath('./div/a/@href').extract_first()
                yield scrapy.Request(url=detail_url, callback=self.parse_detail)
    
        def parse_detail(self, response):
            page = response.xpath('/html/body/div[1]/div/div/div/div[2]')
            item = AddcrawlItem()
            item["title"] = page.xpath('./h1/text()').extract_first()
            # extract() returns a list, so the join cannot crash on a missing node
            content = page.xpath('./p[5]/span[2]/text()').extract()
            item["content"] = "".join(content)
            # build a unique fingerprint of the parsed data
            source = item["title"] + item["content"]
            source_id = hashlib.sha256(source.encode()).hexdigest()
            # store the fingerprint in the redis set "data_id"
            ret = self.conn.sadd("data_id", source_id)
            if ret == 1:
                print("This record has not been scraped before; storing it.")
                yield item
            else:
                print("Already scraped; skipping.")
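    The fingerprint is deterministic: hashing the same title and content always produces the same 64-character hex digest, so a re-crawled page maps onto an existing set member. A quick illustration (the strings are made up):

    import hashlib

    record = "Some Movie" + "A short plot summary."
    fp1 = hashlib.sha256(record.encode()).hexdigest()
    fp2 = hashlib.sha256(record.encode()).hexdigest()
    assert fp1 == fp2              # identical input, identical fingerprint
    print(len(fp1))                # 64 hex characters, regardless of input size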
    items.py (unchanged from part 1)
    pipelines.py

    # -*- coding: utf-8 -*-

    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

    import json


    class AddcrawlPipeline(object):
        def process_item(self, item, spider):
            dic = {
                "title": item["title"],
                "content": item["content"],
            }
            print(dic)
            # serialize the dict before pushing; this run uses a separate list
            spider.conn.lpush("moviedatas2", json.dumps(dic))
            return item
    settings.py (identical to the settings shown in part 1)
    Summary: incremental crawling comes down to deduplication, done in one of two ways: deduplicate the URLs that have been crawled, or deduplicate the data that has been stored. In both cases the URL or the data fingerprint is first stored separately (here, in a Redis set), and the return value of the insert decides whether an item is new, so only the latest data gets crawled.
  • Original post: https://www.cnblogs.com/lishuntao/p/11647602.html