  • 19 - Crawlers: Downloading Large Files with the Scrapy Framework (06)

    Downloading large files
    Create a scraping project: scrapy startproject proName
    Enter the project directory and create a spider source file: scrapy genspider spiderName www.xxx.com
    Run the project: scrapy crawl spiderName

    Large-file data is requested inside the pipeline, not in the spider.

    The download pipeline class is already packaged by Scrapy; just import and use it:

    from scrapy.pipelines.images import ImagesPipeline # this pipeline provides data-download support (images, video, and audio can all use this class)
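
    For arbitrary large files (PDFs, archives, raw video), Scrapy also ships the more general FilesPipeline; below is a minimal sketch, assuming an item with the same 'src' and 'name' fields as this project (the class name BigFilePipeline is hypothetical):

    import scrapy
    from scrapy.pipelines.files import FilesPipeline  # generic file-download pipeline

    class BigFilePipeline(FilesPipeline):
        def get_media_requests(self, item, info):
            # request the file data inside the pipeline
            yield scrapy.Request(url=item['src'], meta={'item': item})

        def file_path(self, request, response=None, info=None):
            # store under the name carried by the item
            return request.meta['item']['name']

    With FilesPipeline the storage directory comes from FILES_STORE in settings.py, analogous to IMAGES_STORE below.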

    Override three methods of the pipeline class:

    def get_media_requests

    • issues a request for each image URL

    def file_path

    • just returns the image file name

    def item_completed

    • returns the item, handing it to the next pipeline class to be executed (see the sketch of results after this list)
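
    For reference, the results argument that item_completed receives is a list of (ok, file_info) tuples; on success, file_info is a dict with 'url', 'path' (relative to IMAGES_STORE), and 'checksum'. A minimal sketch of inspecting it (the class name InspectingPipeline is hypothetical):

    from scrapy.pipelines.images import ImagesPipeline

    class InspectingPipeline(ImagesPipeline):
        def item_completed(self, results, item, info):
            # collect the stored paths of all successful downloads
            image_paths = [file_info['path'] for ok, file_info in results if ok]
            if not image_paths:
                print('download failed:', item.get('name'))
            return item  # hand the item to the next pipeline class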

    Add IMAGES_STORE to the settings file:

    • IMAGES_STORE = 'dirname'
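
    Note that ImagesPipeline depends on Pillow (pip install Pillow). If custom file names are not needed, no subclass is required at all: enable the stock pipeline and use its conventional image_urls/images item fields. A minimal sketch:

    # settings.py -- enable the stock pipeline
    ITEM_PIPELINES = {'scrapy.pipelines.images.ImagesPipeline': 1}
    IMAGES_STORE = 'imgLibs'

    # items.py -- image_urls/images are the field names the stock pipeline expects
    import scrapy

    class ImgproItem(scrapy.Item):
        image_urls = scrapy.Field()  # list of URLs to download
        images = scrapy.Field()      # filled in with the download results

    The spider then assigns item['image_urls'] = [img_src], and the pipeline stores the files under checksum-based names in a full/ subfolder.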


    img.py (the spider source file)

    import scrapy
    from imgPro.items import ImgproItem
    
    class ImgSpider(scrapy.Spider):
        name = 'img'
        #allowed_domains = ['www.xxx.com']
        start_urls = ['http://www.521609.com/daxuemeinv/']
    
        def parse(self, response):
            li_list = response.xpath('//*[@id="content"]/div[2]/div[2]/ul/li')
            for li in li_list:
                img_src = 'http://www.521609.com' + li.xpath('./a[1]/img/@src').extract_first()  # image URL
                img_name = li.xpath('./a[1]/img/@alt').extract_first() + '.jpg'  # image file name
    
                item = ImgproItem()
                item['name'] = img_name
                item['src'] = img_src
    
                yield item
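
    One caveat in parse: extract_first() returns None when an XPath matches nothing, so the string concatenations above raise a TypeError on unexpected markup. A defensive variant of the loop body (an addition for robustness, not in the original post):

    for li in li_list:
        src = li.xpath('./a[1]/img/@src').extract_first()
        alt = li.xpath('./a[1]/img/@alt').extract_first()
        if not src or not alt:
            continue  # malformed entry: nothing to download
        item = ImgproItem()
        item['name'] = alt + '.jpg'
        item['src'] = 'http://www.521609.com' + src
        yield item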

    items.py

    # Define here the models for your scraped items
    #
    # See documentation in:
    # https://docs.scrapy.org/en/latest/topics/items.html
    
    import scrapy
    
    
    class ImgproItem(scrapy.Item):
        # define the fields for your item here like:
        # name = scrapy.Field()
        name = scrapy.Field()
        src = scrapy.Field()
    
    

    settings.py

    # Scrapy settings for imgPro project
    #
    # For simplicity, this file contains only settings considered important or
    # commonly used. You can find more settings consulting the documentation:
    #
    #     https://docs.scrapy.org/en/latest/topics/settings.html
    #     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
    #     https://docs.scrapy.org/en/latest/topics/spider-middleware.html
    
    BOT_NAME = 'imgPro'
    
    SPIDER_MODULES = ['imgPro.spiders']
    NEWSPIDER_MODULE = 'imgPro.spiders'
    
    
    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36"
    
    # Obey robots.txt rules
    ROBOTSTXT_OBEY = False
    LOG_LEVEL = "ERROR"
    
    IMAGES_STORE = 'imgLibs' # storage directory for downloaded images (created automatically if it does not exist)
    # Configure maximum concurrent requests performed by Scrapy (default: 16)
    #CONCURRENT_REQUESTS = 32
    
    # Configure a delay for requests for the same website (default: 0)
    # See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
    # See also autothrottle settings and docs
    #DOWNLOAD_DELAY = 3
    # The download delay setting will honor only one of:
    #CONCURRENT_REQUESTS_PER_DOMAIN = 16
    #CONCURRENT_REQUESTS_PER_IP = 16
    
    # Disable cookies (enabled by default)
    #COOKIES_ENABLED = False
    
    # Disable Telnet Console (enabled by default)
    #TELNETCONSOLE_ENABLED = False
    
    # Override the default request headers:
    #DEFAULT_REQUEST_HEADERS = {
    #   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    #   'Accept-Language': 'en',
    #}
    
    # Enable or disable spider middlewares
    # See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
    #SPIDER_MIDDLEWARES = {
    #    'imgPro.middlewares.ImgproSpiderMiddleware': 543,
    #}
    
    # Enable or disable downloader middlewares
    # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
    #DOWNLOADER_MIDDLEWARES = {
    #    'imgPro.middlewares.ImgproDownloaderMiddleware': 543,
    #}
    
    # Enable or disable extensions
    # See https://docs.scrapy.org/en/latest/topics/extensions.html
    #EXTENSIONS = {
    #    'scrapy.extensions.telnet.TelnetConsole': None,
    #}
    
    # Configure item pipelines
    # See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
    ITEM_PIPELINES = {
       'imgPro.pipelines.ImgsproPipeline': 300,
    }
    
    # Enable and configure the AutoThrottle extension (disabled by default)
    # See https://docs.scrapy.org/en/latest/topics/autothrottle.html
    #AUTOTHROTTLE_ENABLED = True
    # The initial download delay
    #AUTOTHROTTLE_START_DELAY = 5
    # The maximum download delay to be set in case of high latencies
    #AUTOTHROTTLE_MAX_DELAY = 60
    # The average number of requests Scrapy should be sending in parallel to
    # each remote server
    #AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
    # Enable showing throttling stats for every response received:
    #AUTOTHROTTLE_DEBUG = False
    
    # Enable and configure HTTP caching (disabled by default)
    # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
    #HTTPCACHE_ENABLED = True
    #HTTPCACHE_EXPIRATION_SECS = 0
    #HTTPCACHE_DIR = 'httpcache'
    #HTTPCACHE_IGNORE_HTTP_CODES = []
    #HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
    
    

    pipelines.py

    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
    
    
    # useful for handling different item types with a single interface
    # from itemadapter import ItemAdapter
    
    # The default pipeline cannot request data for us, so we do not use it
    # class ImgproPipeline:
    #     def process_item(self, item, spider):
    #         return item
    
    # The pipeline receives the image URL and name from the item, requests the image data inside the pipeline, and persists it to disk
    from scrapy.pipelines.images import ImagesPipeline # this pipeline provides data-download support (images, video, and audio can all use this class)
    import scrapy
    class ImgsproPipeline(ImagesPipeline):
        # issue a request for the image URL
        def get_media_requests(self, item, info):
            print(item)  # debug: show the item being processed
            yield scrapy.Request(url=item['src'], meta={'item': item})
    
        # just return the image file name
        def file_path(self, request, response=None, info=None):
            # retrieve the item via the request's meta
            item = request.meta['item']
            filePath = item['name']
            return filePath  # return the image file name
    
        # pass the item on to the next pipeline class to be executed
        def item_completed(self, results, item, info):
            return item
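
    Passing the item through meta works, but newer Scrapy releases (2.4 and later) also hand file_path the item directly, which avoids the meta round-trip; a sketch under that version assumption:

    from scrapy.pipelines.images import ImagesPipeline
    import scrapy

    class ImgsproPipeline(ImagesPipeline):
        def get_media_requests(self, item, info):
            yield scrapy.Request(url=item['src'])

        def file_path(self, request, response=None, info=None, *, item=None):
            return item['name']  # no meta round-trip needed

        def item_completed(self, results, item, info):
            return item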
    
    

    Results

    (screenshots: the downloaded images stored in the imgLibs folder)

    An overview of Scrapy's settings.py configuration file

    # -*- coding: utf-8 -*-
    
    
    # Scrapy settings for demo1 project
    #
    # For simplicity, this file contains only settings considered important or
    # commonly used. You can find more settings consulting the documentation:
    #
    #     http://doc.scrapy.org/en/latest/topics/settings.html
    #     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
    #     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
    
    
    BOT_NAME = ''   # name of the Scrapy project; used to build the default User-Agent and for logging; set automatically when the project is created with startproject
    
    
    SPIDER_MODULES = ['']   # list of modules where Scrapy looks for spiders; default: ['xxx.spiders']
    NEWSPIDER_MODULE = ''   # module in which the genspider command creates new spiders; default: 'xxx.spiders'
    
    
    
    
    # default User-Agent used while crawling, unless overridden
    #USER_AGENT = ''
    
    # if enabled, Scrapy obeys robots.txt rules
    ROBOTSTXT_OBEY = True
    
    
    # maximum number of concurrent requests performed by the Scrapy downloader; default: 16
    #CONCURRENT_REQUESTS = 32
    
    
    # configure a delay for requests to the same website (default: 0)
    # See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
    # See also autothrottle settings and docs
    #DOWNLOAD_DELAY = 3   # time the downloader waits before fetching the next page of the same site; use it to limit crawl speed and ease server load; fractional values such as 0.25 are supported, in seconds
    
    
    # only one of the two download-delay limits below takes effect
    #CONCURRENT_REQUESTS_PER_DOMAIN = 16   # maximum number of concurrent requests to a single domain
    #CONCURRENT_REQUESTS_PER_IP = 16       # maximum number of concurrent requests to a single IP; if non-zero, CONCURRENT_REQUESTS_PER_DOMAIN is ignored and the limit applies per IP rather than per domain; this also affects DOWNLOAD_DELAY: if non-zero, the delay is enforced per IP instead of per site
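    
    # As a concrete illustration (not from the original post), a conservative
    # combination of the settings above might look like:
    #DOWNLOAD_DELAY = 0.5                 # half a second between requests to one site
    #CONCURRENT_REQUESTS_PER_DOMAIN = 8   # per-domain concurrency cap
    #AUTOTHROTTLE_ENABLED = True          # adapt the delay to observed latency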
    
    
    # disable cookies (enabled by default)
    #COOKIES_ENABLED = False
    
    
    # disable the Telnet console (enabled by default)
    #TELNETCONSOLE_ENABLED = False 
    
    
    # override the default request headers:
    #DEFAULT_REQUEST_HEADERS = {
    #   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    #   'Accept-Language': 'en',
    #}
    
    
    # enable or disable spider middlewares
    # See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
    #SPIDER_MIDDLEWARES = {
    #    'demo1.middlewares.Demo1SpiderMiddleware': 543,
    #}
    
    
    # enable or disable downloader middlewares
    # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
    #DOWNLOADER_MIDDLEWARES = {
    #    'demo1.middlewares.MyCustomDownloaderMiddleware': 543,
    #}
    
    
    # enable or disable extensions
    # See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
    #EXTENSIONS = {
    #    'scrapy.extensions.telnet.TelnetConsole': None,
    #}
    
    
    # configure item pipelines
    # See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
    #ITEM_PIPELINES = {
    #    'demo1.pipelines.Demo1Pipeline': 300,
    #}
    
    
    # enable and configure the AutoThrottle extension (disabled by default)
    # See http://doc.scrapy.org/en/latest/topics/autothrottle.html
    #AUTOTHROTTLE_ENABLED = True
    
    
    # initial download delay
    #AUTOTHROTTLE_START_DELAY = 5
    
    
    # maximum download delay to set in case of high latencies
    #AUTOTHROTTLE_MAX_DELAY = 60
    
    
    
    
    # average number of requests Scrapy should send in parallel to each remote server
    #AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
    
    
    # enable showing throttling stats for every response received:
    #AUTOTHROTTLE_DEBUG = False
    
    
    # enable and configure HTTP caching (disabled by default)
    # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
    #HTTPCACHE_ENABLED = True
    #HTTPCACHE_EXPIRATION_SECS = 0
    #HTTPCACHE_DIR = 'httpcache'
    #HTTPCACHE_IGNORE_HTTP_CODES = []
    #HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
    