  • Scrapy pipelines in Python

    1. Key points

    """"
    pipelines使用:
        1、在spiders里面使用yield生成器
            list_li = response.xpath("//div[@class='swiper-wrapper']//li")
            #print(list_li)
            for li in list_li:
                #print(li.extract_first())
                item = {}
                item["name"] = li.xpath(".//h3/text()").extract_first()
                item["content"] = li.xpath(".//p[@class='teacherBrief']/text()").extract_first()
                #item["content"] = li.xpath(".//p[@class='teacherIntroduction']/text()").extract_first()
                #print(item)
                yield item  # pass the data to the pipelines
                
        2. Print the item in the pipelines
                class MyspiderPipeline(object):
                    """
                    The first pipeline; the method name process_item cannot be changed
                    """
                    def process_item(self, item, spider):
                        item["hello"] = "world"
                        print(item)
                        return item
                
                class MyspiderPipeline1(object):
                    """
                    The second pipeline
                    """
                    def process_item(self, item, spider):
                        print(item)
                        return item
    
        
        3. Register the pipelines in the settings file
            ITEM_PIPELINES = {
                # pipelines run in ascending order of their value: 300 first, then 301
                'myspider.pipelines.MyspiderPipeline': 300,
                'myspider.pipelines.MyspiderPipeline1': 301,
            }
    """

    2. In spider.py, the data is handed to the pipelines via

    yield item  # pass the data to the item in pipelines.py

    Code of JulyeduSpider.py:
    # -*- coding: utf-8 -*-
    import scrapy
    import logging
    
    logger = logging.getLogger(__name__)
    class JulyeduSpider(scrapy.Spider):
        name = 'julyedu'
        allowed_domains = ['julyedu.com']
        start_urls = ['http://julyedu.com/']
        # the method name parse cannot be changed
        def parse(self, response):
            """
            Crawl the list of instructors on julyedu.com
            :param response:
            :return:
            """
            list_li = response.xpath("//div[@class='swiper-wrapper']//li")
            #print(list_li)
            for li in list_li:
                # create a fresh dict for each instructor; reusing one dict
                # would make every yielded item reference the same object
                item = {}
                item["name"] = li.xpath(".//h3/text()").extract_first()
                item["content"] = li.xpath(".//p[@class='teacherBrief']/text()").extract_first()
                #item["content"] = li.xpath(".//p[@class='teacherIntroduction']/text()").extract_first()
                #print(item)
                # pass the data to the pipelines; yield accepts only the four
                # types Request, BaseItem, dict, and None
                logger.warning(item)  # log the item
                yield item
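
    Since yield also accepts BaseItem, the plain dict above could be swapped for a
    scrapy.Item subclass, which rejects misspelled field names. A hypothetical sketch
    (the class name TeacherItem is illustrative):

    import scrapy

    class TeacherItem(scrapy.Item):
        # declared fields; assigning to an undeclared key raises KeyError
        name = scrapy.Field()
        content = scrapy.Field()

    In parse, item = TeacherItem() would then replace item = {} with no other changes.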

    3. Modify pipelines.py to operate on the items it receives

    # -*- coding: utf-8 -*-
    
    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
    
    class MyspiderPipeline(object):
        """
        The first pipeline; the method name process_item cannot be changed
        """
        def process_item(self, item, spider):
            """
            Handle the data differently for each spider
            :param item: the value passed in from the spider
            :param spider: the spider instance that yielded the item
            :return:
            """
        if spider.name == "julyedu":
            # items from the julyedu spider could get special handling here
            #print(item)
            return item
        else:
            return item
    
    class MyspiderPipeline1(object):
        """
        The second pipeline
        """
        def process_item(self, item, spider):
            #print(item)
            return item
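
    process_item must either return the item, which hands it to the next pipeline, or
    raise DropItem to discard it. A minimal sketch of a filtering pipeline (the class
    name and the rule are illustrative) that drops instructors with an empty description:

    from scrapy.exceptions import DropItem

    class FilterEmptyPipeline(object):
        """Drop items whose 'content' field is missing or empty."""
        def process_item(self, item, spider):
            if not item.get("content"):
                # a dropped item never reaches the pipelines after this one
                raise DropItem("missing content for %r" % item.get("name"))
            return item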

    4. Add the pipelines configuration to settings.py

    # -*- coding: utf-8 -*-
    
    # Scrapy settings for myspider project
    #
    # For simplicity, this file contains only settings considered important or
    # commonly used. You can find more settings consulting the documentation:
    #
    #     https://doc.scrapy.org/en/latest/topics/settings.html
    #     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
    #     https://doc.scrapy.org/en/latest/topics/spider-middleware.html
    
    BOT_NAME = 'myspider'
    
    SPIDER_MODULES = ['myspider.spiders']
    NEWSPIDER_MODULE = 'myspider.spiders'
    
    LOG_LEVEL = 'WARNING'  # only log messages at WARNING level and above
    LOG_FILE = './log.log'  # write the log to a file instead of the console
    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    #USER_AGENT = 'myspider (+http://www.yourdomain.com)'
    
    # Obey robots.txt rules
    ROBOTSTXT_OBEY = True
    
    # Configure maximum concurrent requests performed by Scrapy (default: 16)
    #CONCURRENT_REQUESTS = 32
    
    # Configure a delay for requests for the same website (default: 0)
    # See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
    # See also autothrottle settings and docs
    #DOWNLOAD_DELAY = 3
    # The download delay setting will honor only one of:
    #CONCURRENT_REQUESTS_PER_DOMAIN = 16
    #CONCURRENT_REQUESTS_PER_IP = 16
    
    # Disable cookies (enabled by default)
    #COOKIES_ENABLED = False
    
    # Disable Telnet Console (enabled by default)
    #TELNETCONSOLE_ENABLED = False
    
    # Override the default request headers:
    #DEFAULT_REQUEST_HEADERS = {
    #   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    #   'Accept-Language': 'en',
    #}
    
    # Enable or disable spider middlewares
    # See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
    #SPIDER_MIDDLEWARES = {
    #    'myspider.middlewares.MyspiderSpiderMiddleware': 543,
    #}
    
    # Enable or disable downloader middlewares
    # See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
    #DOWNLOADER_MIDDLEWARES = {
    #    'myspider.middlewares.MyspiderDownloaderMiddleware': 543,
    #}
    
    # Enable or disable extensions
    # See https://doc.scrapy.org/en/latest/topics/extensions.html
    #EXTENSIONS = {
    #    'scrapy.extensions.telnet.TelnetConsole': None,
    #}
    
    # Configure item pipelines
    # See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
    ITEM_PIPELINES = {
        # pipelines run in ascending order of their value: 300 first, then 301
        'myspider.pipelines.MyspiderPipeline': 300,
        'myspider.pipelines.MyspiderPipeline1': 301,
    }
    
    # Enable and configure the AutoThrottle extension (disabled by default)
    # See https://doc.scrapy.org/en/latest/topics/autothrottle.html
    #AUTOTHROTTLE_ENABLED = True
    # The initial download delay
    #AUTOTHROTTLE_START_DELAY = 5
    # The maximum download delay to be set in case of high latencies
    #AUTOTHROTTLE_MAX_DELAY = 60
    # The average number of requests Scrapy should be sending in parallel to
    # each remote server
    #AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
    # Enable showing throttling stats for every response received:
    #AUTOTHROTTLE_DEBUG = False
    
    # Enable and configure HTTP caching (disabled by default)
    # See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
    #HTTPCACHE_ENABLED = True
    #HTTPCACHE_EXPIRATION_SECS = 0
    #HTTPCACHE_DIR = 'httpcache'
    #HTTPCACHE_IGNORE_HTTP_CODES = []
    #HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
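
    ITEM_PIPELINES in settings.py applies project-wide. If a single spider needs its own
    pipeline set, Scrapy also honors a custom_settings class attribute on the spider,
    which overrides the project settings for that spider only; a minimal sketch:

    import scrapy

    class JulyeduSpider(scrapy.Spider):
        name = 'julyedu'
        # overrides the project-wide ITEM_PIPELINES for this spider only
        custom_settings = {
            'ITEM_PIPELINES': {
                'myspider.pipelines.MyspiderPipeline': 300,
            },
        }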