  • 16 - Scrapy: manual request sending to crawl an entire site (03)

    Full-site data crawling with Scrapy's manual request sending

    • yield scrapy.Request(url, callback) sends a GET request
      • callback specifies the parse function used to parse the response data
    • yield scrapy.FormRequest(url, callback, formdata) sends a POST request (a sketch follows the code blocks below)
      • formdata: a dict holding the request parameters
    • Why are the URLs in the start_urls list automatically sent as GET requests?
      • Because the GET requests for those URLs are issued by the parent-class method start_requests
    # Parent-class method: a simplified version of its original implementation
    def start_requests(self):
        for u in self.start_urls:
            yield scrapy.Request(url=u, callback=self.parse)
    
    • How do you make the URLs in start_urls be sent as POST requests by default?
    # Override the parent method so POST requests are sent by default
    def start_requests(self):
        for u in self.start_urls:
            yield scrapy.FormRequest(url=u, callback=self.parse)
    
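    Below is a minimal sketch of a manual POST request carrying form data. The endpoint URL and the formdata keys are hypothetical placeholders, not part of the original example:

    import scrapy

    class PostDemoSpider(scrapy.Spider):
        name = 'post_demo'
        # Hypothetical endpoint that expects a POST request
        start_urls = ['https://www.example.com/search']

        def start_requests(self):
            for u in self.start_urls:
                # formdata carries the POST parameters as a dict (placeholder values)
                yield scrapy.FormRequest(url=u, callback=self.parse,
                                         formdata={'kw': 'scrapy'})

        def parse(self, response):
            # The raw body of the POST response
            print(response.text)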

    Getting started

    Create a scrapy project: scrapy startproject proName
    Enter the project directory and create a spider source file: scrapy genspider spiderName www.xxx.com
    Run the project: scrapy crawl spiderName
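    For the walkthrough below (project gpc, spider una), the concrete sequence works out to:

    scrapy startproject gpc
    cd gpc
    scrapy genspider una www.xxx.com
    scrapy crawl una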

    Configure the pipelines.py file

    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
    
    
    # useful for handling different item types with a single interface
    from itemadapter import ItemAdapter
    
    
    class GpcPipeline:
        def process_item(self, item, spider):
            # For this walkthrough, just print each item the spider yields
            print(item)
            return item
    
    
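    The pipeline above only prints each item. As an assumed variant (the output filename duanzi.txt is made up, not from the original post), a pipeline that persists items to a local text file could use Scrapy's open_spider/close_spider hooks:

    class GpcFilePipeline:
        fp = None

        def open_spider(self, spider):
            # Called once when the spider starts: open the output file
            self.fp = open('./duanzi.txt', 'w', encoding='utf-8')

        def process_item(self, item, spider):
            # Write one line per item, then pass the item along the pipeline chain
            self.fp.write('%s:%s\n' % (item['title'], item['content']))
            return item

        def close_spider(self, spider):
            # Called once when the spider closes: release the file handle
            self.fp.close()

    To activate it, the class would also need its own entry in ITEM_PIPELINES in settings.py.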

    Configure the items.py file

    # Define here the models for your scraped items
    #
    # See documentation in:
    # https://docs.scrapy.org/en/latest/topics/items.html
    
    import scrapy
    
    
    class GpcItem(scrapy.Item):
        # define the fields for your item here like:
        # name = scrapy.Field()
        title = scrapy.Field()
        content = scrapy.Field()
    
    
    
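    One detail worth noting about scrapy.Item: fields are read and written dict-style, by key; attribute access is not supported.

    item = GpcItem()
    item['title'] = 'some title'   # correct: dict-style assignment
    print(item['title'])           # 'some title'
    # item.title = 'some title'    # raises AttributeError: use item['title'] = ... instead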

    Configure the settings.py file

    # Scrapy settings for gpc project
    #
    # For simplicity, this file contains only settings considered important or
    # commonly used. You can find more settings consulting the documentation:
    #
    #     https://docs.scrapy.org/en/latest/topics/settings.html
    #     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
    #     https://docs.scrapy.org/en/latest/topics/spider-middleware.html
    
    BOT_NAME = 'gpc'
    
    SPIDER_MODULES = ['gpc.spiders']
    NEWSPIDER_MODULE = 'gpc.spiders'
    
    
    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    # Spoof the User-Agent
    USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36'
    LOG_LEVEL = 'ERROR'  # restrict log output to errors only
    
    # Obey robots.txt rules
    ROBOTSTXT_OBEY = False
    
    # Configure maximum concurrent requests performed by Scrapy (default: 16)
    #CONCURRENT_REQUESTS = 32
    
    # Configure a delay for requests for the same website (default: 0)
    # See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
    # See also autothrottle settings and docs
    #DOWNLOAD_DELAY = 3
    # The download delay setting will honor only one of:
    #CONCURRENT_REQUESTS_PER_DOMAIN = 16
    #CONCURRENT_REQUESTS_PER_IP = 16
    
    # Disable cookies (enabled by default)
    #COOKIES_ENABLED = False
    
    # Disable Telnet Console (enabled by default)
    #TELNETCONSOLE_ENABLED = False
    
    # Override the default request headers:
    #DEFAULT_REQUEST_HEADERS = {
    #   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    #   'Accept-Language': 'en',
    #}
    
    # Enable or disable spider middlewares
    # See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
    #SPIDER_MIDDLEWARES = {
    #    'gpc.middlewares.GpcSpiderMiddleware': 543,
    #}
    
    # Enable or disable downloader middlewares
    # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
    #DOWNLOADER_MIDDLEWARES = {
    #    'gpc.middlewares.GpcDownloaderMiddleware': 543,
    #}
    
    # Enable or disable extensions
    # See https://docs.scrapy.org/en/latest/topics/extensions.html
    #EXTENSIONS = {
    #    'scrapy.extensions.telnet.TelnetConsole': None,
    #}
    
    # Configure item pipelines
    # See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
    ITEM_PIPELINES = {
       'gpc.pipelines.GpcPipeline': 300,
    }
    
    # Enable and configure the AutoThrottle extension (disabled by default)
    # See https://docs.scrapy.org/en/latest/topics/autothrottle.html
    #AUTOTHROTTLE_ENABLED = True
    # The initial download delay
    #AUTOTHROTTLE_START_DELAY = 5
    # The maximum download delay to be set in case of high latencies
    #AUTOTHROTTLE_MAX_DELAY = 60
    # The average number of requests Scrapy should be sending in parallel to
    # each remote server
    #AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
    # Enable showing throttling stats for every response received:
    #AUTOTHROTTLE_DEBUG = False
    
    # Enable and configure HTTP caching (disabled by default)
    # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
    #HTTPCACHE_ENABLED = True
    #HTTPCACHE_EXPIRATION_SECS = 0
    #HTTPCACHE_DIR = 'httpcache'
    #HTTPCACHE_IGNORE_HTTP_CODES = []
    #HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
    
    
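    Stripped of the commented-out defaults, the settings this walkthrough actually touches boil down to the spoofed UA, the log level, the robots.txt toggle, and the pipeline registration:

    USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36'
    LOG_LEVEL = 'ERROR'
    ROBOTSTXT_OBEY = False
    ITEM_PIPELINES = {
        'gpc.pipelines.GpcPipeline': 300,  # lower number = higher pipeline priority
    }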

    Spider source file una.py

    import scrapy
    from gpc.items import GpcItem
    
    class UnaSpider(scrapy.Spider):
        name = 'una'
        #allowed_domains = ['www.baidu.com']
        start_urls = ['https://duanziwang.com/category/经典段子/1/']
    
        # Generic URL template for pages beyond the first
        url = 'https://duanziwang.com/category/经典段子/%d/'
        page_num = 2
    
        # Crawl the data for every page of the duanzi site
        def parse(self, response):
            # Parse the title and content of each entry
            article_list = response.xpath('/html/body/section/div/div/main/article')
            for article in article_list:
                # Note that what xpath returns here is not plain string data, so usage differs from lxml's etree
                # The list returned by xpath stores Selector objects; the string we want lives in each object's data attribute
                # extract() pulls that data value out
                # extract_first() pulls the data value out of the first Selector in the list
                title = article.xpath("./div[1]/h1/a/text()").extract_first()
                content = article.xpath("./div[2]/p/text()").extract_first()
                # Instantiate an item object and store the parsed data in it
                item = GpcItem()
                # Fields cannot be accessed as item.<attr>; use dict-style access
                item['title'] = title
                item['content'] = content
                yield item
            if self.page_num < 5:  # condition that ends the recursion
                new_url = self.url % self.page_num  # full URL of the next page
                self.page_num += 1
                # Manually send a GET request to the next page's URL
                yield scrapy.Request(url=new_url, callback=self.parse)  # recursive request; the response is handed back to parse
    
    
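    To make the extract() / extract_first() distinction from the comments above concrete, here is a small standalone illustration (the sample markup is made up):

    from scrapy.selector import Selector

    sel = Selector(text='<h1><a>first title</a></h1>')
    titles = sel.xpath('//h1/a/text()')   # a SelectorList of Selector objects
    print(titles.extract())               # ['first title'] -> list of strings
    print(titles.extract_first())         # 'first title'   -> first string, or None if the list is empty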

    Results

    (Screenshot of the crawl output omitted: GpcPipeline prints the title and content of every scraped item from pages 1-4.)
