CrawlSpider full-site + depth crawl of the Sunshine Hotline (阳光热线) site

    - Image lazy loading
    - Lazy-loaded pages put the real URL in a pseudo attribute on the img tag (e.g. src2 instead of src), so data capture must be based on that pseudo attribute!!!
    - ImagesPipeline: a pipeline class dedicated to downloading binary data and persisting it (a minimal sketch follows these notes)
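    A minimal ImagesPipeline sketch (not from the original post), assuming the spider stored the URL read from the pseudo attribute in item['src']; the class and field names here are hypothetical:

    import scrapy
    from scrapy.pipelines.images import ImagesPipeline


    class ImgDownloadPipeline(ImagesPipeline):
        def get_media_requests(self, item, info):
            # request the binary image data; item['src'] came from the pseudo attribute
            yield scrapy.Request(item['src'])

        def file_path(self, request, response=None, info=None, *, item=None):
            # store the file under the last segment of its URL
            return request.url.split('/')[-1]

        def item_completed(self, results, item, info):
            # hand the item on to the next pipeline, if any
            return item

    settings.py would also need something like IMAGES_STORE = './imgs' and this pipeline registered in ITEM_PIPELINES.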

    - CrawlSpider
    - A technique for crawling a whole site's data with scrapy.
    - CrawlSpider is a subclass of Spider
    - Link extractor: LinkExtractor
    - Rule parser: Rule
    - Workflow:
    - Create a new project
    - cd into the project
    - Create the spider file: scrapy genspider -t crawl spiderName www.xxx.com
    1. Create the project: scrapy startproject projectName

    2. Enter the project: cd projectName

    3. Create the spider: scrapy genspider -t crawl sun www.xxx.com

    4. In settings.py: fake the UA (USER_AGENT), set LOG_LEVEL, and set ROBOTSTXT_OBEY = False

    5. In the spider, set the extraction regex: allow=r'type=4&page=\d+'

    6. Run the spider: scrapy crawl sun

    7. follow=True/False controls whether every page is crawled (see the sketch after these steps)

    8. Note: xpath expressions must not contain tbody

    9. Parse the detail pages

    10. In items.py define fields for the list page (title + number) and for the detail-page content

    11. In the spider, import: from sunCrawlPro1.items import Suncrawlpro1Item, Detail_item

    12. Instantiate the items and fill in the number, title, and content

    13. Handle the different item classes in pipelines.py

    14. In the pipeline, match the two item types by num (see the sketch after pipelines.py)

    15. Enable the pipeline

    16. Run the spider: scrapy crawl sun
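    A quick illustration of step 7 (my own sketch, not from the original post): with follow=True the LinkExtractor is applied again to every page it reaches, so pagination links found on page 2, 3, ... are crawled as well; with follow=False only the links found on the start_urls responses are requested.

    from scrapy.linkextractors import LinkExtractor
    from scrapy.spiders import Rule

    # the same page-number extractor used in sun.py below
    link = LinkExtractor(allow=r'type=4&page=\d+')

    # follow=True: re-apply the extractor to every page it discovers (full pagination)
    rule_all_pages = Rule(link, callback='parse_item', follow=True)

    # follow=False: stop after the links found on the start_urls responses
    rule_first_page = Rule(link, callback='parse_item', follow=False)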

    sun.py

    # -*- coding: utf-8 -*-
    import scrapy
    from scrapy.linkextractors import LinkExtractor
    from scrapy.spiders import CrawlSpider, Rule
    from sunCrawlPro1.items import Suncrawlpro1Item, Detail_item


    class SunSpider(CrawlSpider):
        name = 'sun'
        # allowed_domains = ['www.xxx.com']
        start_urls = ['http://wz.sun0769.com/index.php/question/questionType?type=4&page=']

        # Instantiate a link extractor object.
        # Purpose: extract the links that match the given rule (allow='regular expression').
        # Extracts the page-number links.
        link = LinkExtractor(allow=r'type=4&page=\d+')
        # Detail-page links:
        # link_detail = LinkExtractor(allow=r'question/202003/445164.shtml')  # original sample URL
        link_detail = LinkExtractor(allow=r'question/\d+/\d+\.shtml')  # generalized regex; the dot is escaped with a backslash
        rules = (
            # Pass link as the first argument of the Rule constructor.
            # Purpose: send requests to the extracted links and parse the responses with the given callback.
            Rule(link, callback='parse_item', follow=False),
            # follow=True: keep applying the link extractor to the pages reached through the extracted links.
            Rule(link_detail, callback='parse_detail'),
        )

        def parse_item(self, response):
            # Parse the list (page-number) pages.
            tr_list = response.xpath('//*[@id="morelist"]/div/table[2]//tr/td/table//tr')
            # print(len(tr_list))
            for tr in tr_list:
                title = tr.xpath('./td[2]/a[2]/text()').extract_first()
                num = tr.xpath('./td[1]/text()').extract_first()
                # Instantiate the item:
                item = Suncrawlpro1Item()
                item['title'] = title
                item['num'] = num
                # Submit to the pipeline:
                yield item

        def parse_detail(self, response):
            # Parse the detail page:
            content = response.xpath('/html/body/div[9]/table[2]//tr[1]/td/text()').extract_first()  # content
            num = response.xpath('/html/body/div[9]/table[1]//tr/td[2]/span[2]/text()').extract_first()  # number
            num = num.split(':')[-1]
            item = Detail_item()
            item['content'] = content
            item['num'] = num
            yield item

    items.py

    # -*- coding: utf-8 -*-

    # Define here the models for your scraped items
    #
    # See documentation in:
    # https://docs.scrapy.org/en/latest/topics/items.html

    import scrapy


    class Suncrawlpro1Item(scrapy.Item):
        # define the fields for your item here like:
        title = scrapy.Field()
        num = scrapy.Field()


    class Detail_item(scrapy.Item):
        # define the fields for your item here like:
        content = scrapy.Field()
        num = scrapy.Field()

    pipelines.py

    # -*- coding: utf-8 -*-

    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html


    class Suncrawlpro1Pipeline(object):
        def process_item(self, item, spider):
            if item.__class__.__name__ == 'Detail_item':
                # handle each item class differently:
                content = item['content']
                num = item['num']
                print(item)
            else:
                title = item['title']
                num = item['num']
                print(item)
            return item
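    Step 14 talks about matching the two item types by num, while the pipeline above only prints them. A hedged sketch of how that join could look (my own code, hypothetical class name, would need to be registered in ITEM_PIPELINES):

    class MergeByNumPipeline(object):
        def __init__(self):
            self.titles = {}    # num -> title, waiting for the matching detail
            self.contents = {}  # num -> content, waiting for the matching title

        def process_item(self, item, spider):
            num = item['num']
            if item.__class__.__name__ == 'Detail_item':
                self.contents[num] = item['content']
            else:
                self.titles[num] = item['title']
            # once both halves for this num have arrived they can be persisted together
            if num in self.titles and num in self.contents:
                print(num, self.titles[num], self.contents[num])
            return item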

    settings.py

    # -*- coding: utf-8 -*-

    # Scrapy settings for sunCrawlPro1 project
    #
    # For simplicity, this file contains only settings considered important or
    # commonly used. You can find more settings consulting the documentation:
    #
    # https://docs.scrapy.org/en/latest/topics/settings.html
    # https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
    # https://docs.scrapy.org/en/latest/topics/spider-middleware.html

    BOT_NAME = 'sunCrawlPro1'

    SPIDER_MODULES = ['sunCrawlPro1.spiders']
    NEWSPIDER_MODULE = 'sunCrawlPro1.spiders'

    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    # USER_AGENT = 'sunCrawlPro1 (+http://www.yourdomain.com)'

    # Obey robots.txt rules
    ROBOTSTXT_OBEY = False
    LOG_LEVEL = 'ERROR'

    USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36'

    # Configure maximum concurrent requests performed by Scrapy (default: 16)
    # CONCURRENT_REQUESTS = 32

    # Configure a delay for requests for the same website (default: 0)
    # See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
    # See also autothrottle settings and docs
    # DOWNLOAD_DELAY = 3
    # The download delay setting will honor only one of:
    # CONCURRENT_REQUESTS_PER_DOMAIN = 16
    # CONCURRENT_REQUESTS_PER_IP = 16

    # Disable cookies (enabled by default)
    # COOKIES_ENABLED = False

    # Disable Telnet Console (enabled by default)
    # TELNETCONSOLE_ENABLED = False

    # Override the default request headers:
    # DEFAULT_REQUEST_HEADERS = {
    # 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    # 'Accept-Language': 'en',
    # }

    # Enable or disable spider middlewares
    # See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
    # SPIDER_MIDDLEWARES = {
    # 'sunCrawlPro1.middlewares.Suncrawlpro1SpiderMiddleware': 543,
    # }

    # Enable or disable downloader middlewares
    # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
    # DOWNLOADER_MIDDLEWARES = {
    # 'sunCrawlPro1.middlewares.Suncrawlpro1DownloaderMiddleware': 543,
    # }

    # Enable or disable extensions
    # See https://docs.scrapy.org/en/latest/topics/extensions.html
    # EXTENSIONS = {
    # 'scrapy.extensions.telnet.TelnetConsole': None,
    # }

    # Configure item pipelines
    # See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
    ITEM_PIPELINES = {
        'sunCrawlPro1.pipelines.Suncrawlpro1Pipeline': 300,
    }

    # Enable and configure the AutoThrottle extension (disabled by default)
    # See https://docs.scrapy.org/en/latest/topics/autothrottle.html
    # AUTOTHROTTLE_ENABLED = True
    # The initial download delay
    # AUTOTHROTTLE_START_DELAY = 5
    # The maximum download delay to be set in case of high latencies
    # AUTOTHROTTLE_MAX_DELAY = 60
    # The average number of requests Scrapy should be sending in parallel to
    # each remote server
    # AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
    # Enable showing throttling stats for every response received:
    # AUTOTHROTTLE_DEBUG = False

    # Enable and configure HTTP caching (disabled by default)
    # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
    # HTTPCACHE_ENABLED = True
    # HTTPCACHE_EXPIRATION_SECS = 0
    # HTTPCACHE_DIR = 'httpcache'
    # HTTPCACHE_IGNORE_HTTP_CODES = []
    # HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'