Introduction to Scrapy commands

    I. Listing all commands

    1. Run scrapy -h outside a project

    (scrapy_env) frange@ubuntu:~/workspace/spider$ scrapy -h
    Scrapy 1.5.1 - no active project
    
    Usage:
    scrapy <command> [options] [args]
    
    Available commands:
    bench Run quick benchmark test
    fetch Fetch a URL using the Scrapy downloader
    genspider Generate new spider using pre-defined templates
    runspider Run a self-contained spider (without creating a project)
    settings Get settings values
    shell Interactive scraping console
    startproject Create new project
    version Print Scrapy version
    view Open URL in browser, as seen by Scrapy
    
    [ more ] More commands available when run from project directory
    
    Use "scrapy <command> -h" to see more info about a command

    2. Run scrapy -h inside a project

    (scrapy_env) frange@ubuntu:~/workspace/spider/spider_lago/spider_lago$ scrapy -h
    Scrapy 1.5.1 - project: spider_lago
    
    Usage:
      scrapy <command> [options] [args]
    
    Available commands:
      bench         Run quick benchmark test
      check         Check spider contracts
      crawl         Run a spider
      edit          Edit spider
      fetch         Fetch a URL using the Scrapy downloader
      genspider     Generate new spider using pre-defined templates
      list          List available spiders
      parse         Parse URL (using its spider) and print the results
      runspider     Run a self-contained spider (without creating a project)
      settings      Get settings values
      shell         Interactive scraping console
      startproject  Create new project
      version       Print Scrapy version
      view          Open URL in browser, as seen by Scrapy
    
    Use "scrapy <command> -h" to see more info about a command

    II. The global commands, one by one

    1. bench

      Runs a quick crawl benchmark to gauge how fast the local hardware and Scrapy setup can crawl. Note that bench crawls a built-in local test site, so the URL argument in the session below should have no real effect.

    (scrapy_env) frange@ubuntu:~/workspace/spider/spider_lago$ scrapy bench http://baidu.com
    2018-08-14 02:12:01 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: spider_lago)
    2018-08-14 02:12:01 [scrapy.utils.log] INFO: Versions: lxml 4.2.3.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.0, w3lib 1.19.0, Twisted 18.7.0, Python 3.5.2 (default, Nov 23 2017, 16:37:01) - [GCC 5.4.0 20160609], pyOpenSSL 18.0.0 (OpenSSL 1.1.0h  27 Mar 2018), cryptography 2.3, Platform Linux-4.15.0-29-generic-x86_64-with-Ubuntu-16.04-xenial
    2018-08-14 02:12:02 [scrapy.crawler] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'spider_lago.spiders', 'LOG_LEVEL': 'INFO', 'BOT_NAME': 'spider_lago', 'LOGSTATS_INTERVAL': 1, 'SPIDER_MODULES': ['spider_lago.spiders'], 'CLOSESPIDER_TIMEOUT': 10}
    2018-08-14 02:12:03 [scrapy.middleware] INFO: Enabled extensions:
    ['scrapy.extensions.telnet.TelnetConsole',
     'scrapy.extensions.corestats.CoreStats',
     'scrapy.extensions.logstats.LogStats',
     'scrapy.extensions.memusage.MemoryUsage',
     'scrapy.extensions.closespider.CloseSpider']
    2018-08-14 02:12:03 [scrapy.middleware] INFO: Enabled downloader middlewares:
    ['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
     'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
     'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
     'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
     'scrapy.downloadermiddlewares.retry.RetryMiddleware',
     'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
     'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
     'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
     'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
     'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
     'scrapy.downloadermiddlewares.stats.DownloaderStats']
    2018-08-14 02:12:03 [scrapy.middleware] INFO: Enabled spider middlewares:
    ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
     'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
     'scrapy.spidermiddlewares.referer.RefererMiddleware',
     'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
     'scrapy.spidermiddlewares.depth.DepthMiddleware']
    2018-08-14 02:12:03 [scrapy.middleware] INFO: Enabled item pipelines:
    []
    2018-08-14 02:12:03 [scrapy.core.engine] INFO: Spider opened
    2018-08-14 02:12:03 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
    2018-08-14 02:12:04 [scrapy.extensions.logstats] INFO: Crawled 53 pages (at 3180 pages/min), scraped 0 items (at 0 items/min)
    2018-08-14 02:12:05 [scrapy.extensions.logstats] INFO: Crawled 117 pages (at 3840 pages/min), scraped 0 items (at 0 items/min)
    2018-08-14 02:12:06 [scrapy.extensions.logstats] INFO: Crawled 173 pages (at 3360 pages/min), scraped 0 items (at 0 items/min)
    2018-08-14 02:12:07 [scrapy.extensions.logstats] INFO: Crawled 229 pages (at 3360 pages/min), scraped 0 items (at 0 items/min)
    2018-08-14 02:12:08 [scrapy.extensions.logstats] INFO: Crawled 269 pages (at 2400 pages/min), scraped 0 items (at 0 items/min)
    2018-08-14 02:12:09 [scrapy.extensions.logstats] INFO: Crawled 325 pages (at 3360 pages/min), scraped 0 items (at 0 items/min)
    2018-08-14 02:12:10 [scrapy.extensions.logstats] INFO: Crawled 365 pages (at 2400 pages/min), scraped 0 items (at 0 items/min)
    2018-08-14 02:12:11 [scrapy.extensions.logstats] INFO: Crawled 373 pages (at 480 pages/min), scraped 0 items (at 0 items/min)
    2018-08-14 02:12:12 [scrapy.extensions.logstats] INFO: Crawled 421 pages (at 2880 pages/min), scraped 0 items (at 0 items/min)
    2018-08-14 02:12:13 [scrapy.extensions.logstats] INFO: Crawled 461 pages (at 2400 pages/min), scraped 0 items (at 0 items/min)
    2018-08-14 02:12:13 [scrapy.core.engine] INFO: Closing spider (closespider_timeout)
    2018-08-14 02:12:14 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 199285,
     'downloader/request_count': 477,
     'downloader/request_method_count/GET': 477,
     'downloader/response_bytes': 1332429,
     'downloader/response_count': 477,
     'downloader/response_status_count/200': 477,
     'finish_reason': 'closespider_timeout',
     'finish_time': datetime.datetime(2018, 8, 14, 9, 12, 14, 485240),
     'log_count/INFO': 17,
     'memusage/max': 53321728,
     'memusage/startup': 53321728,
     'request_depth_max': 17,
     'response_received_count': 477,
     'scheduler/dequeued': 477,
     'scheduler/dequeued/memory': 477,
     'scheduler/enqueued': 9541,
     'scheduler/enqueued/memory': 9541,
     'start_time': datetime.datetime(2018, 8, 14, 9, 12, 3, 671156)}
    2018-08-14 02:12:14 [scrapy.core.engine] INFO: Spider closed (closespider_timeout)

    2. fetch

      Downloads the given URL with the Scrapy downloader and prints the response body to standard output (--nolog suppresses the log lines).

    (scrapy_env) frange@ubuntu:~/workspace/spider$ scrapy fetch --nolog http://baidu.com
    <!DOCTYPE html>
    <!--STATUS OK--><html> <head><meta http-equiv=content-type content=text/html;charset=utf-8><meta http-equiv=X-UA-Compatible content=IE=Edge><meta content=always name=referrer><link rel=stylesheet type=text/css href=http://s1.bdstatic.com/r/www/cache/bdorz/baidu.min.css><title>百度一下,你就知道</title></head> <body link=#0000cc> <div id=wrapper> <div id=head> <div class=head_wrapper> <div class=s_form> <div class=s_form_wrapper> <div id=lg> <img hidefocus=true src=//www.baidu.com/img/bd_logo1.png width=270 height=129> </div> <form id=form name=f action=//www.baidu.com/s class=fm> <input type=hidden name=bdorz_come value=1> <input type=hidden name=ie value=utf-8> <input type=hidden name=f value=8> <input type=hidden name=rsv_bp value=1> <input type=hidden name=rsv_idx value=1> <input type=hidden name=tn value=baidu><span class="bg s_ipt_wr"><input id=kw name=wd class=s_ipt value maxlength=255 autocomplete=off autofocus></span><span class="bg s_btn_wr"><input type=submit id=su value=百度一下 class="bg s_btn"></span> </form> </div> </div> <div id=u1> <a href=http://news.baidu.com name=tj_trnews class=mnav>新闻</a> <a href=http://www.hao123.com name=tj_trhao123 class=mnav>hao123</a> <a href=http://map.baidu.com name=tj_trmap class=mnav>地图</a> <a href=http://v.baidu.com name=tj_trvideo class=mnav>视频</a> <a href=http://tieba.baidu.com name=tj_trtieba class=mnav>贴吧</a> <noscript> <a href=http://www.baidu.com/bdorz/login.gif?login&amp;tpl=mn&amp;u=http%3A%2F%2Fwww.baidu.com%2f%3fbdorz_come%3d1 name=tj_login class=lb>登录</a> </noscript> <script>document.write('<a href="http://www.baidu.com/bdorz/login.gif?login&tpl=mn&u='+ encodeURIComponent(window.location.href+ (window.location.search === "" ? "?" : "&")+ "bdorz_come=1")+ '" name="tj_login" class="lb">登录</a>');</script> <a href=//www.baidu.com/more/ name=tj_briicon class=bri style="display: block;">更多产品</a> </div> </div> </div> <div id=ftCon> <div id=ftConw> <p id=lh> <a href=http://home.baidu.com>关于百度</a> <a href=http://ir.baidu.com>About Baidu</a> </p> <p id=cp>&copy;2017&nbsp;Baidu&nbsp;<a href=http://www.baidu.com/duty/>使用百度前必读</a>&nbsp; <a href=http://jianyi.baidu.com/ class=cp-feedback>意见反馈</a>&nbsp;京ICP证030173号&nbsp; <img src=//www.baidu.com/img/gs.gif> </p> </div> </div> </div> </body> </html>
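
      To inspect the response headers rather than the body, fetch also takes a --headers flag:

    scrapy fetch --nolog --headers http://baidu.com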

    3. genspider

      Generates a new spider file from a predefined template. It creates a spider, not a project, and as a global command it can also be run outside a project; its project-side usage is covered in section III.

    4. runspider

      Runs a self-contained spider file directly, without creating a project; see the sketch below.
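
      As a minimal sketch (the file name and spider are hypothetical, not from the original session), a self-contained spider saved as standalone.py could look like this:

    # -*- coding: utf-8 -*-
    # standalone.py -- a hypothetical self-contained spider
    import scrapy


    class StandaloneSpider(scrapy.Spider):
        name = 'standalone'
        start_urls = ['http://baidu.com']

        def parse(self, response):
            # Yield the page title as a single item
            yield {'title': response.css('title::text').extract_first()}

      It then runs without any project scaffolding:

    scrapy runspider standalone.py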

    5. settings

      Prints the value of a Scrapy setting; when run inside a project, the project's overrides are taken into account.

    (scrapy_env) frange@ubuntu:~/workspace/spider/spider_lago/spider_lago$ scrapy settings --get ROBOTSTXT_OBEY
    False
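
      Any other setting can be queried the same way; for instance, given the "Overridden settings" line in the bench log above, the following should print spider_lago inside this project:

    scrapy settings --get BOT_NAME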

    6. shell

      Starts the interactive Scrapy shell, a console for trying out requests and selectors.

    (scrapy_env) frange@ubuntu:~/workspace/spider/spider_lago/spider_lago$ scrapy shell
    2018-08-14 03:01:30 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: spider_lago)
    2018-08-14 03:01:30 [scrapy.utils.log] INFO: Versions: lxml 4.2.3.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.0, w3lib 1.19.0, Twisted 18.7.0, Python 3.5.2 (default, Nov 23 2017, 16:37:01) - [GCC 5.4.0 20160609], pyOpenSSL 18.0.0 (OpenSSL 1.1.0h  27 Mar 2018), cryptography 2.3, Platform Linux-4.15.0-29-generic-x86_64-with-Ubuntu-16.04-xenial
    2018-08-14 03:01:30 [scrapy.crawler] INFO: Overridden settings: {'LOGSTATS_INTERVAL': 0, 'SPIDER_MODULES': ['spider_lago.spiders'], 'NEWSPIDER_MODULE': 'spider_lago.spiders', 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter', 'BOT_NAME': 'spider_lago'}
    2018-08-14 03:01:30 [scrapy.middleware] INFO: Enabled extensions:
    ['scrapy.extensions.corestats.CoreStats',
     'scrapy.extensions.telnet.TelnetConsole',
     'scrapy.extensions.memusage.MemoryUsage']
    2018-08-14 03:01:30 [scrapy.middleware] INFO: Enabled downloader middlewares:
    ['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
     'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
     'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
     'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
     'scrapy.downloadermiddlewares.retry.RetryMiddleware',
     'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
     'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
     'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
     'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
     'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
     'scrapy.downloadermiddlewares.stats.DownloaderStats']
    2018-08-14 03:01:30 [scrapy.middleware] INFO: Enabled spider middlewares:
    ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
     'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
     'scrapy.spidermiddlewares.referer.RefererMiddleware',
     'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
     'scrapy.spidermiddlewares.depth.DepthMiddleware']
    2018-08-14 03:01:30 [scrapy.middleware] INFO: Enabled item pipelines:
    []
    2018-08-14 03:01:30 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
    [s] Available Scrapy objects:
    [s]   scrapy     scrapy module (contains scrapy.Request, scrapy.Selector, etc)
    [s]   crawler    <scrapy.crawler.Crawler object at 0x7f0815c87e80>
    [s]   item       {}
    [s]   settings   <scrapy.settings.Settings object at 0x7f08147e6d68>
    [s] Useful shortcuts:
    [s]   fetch(url[, redirect=True]) Fetch URL and update local objects (by default, redirects are followed)
    [s]   fetch(req)                  Fetch a scrapy.Request and update local objects 
    [s]   shelp()           Shell help (print this help)
    [s]   view(response)    View response in a browser
    >>> 
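
      A typical session, sketched here against the Baidu page shown in the fetch example above (the exact output may differ):

    >>> fetch('http://baidu.com')
    >>> response.status
    200
    >>> response.css('title::text').extract_first()
    '百度一下,你就知道'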

    7. startproject

      Creates a new Scrapy project.
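
      For example, the project used throughout this post would have been created like this, producing Scrapy's standard layout (shown for reference):

    scrapy startproject spider_lago

    spider_lago/
        scrapy.cfg
        spider_lago/
            __init__.py
            items.py
            middlewares.py
            pipelines.py
            settings.py
            spiders/
                __init__.py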

    8. version

      Prints the Scrapy version.
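
      With -v it also prints the versions of the main dependencies (lxml, Twisted, Python and so on), much like the version line at the top of the logs above:

    scrapy version -v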

    9. view

      Downloads a page and opens it in a browser, showing it as Scrapy sees it; this is handy for spotting content injected by JavaScript that never reaches the spider.
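
      For example:

    scrapy view http://baidu.com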

    III. Project commands

    Since Scrapy's global commands can be used both outside and inside a spider project, the global commands also appear in the project's command list; the entries below focus on the project-side commands.

    1. genspider

      Inside a project directory, generates a Scrapy spider file directly from a spider template.

      List the available templates:

    (scrapy_env) frange@ubuntu:~/workspace/spider/spider_lago/spider_lago$ scrapy genspider -l
    Available templates:
      basic
      crawl
      csvfeed
      xmlfeed

      Dump the contents of a template:

    (scrapy_env) frange@ubuntu:~/workspace/spider/spider_lago/spider_lago$ scrapy genspider -d csvfeed
    # -*- coding: utf-8 -*-
    from scrapy.spiders import CSVFeedSpider
    
    
    class $classname(CSVFeedSpider):
        name = '$name'
        allowed_domains = ['$domain']
        start_urls = ['http://$domain/feed.csv']
        # headers = ['id', 'name', 'description', 'image_link']
        # delimiter = '	'
    
        # Do any adaptations you need here
        #def adapt_response(self, response):
        #    return response
    
        def parse_row(self, response, row):
            i = {}
            #i['url'] = row['url']
            #i['name'] = row['name']
            #i['description'] = row['description']
            return i

      Create a spider from the basic template:

    scrapy genspider -t basic weisuen baidu.com
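
      The generated weisuen.py should look roughly like this (reconstructed from the basic template of this Scrapy version, not copied from the original session):

    # -*- coding: utf-8 -*-
    import scrapy


    class WeisuenSpider(scrapy.Spider):
        name = 'weisuen'
        allowed_domains = ['baidu.com']
        start_urls = ['http://baidu.com/']

        def parse(self, response):
            pass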

    2. check

      Runs contract checks against a spider: contracts are assertions embedded in a callback's docstring that describe what the callback should return for a sample URL. See the sketch after the command below.

    scrapy check <spider-name>
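
      As a sketch of what check verifies, a contract lives in the callback's docstring; the @url and @returns lines below use Scrapy's built-in contracts, with illustrative values:

    def parse(self, response):
        """Parse the start page.

        @url http://baidu.com
        @returns items 0 0
        @returns requests 0 0
        """
        pass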

    3. crawl

      Runs a spider from the current project by name.

    scrapy crawl <spider-name> --loglevel=INFO
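
      crawl also accepts feed-export options; for instance, -o writes the scraped items to a file whose format is inferred from the extension (the file name here is illustrative):

    scrapy crawl weisuen -o items.json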

    4. list

      Lists the spiders available in the current project.

    5. edit

      Opens a spider file for editing, using the editor configured through the EDITOR setting or environment variable.

    6. parse

      Fetches the given URL and parses it with the spider that handles it; if no spider or callback is specified, Scrapy picks a matching spider and uses its default parse method.
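
      For example, to fetch a page with the weisuen spider created above and run its parse callback on it:

    scrapy parse --spider=weisuen -c parse http://baidu.com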
