  • Fixing twisted.internet.error.DNSLookupError: DNS lookup failed: address "'http:" not found: [Errno 11001] getaddrinfo failed.

    C:\Users\wuzhi_000\Desktop\tutorial>scrapy shell 'http://quotes.toscrape.com'
    2016-11-02 14:59:11 [scrapy] INFO: Scrapy 1.2.1 started (bot: tutorial)
    2016-11-02 14:59:11 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'ROBOTSTXT_OBEY': True, 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial', 'LOGSTATS_INTERVAL': 0}
    2016-11-02 14:59:11 [scrapy] INFO: Enabled extensions:
    ['scrapy.extensions.telnet.TelnetConsole',
     'scrapy.extensions.corestats.CoreStats']
    2016-11-02 14:59:12 [scrapy] INFO: Enabled downloader middlewares:
    ['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
     'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
     'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
     'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
     'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
     'scrapy.downloadermiddlewares.retry.RetryMiddleware',
     'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
     'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
     'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
     'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
     'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
     'scrapy.downloadermiddlewares.stats.DownloaderStats']
    2016-11-02 14:59:12 [scrapy] INFO: Enabled spider middlewares:
    ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
     'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
     'scrapy.spidermiddlewares.referer.RefererMiddleware',
     'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
     'scrapy.spidermiddlewares.depth.DepthMiddleware']
    2016-11-02 14:59:12 [scrapy] INFO: Enabled item pipelines:
    []
    2016-11-02 14:59:12 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
    2016-11-02 14:59:12 [scrapy] INFO: Spider opened
    2016-11-02 14:59:12 [scrapy] DEBUG: Retrying <GET http://'http:/robots.txt> (failed 1 times): DNS lookup failed: address "'http:" not found: [Errno 11001] getaddrinfo failed.
    2016-11-02 14:59:12 [scrapy] DEBUG: Retrying <GET http://'http:/robots.txt> (failed 2 times): DNS lookup failed: address "'http:" not found: [Errno 11001] getaddrinfo failed.
    2016-11-02 14:59:12 [scrapy] DEBUG: Gave up retrying <GET http://'http:/robots.txt> (failed 3 times): DNS lookup failed: address "'http:" not found: [Errno 11001] getaddrinfo failed.
    2016-11-02 14:59:12 [scrapy] ERROR: Error downloading <GET http://'http:/robots.txt>: DNS lookup failed: address "'http:" not found: [Errno 11001] getaddrinfo failed.
    DNSLookupError: DNS lookup failed: address "'http:" not found: [Errno 11001] getaddrinfo failed.
    2016-11-02 14:59:12 [scrapy] DEBUG: Retrying <GET http://'http://quotes.toscrape.com'> (failed 1 times): DNS lookup failed: address "'http:" not found: [Errno 11001] getaddrinfo failed.
    2016-11-02 14:59:12 [scrapy] DEBUG: Retrying <GET http://'http://quotes.toscrape.com'> (failed 2 times): DNS lookup failed: address "'http:" not found: [Errno 11001] getaddrinfo failed.
    2016-11-02 14:59:12 [scrapy] DEBUG: Gave up retrying <GET http://'http://quotes.toscrape.com'> (failed 3 times): DNS lookup failed: address "'http:" not found: [Errno 11001] getaddrinfo failed.
    Traceback (most recent call last):
      File "C:\Python27\Scripts\scrapy-script.py", line 9, in <module>
        load_entry_point('scrapy==1.2.1', 'console_scripts', 'scrapy')()
      File "c:\python27\lib\site-packages\scrapy-1.2.1-py2.7.egg\scrapy\cmdline.py", line 142, in execute
        _run_print_help(parser, _run_command, cmd, args, opts)
      File "c:\python27\lib\site-packages\scrapy-1.2.1-py2.7.egg\scrapy\cmdline.py", line 88, in _run_print_help
        func(*a, **kw)
      File "c:\python27\lib\site-packages\scrapy-1.2.1-py2.7.egg\scrapy\cmdline.py", line 149, in _run_command
        cmd.run(args, opts)
      File "c:\python27\lib\site-packages\scrapy-1.2.1-py2.7.egg\scrapy\commands\shell.py", line 71, in run
        shell.start(url=url)
      File "c:\python27\lib\site-packages\scrapy-1.2.1-py2.7.egg\scrapy\shell.py", line 47, in start
        self.fetch(url, spider)
      File "c:\python27\lib\site-packages\scrapy-1.2.1-py2.7.egg\scrapy\shell.py", line 112, in fetch
        reactor, self._schedule, request, spider)
      File "c:\python27\lib\site-packages\twisted\internet\threads.py", line 122, in blockingCallFromThread
        result.raiseException()
        result.raiseException()
      File "<string>", line 2, in raiseException
    twisted.internet.error.DNSLookupError: DNS lookup failed: address "'http:" not found: [Errno 11001] getaddrinfo failed.

    Solution:

    Replace the single quotes with double quotes. On Windows, cmd.exe does not treat single quotes as quoting characters, so they are passed through to Scrapy as literal parts of the URL. That is why the error complains about the address "'http:" and the log shows requests for http://'http:/robots.txt. With double quotes, cmd.exe strips them and Scrapy receives a clean URL:

    scrapy shell "http://quotes.toscrape.com"

    C:\Users\wuzhi_000\Desktop\tutorial>scrapy shell "http://quotes.toscrape.com"
    2016-11-02 15:07:29 [scrapy] INFO: Scrapy 1.2.1 started (bot: tutorial)
    2016-11-02 15:07:29 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'ROBOTSTXT_OBEY': True, 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial', 'LOGSTATS_INTERVAL': 0}
    2016-11-02 15:07:29 [scrapy] INFO: Enabled extensions:
    ['scrapy.extensions.telnet.TelnetConsole',
     'scrapy.extensions.corestats.CoreStats']
    2016-11-02 15:07:29 [scrapy] INFO: Enabled downloader middlewares:
    ['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
     'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
     'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
     'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
     'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
     'scrapy.downloadermiddlewares.retry.RetryMiddleware',
     'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
     'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
     'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
     'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
     'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
     'scrapy.downloadermiddlewares.stats.DownloaderStats']
    2016-11-02 15:07:29 [scrapy] INFO: Enabled spider middlewares:
    ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
     'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
     'scrapy.spidermiddlewares.referer.RefererMiddleware',
     'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
     'scrapy.spidermiddlewares.depth.DepthMiddleware']
    2016-11-02 15:07:29 [scrapy] INFO: Enabled item pipelines:
    []
    2016-11-02 15:07:29 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
    2016-11-02 15:07:29 [scrapy] INFO: Spider opened
    2016-11-02 15:07:31 [scrapy] DEBUG: Crawled (404) <GET http://quotes.toscrape.com/robots.txt> (referer: None)
    2016-11-02 15:07:31 [scrapy] DEBUG: Crawled (200) <GET http://quotes.toscrape.com> (referer: None)
    [s] Available Scrapy objects:
    [s]   scrapy     scrapy module (contains scrapy.Request, scrapy.Selector, etc)
    [s]   crawler    <scrapy.crawler.Crawler object at 0x00000000057B1C18>
    [s]   item       {}
    [s]   request    <GET http://quotes.toscrape.com>
    [s]   response   <200 http://quotes.toscrape.com>
    [s]   settings   <scrapy.settings.Settings object at 0x00000000057B1A58>
    [s]   spider     <DefaultSpider 'default' at 0x5ae19b0>
    [s] Useful shortcuts:
    [s]   shelp()           Shell help (print this help)
    [s]   fetch(req_or_url) Fetch request (or URL) and update local objects
    [s]   view(response)    View response in a browser
    >>>
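    Why the mangled URL in the log looks the way it does: when Scrapy is handed a string that does not start with a recognizable scheme, it assumes http:// and prepends it. The sketch below reproduces that behavior in a few lines; it is a simplified illustration, not Scrapy's actual code, and the function name and regex are mine:

    ```python
    import re

    def guess_scheme(url):
        """Prepend http:// when the string lacks a scheme:// prefix
        (simplified sketch of scheme-guessing, not Scrapy's real code)."""
        if re.match(r'^[a-zA-Z][a-zA-Z0-9+.\-]*://', url):
            return url
        return 'http://' + url

    # cmd.exe strips double quotes, so Scrapy sees a clean URL:
    print(guess_scheme("http://quotes.toscrape.com"))
    # cmd.exe keeps single quotes, so the leading ' hides the scheme
    # and http:// gets prepended, matching the log's mangled request:
    print(guess_scheme("'http://quotes.toscrape.com'"))
    ```

    The second call yields http://'http://quotes.toscrape.com', exactly the URL Scrapy retried and gave up on above, which is why the DNS resolver was asked for the nonsense hostname 'http:.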
  • Original post: https://www.cnblogs.com/wuzhiyi/p/6022986.html