Scrapy Tutorial (Getting Started)

    Note: pay attention to your Python version.

    This tutorial targets Scrapy 0.14.

    In this tutorial, we assume you already have Scrapy installed. If you don't, please refer to the installation guide.

    We will use the Open Directory Project (dmoz) as our scraping example.

    This tutorial will walk you through the following tasks:

    1. Create a new Scrapy project
    2. Define the Items you will extract
    3. Write a Spider to crawl a site and extract Items
    4. Write an Item Pipeline to store the extracted Items

    Scrapy is written in Python. If you are new to Python, you may want to start by learning Python in order to get the most out of Scrapy. If you are already familiar with other programming languages and want to pick up Python quickly, Dive Into Python is recommended. If you are new to programming and want to start with Python, see the list of Python resources for non-programmers.

    Creating a project

    Before you start scraping, you need to set up a new Scrapy project. Enter a directory where you'd like to store your code, then run:

    Microsoft Windows XP [Version 5.1.2600]
    (C) Copyright 1985-2001 Microsoft Corp.
    
    E:\>scrapy startproject tutorial
    E:\>

    This command creates a new directory named tutorial under the current directory, with the following structure:

     
    E:\tutorial>tree /f
    Folder PATH listing
    Volume serial number is 0006EFCF C86A:7C52
    E:.
    │  scrapy.cfg
    │
    └─tutorial
        │  items.py
        │  pipelines.py
        │  settings.py
        │  __init__.py
        │
        └─spiders
                __init__.py
     

    These files are, briefly:

    • scrapy.cfg: the project configuration file
    • tutorial/: the project's Python module; you will import your code from here later
    • tutorial/items.py: the project's items file
    • tutorial/pipelines.py: the project's pipelines file
    • tutorial/settings.py: the project's settings file
    • tutorial/spiders/: the directory where you will place your spiders

    Defining our Item

    Items are containers that will be loaded with the scraped data. They work like Python dictionaries, but offer additional protection: populating a field that was not declared raises an error, which guards against typos.

    Items are declared by subclassing scrapy.item.Item and defining its attributes as scrapy.item.Field objects, much like an object-relational mapping (ORM).
    We model the item on the data we want to capture from dmoz.org: the site name, URL and description, so we define fields for these three attributes. To do that, we edit items.py in the tutorial directory. Our Item class ends up looking like this:

    from scrapy.item import Item, Field 
    class DmozItem(Item):
        title = Field()
        link = Field()
        desc = Field()

    This may seem complicated at first, but defining the item lets other Scrapy components know exactly what your items contain.
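
    As a quick illustration of the dictionary-like behaviour and the typo protection mentioned above (a small sketch; the exact wording of the error message may vary between Scrapy versions):

    >>> from tutorial.items import DmozItem
    >>> item = DmozItem(title='Example title')
    >>> item['title']
    'Example title'
    >>> item['titel'] = 'oops'        # misspelled field name
    KeyError: 'DmozItem does not support field: titel'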

    Our first Spider

    Spiders are user-written classes used to scrape information from a domain (or group of domains).

    They define an initial list of URLs to download, how to follow links, and how to parse the contents of those pages to extract items.

    To create a Spider, you must subclass scrapy.spider.BaseSpider and define three main, mandatory attributes:

    • name: the spider's identifier. It must be unique; you must give different names to different spiders.
    • start_urls: the list of URLs where the spider will begin to crawl. The first pages downloaded are those listed here; subsequent URLs are generated from data contained in these starting pages.
    • parse(): a method of the spider which is called with the Response object downloaded from each URL; the response is its only argument.

    This method is responsible for parsing the response data, extracting the scraped data (as items) and following more URLs.
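
    Following more URLs is not something our first spider below does, but as a rough sketch (an illustration only, not part of this tutorial's code; the spider name and start URL here are hypothetical), a parse() method can return items together with further Request objects:

    from urlparse import urljoin

    from scrapy.http import Request
    from scrapy.selector import HtmlXPathSelector
    from scrapy.spider import BaseSpider

    class FollowLinksSpider(BaseSpider):
        name = "follow_links_sketch"      # hypothetical spider, illustration only
        start_urls = ["http://www.dmoz.org/Computers/Programming/Languages/Python/"]

        def parse(self, response):
            hxs = HtmlXPathSelector(response)
            # ... build and yield items from this response here ...
            # Then follow every link on the page (a real spider would
            # normally filter which links are worth following).
            for href in hxs.select('//a/@href').extract():
                yield Request(urljoin(response.url, href), callback=self.parse)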

    This is the code for our first Spider. Save it as dmoz_spider.py in the tutorial/spiders directory:

     
    from scrapy.spider import BaseSpider
    
    class DmozSpider(BaseSpider):
        name = "dmoz"
        allowed_domains = ["dmoz.org"]
        start_urls = [
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
        ]
    
        def parse(self, response):
            filename = response.url.split("/")[-2]
            open(filename, 'wb').write(response.body)
     

    Crawling

    To put our spider to work, go back to the project's top-level directory and run:

    E:\tutorial>scrapy crawl dmoz

    The crawl dmoz command runs the spider for the dmoz.org domain. You will get an output similar to this:

     

    E:\tutorial>scrapy crawl dmoz
    2016-04-05 16:19:53+0800 [scrapy] INFO: Scrapy 0.14.4 started (bot: tutorial)
    2016-04-05 16:19:53+0800 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetCon
    sole, CloseSpider, WebService, CoreStats, SpiderState
    2016-04-05 16:19:53+0800 [scrapy] DEBUG: Enabled downloader middlewares: HttpAut
    hMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, De
    faultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMi
    ddleware, ChunkedTransferMiddleware, DownloaderStats
    2016-04-05 16:19:53+0800 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMi
    ddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddle
    ware
    2016-04-05 16:19:53+0800 [scrapy] DEBUG: Enabled item pipelines:
    2016-04-05 16:19:53+0800 [dmoz] INFO: Spider opened
    2016-04-05 16:19:53+0800 [dmoz] INFO: Crawled 0 pages (at 0 pages/min), scraped
    0 items (at 0 items/min)
    2016-04-05 16:19:53+0800 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:602
    3
    2016-04-05 16:19:53+0800 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
    2016-04-05 16:19:54+0800 [dmoz] DEBUG: Crawled (200) <GET http://www.dmoz.org/Co
    mputers/Programming/Languages/Python/Books/> (referer: None)
    2016-04-05 16:19:54+0800 [dmoz] DEBUG: Crawled (200) <GET http://www.dmoz.org/Co
    mputers/Programming/Languages/Python/Resources/> (referer: None)
    2016-04-05 16:19:54+0800 [dmoz] INFO: Closing spider (finished)
    2016-04-05 16:19:54+0800 [dmoz] INFO: Dumping spider stats:
    {'downloader/request_bytes': 486,
    'downloader/request_count': 2,
    'downloader/request_method_count/GET': 2,
    'downloader/response_bytes': 16366,
    'downloader/response_count': 2,
    'downloader/response_status_count/200': 2,
    'finish_reason': 'finished',
    'finish_time': datetime.datetime(2016, 4, 5, 8, 19, 54, 35000),
    'scheduler/memory_enqueued': 2,
    'start_time': datetime.datetime(2016, 4, 5, 8, 19, 53, 421000)}
    2016-04-05 16:19:54+0800 [dmoz] INFO: Spider closed (finished)
    2016-04-05 16:19:54+0800 [scrapy] INFO: Dumping global stats:
    {}

     

    Pay attention to the lines containing [dmoz]; they correspond to our spider. You can see a log line for each URL defined in start_urls. Because these URLs are the starting ones, they have no referrers, which is why you see (referer: None) at the end of each line.
    More interestingly, thanks to our parse method, two files have been created: Books and Resources, containing the body of each of the two URLs.

    What just happened?

    Scrapy creates a scrapy.http.Request object for each URL in the spider's start_urls attribute and assigns the spider's parse method as their callback function.
    These Requests are scheduled, then executed, and scrapy.http.Response objects are returned and fed back to the spider through the parse() method.
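
    Roughly speaking, what BaseSpider does with start_urls is equivalent to the following sketch on the spider class (a simplification of the default behaviour, not the exact library source):

    from scrapy.http import Request

    def start_requests(self):
        # One Request per start URL, with parse() as the callback;
        # dont_filter keeps the start URLs out of the duplicate filter.
        for url in self.start_urls:
            yield Request(url, callback=self.parse, dont_filter=True)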

    Extracting Items

    Introduction to Selectors

    There are several ways to extract data from web pages. Scrapy uses a mechanism based on XPath expressions called XPath selectors. For more information about selectors and other extraction mechanisms, see http://doc.scrapy.org/topics/selectors.html#topics-selectors
    Here are some examples of XPath expressions and their meanings:

    • /html/head/title: selects the <title> element inside the <head> element of the HTML document
    • /html/head/title/text(): selects the text inside the aforementioned <title> element
    • //td: selects all <td> elements
    • //div[@class="mine"]: selects all <div> elements with a class="mine" attribute

    These are just a few simple examples, but XPath is actually much more powerful. To learn more about XPath, we recommend this tutorial: http://doc.scrapy.org/en/0.12/topics/selectors.html
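
    If you want to try the expressions above without fetching a page, one lightweight approach (a sketch; it assumes the text keyword argument accepted by the Scrapy 0.14 selector classes) is to run a selector over a literal HTML snippet:

    from scrapy.selector import HtmlXPathSelector

    body = ('<html><head><title>Example</title></head>'
            '<body><div class="mine">'
            '<table><tr><td>cell</td></tr></table>'
            '</div></body></html>')
    hxs = HtmlXPathSelector(text=body)
    print hxs.select('/html/head/title/text()').extract()   # [u'Example']
    print hxs.select('//td/text()').extract()                # [u'cell']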

    To work with XPaths, Scrapy provides an XPathSelector class, which comes in two flavors: HtmlXPathSelector (for HTML data) and XmlXPathSelector (for XML data). To use them, you instantiate them with a Response object. You can think of selectors as objects that represent nodes in the document structure, so the first selector you instantiate is associated with the root node, i.e. the entire document.

    Selectors

    XPathSelector objects

    class scrapy.selector.XPathSelector(response)

    XPathSelector object is a wrapper over response to select certain parts of its content.

    response is a Response object that will be used for selecting and extracting data

    select(xpath)

    Apply the given XPath relative to this XPathSelector and return a list of XPathSelector objects (i.e. an XPathSelectorList) with the result.

    xpath is a string containing the XPath to apply

    re(regex)

    Apply the given regex and return a list of unicode strings with the matches.

    regex can be either a compiled regular expression or a string which will be compiled to a regular expression using re.compile(regex)

    extract()

    Return a unicode string with the content of this XPathSelector object.

    register_namespace(prefix, uri)

    Register the given namespace to be used in this XPathSelector. Without registering namespaces you can’t select or extract data from non-standard namespaces. See examples below.

    __nonzero__()

    Returns True if there is any real content selected by this XPathSelector or False otherwise. In other words, the boolean value of an XPathSelector is given by the contents it selects.
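
    As a quick illustration of register_namespace() (a minimal sketch; the Atom feed assumed to be fetched here is hypothetical, not part of this tutorial), selecting elements from a namespaced XML document returns nothing until the namespace is registered:

    from scrapy.selector import XmlXPathSelector

    def parse_feed(response):
        xxs = XmlXPathSelector(response)
        # Without this call, the select() below would return an empty list,
        # because the feed's elements live in the Atom namespace.
        xxs.register_namespace('atom', 'http://www.w3.org/2005/Atom')
        return xxs.select('//atom:entry/atom:title/text()').extract()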

    To demonstrate how to use selectors, we will use the built-in Scrapy shell, which works best if you also have IPython (an enhanced interactive Python console) installed.

    IPython download: http://pypi.python.org/pypi/ipython#downloads

    To start a shell, go to the project's top-level directory and run:

    E:\tutorial>scrapy shell http://www.dmoz.org/Computers/Programming/Languages/Python/Books/

    The output will look something like this:

     

    E:\tutorial>scrapy shell http://www.dmoz.org/Computers/Programming/Languages/Pyt
    hon/Books/
    2016-04-05 16:21:33+0800 [scrapy] INFO: Scrapy 0.14.4 started (bot: tutorial)
    2016-04-05 16:21:33+0800 [scrapy] DEBUG: Enabled extensions: TelnetConsole, Clos
    eSpider, WebService, CoreStats, SpiderState
    2016-04-05 16:21:33+0800 [scrapy] DEBUG: Enabled downloader middlewares: HttpAut
    hMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, De
    faultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMi
    ddleware, ChunkedTransferMiddleware, DownloaderStats
    2016-04-05 16:21:33+0800 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMi
    ddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddle
    ware
    2016-04-05 16:21:33+0800 [scrapy] DEBUG: Enabled item pipelines:
    2016-04-05 16:21:33+0800 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:602
    3
    2016-04-05 16:21:33+0800 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
    2016-04-05 16:21:33+0800 [dmoz] INFO: Spider opened
    2016-04-05 16:21:33+0800 [dmoz] DEBUG: Crawled (200) <GET http://www.dmoz.org/Co
    mputers/Programming/Languages/Python/Books/> (referer: None)
    [s] Available Scrapy objects:
    [s] hxs <HtmlXPathSelector xpath=None data=u'<html lang="en"><head><met
    a http-equiv="'>
    [s] item {}
    [s] request <GET http://www.dmoz.org/Computers/Programming/Languages/Python
    /Books/>
    [s] response <200 http://www.dmoz.org/Computers/Programming/Languages/Python
    /Books/>
    [s] settings <CrawlerSettings module=<module 'tutorial.settings' from 'E:\tu
    torial\tutorial\settings.pyc'>>
    [s] spider <DmozSpider 'dmoz' at 0x268b1b0>
    [s] Useful shortcuts:
    [s] shelp() Shell help (print this help)
    [s] fetch(req_or_url) Fetch request (or URL) and update local objects
    [s] view(response) View response in a browser
    WARNING: Readline services not available or not loaded.
    WARNING: Proper color support under MS Windows requires the pyreadline library.
    You can find it at:
    http://ipython.org/pyreadline.html

    Defaulting color scheme to 'NoColor'
    Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)]
    Type "copyright", "credits" or "license" for more information.

    IPython 4.1.2 -- An enhanced Interactive Python.
    ? -> Introduction and overview of IPython's features.
    %quickref -> Quick reference.
    help -> Python's own help system.
    object? -> Details about 'object', use 'object??' for extra details.

    In [1]:

    
    
     

    Once the shell has loaded, the fetched page is stored in the local variable response, so if you type response.body you will see the body of the response, and response.headers will show its headers.
    The shell also instantiates two selectors: hxs for the HTML and xxs for XML. Let's see what they contain:

     

    In [1]: hxs.select('//title')
    Out[1]: [<HtmlXPathSelector xpath='//title' data=u'<title>DMOZ - Computers: Prog
    ramming: La'>]

    In [2]: hxs.select('//title').extract()
    Out[2]: [u'<title>DMOZ - Computers: Programming: Languages: Python: Books</title
    >']

    In [3]: hxs.select('//title/text()')
    Out[3]: [<HtmlXPathSelector xpath='//title/text()' data=u'DMOZ - Computers: Prog
    ramming: Languages'>]

    In [4]: hxs.select('//title/text()').extract()
    Out[4]: [u'DMOZ - Computers: Programming: Languages: Python: Books']

    In [5]: hxs.select('//title/text()').re('(\w+):')
    Out[5]: [u'Computers', u'Programming', u'Languages', u'Python']

     

    Extracting the data

    Now, let's try to extract some real data from the page.
    You could type response.body in the console and inspect the source code to figure out the XPaths you need. However, inspecting raw HTML is tedious. To make it easier, you can use the Firebug extension for Firefox. For more information see Using Firebug for scraping and Using Firefox for scraping.
    (Note by txw1958: in practice I used Google Chrome's Inspect Element feature, which can also copy an element's XPath.)
    After inspecting the page source, you will find that the data we want lives inside a <ul> element, in fact the second <ul> on the page.
    We can select each <li> element belonging to the sites list with:

    hxs.select('//ul/li') 

    And the site descriptions:

    hxs.select('//ul/li/text()').extract()

    The site titles:

    hxs.select('//ul/li/a/text()').extract()

    And the site links:

    hxs.select('//ul/li/a/@href').extract()

    As mentioned before, each select() call returns a list of selectors, so we can chain select() calls to dig into deeper nodes. We are going to use that here:

     
    sites = hxs.select('//ul/li')
    for site in sites:
        title = site.select('a/text()').extract()
        link = site.select('a/@href').extract()
        desc = site.select('text()').extract()
        print title, link, desc
     

    Note
    For more information about nested selectors, read Nesting selectors and Working with relative XPaths in the Scrapy documentation.
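
    One pitfall worth illustrating (a short sketch based on the two documentation sections referenced above): inside a loop over nested selectors, an XPath that starts with // searches the whole document rather than the current node:

    sites = hxs.select('//ul/li')
    for site in sites:
        # Absolute XPath: returns every link in the document,
        # not just the links inside this <li>.
        all_links = site.select('//a/@href').extract()
        # Relative XPaths: only the links inside the current <li>.
        own_links = site.select('a/@href').extract()
        same_links = site.select('.//a/@href').extract()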

    Let's add this code to our spider:

    (Note by wicub: the code has been modified; the commented-out lines are the ones from the original tutorial.)

     
    from scrapy.spider import BaseSpider
    from scrapy.selector import HtmlXPathSelector
    
    class DmozSpider(BaseSpider):
        name = "dmoz"
        allowed_domains = ["dmoz.org"]
        start_urls = [
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
        ]
      
        def parse(self, response):
            hxs = HtmlXPathSelector(response)
            sites = hxs.select('//fieldset/ul/li')
            #sites = hxs.select('//ul/li')
            for site in sites:
                title = site.select('a/text()').extract()
                link = site.select('a/@href').extract()
                desc = site.select('text()').extract()
                #print title, link, desc
                print title, link
     

    Now let's crawl dmoz.org again and you will see the sites being printed in the output. Run:

    E:\tutorial>scrapy crawl dmoz

    Using our Items

    Item objects are custom Python dictionaries. Using the standard dict-like syntax, you can access the values of their fields (the attributes of the class we defined earlier):

    >>> item = DmozItem() 
    >>> item['title'] = 'Example title' 
    >>> item['title'] 
    'Example title' 

    Spiders are expected to return their scraped data inside Item objects. So, in order to return the data we have scraped, the final code for our spider looks like this:

     
    from scrapy.spider import BaseSpider
    from scrapy.selector import HtmlXPathSelector
    
    from tutorial.items import DmozItem
    
    class DmozSpider(BaseSpider):
       name = "dmoz"
       allowed_domains = ["dmoz.org"]
       start_urls = [
           "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
           "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
       ]
    
       def parse(self, response):
           hxs = HtmlXPathSelector(response)
           sites = hxs.select('//fieldset/ul/li')
           #sites = hxs.select('//ul/li')
           items = []
           for site in sites:
               item = DmozItem()
               item['title'] = site.select('a/text()').extract()
               item['link'] = site.select('a/@href').extract()
               item['desc'] = site.select('text()').extract()
               items.append(item)
           return items
     

    Now let's crawl again:

     

    2016-04-05 16:37:41+0800 [dmoz] DEBUG: Scraped from <200 http://www.dmoz.org/Com
    puters/Programming/Languages/Python/Books/>
    {'desc': [u' ',
    u' - By Chris Fe
    hily; Peachpit Press, 2002, ISBN 0201748843. Task-based, step-by-step visual ref
    erence guide, many screen shots, for courses in digital graphics; Web design, sc
    ripting, development; multimedia, page layout, office tools, operating systems.
    [Prentice Hall]
    ',
    u' '],
    'link': [u'http://www.pearsonhighered.com/educator/academic/product/0,,
    0201748843,00%2Ben-USS_01DBC.html'],
    'title': [u'Python: Visual QuickStart Guide']}
    2016-04-05 16:37:41+0800 [dmoz] DEBUG: Scraped from <200 http://www.dmoz.org/Com
    puters/Programming/Languages/Python/Books/>
    {'desc': [u' ',
    u' - By Ivan Van
    Laningham; Sams Publishing, 2000, ISBN 0672317354. Split into 24 hands-on, 1 ho
    ur lessons; steps needed to learn topic: syntax, language features, OO design an
    d programming, GUIs (Tkinter), system administration, CGI. [Sams Publishing]
    ',
    u' '],
    'link': [u'http://www.informit.com/store/product.aspx?isbn=0672317354']
    ,
    'title': [u'Sams Teach Yourself Python in 24 Hours']}
    2016-04-05 16:37:41+0800 [dmoz] DEBUG: Scraped from <200 http://www.dmoz.org/Com
    puters/Programming/Languages/Python/Books/>
    {'desc': [u' ',
    u' - By David Me
    rtz; Addison Wesley. Book in progress, full text, ASCII format. Asks for feedbac
    k. [author website, Gnosis Software, Inc.]
    ',
    u' '],
    'link': [u'http://gnosis.cx/TPiP/'],
    'title': [u'Text Processing in Python']}
    2016-04-05 16:37:41+0800 [dmoz] DEBUG: Scraped from <200 http://www.dmoz.org/Com
    puters/Programming/Languages/Python/Books/>
    {'desc': [u' ',
    u' - By Sean McG
    rath; Prentice Hall PTR, 2000, ISBN 0130211192, has CD-ROM. Methods to build XML
    applications fast, Python tutorial, DOM and SAX, new Pyxie open source XML proc
    essing library. [Prentice Hall PTR]
    ',
    u' '],
    'link': [u'http://www.informit.com/store/product.aspx?isbn=0130211192']
    ,
    'title': [u'XML Processing with Python']}
    2016-04-05 16:37:41+0800 [dmoz] INFO: Closing spider (finished)
    2016-04-05 16:37:41+0800 [dmoz] INFO: Dumping spider stats:
    {'downloader/request_bytes': 486,
    'downloader/request_count': 2,
    'downloader/request_method_count/GET': 2,
    'downloader/response_bytes': 16366,
    'downloader/response_count': 2,
    'downloader/response_status_count/200': 2,
    'finish_reason': 'finished',
    'finish_time': datetime.datetime(2016, 4, 5, 8, 37, 41, 116000),
    'item_scraped_count': 31,
    'scheduler/memory_enqueued': 2,
    'start_time': datetime.datetime(2016, 4, 5, 8, 37, 36, 402000)}
    2016-04-05 16:37:41+0800 [dmoz] INFO: Spider closed (finished)
    2016-04-05 16:37:41+0800 [scrapy] INFO: Dumping global stats:
    {}

     

    Storing the scraped data

    The simplest way to store the scraped data is by using Feed exports, with a command like this:

    E:\tutorial>scrapy crawl dmoz -o items.json -t json
    

      


    'link': [u'http://www.informit.com/store/product.aspx?isbn=0672317354']

    'title': [u'Sams Teach Yourself Python in 24 Hours']}
    2016-04-05 16:39:45+0800 [dmoz] DEBUG: Scraped from <200 http://www.dmoz.org/Com
    puters/Programming/Languages/Python/Books/>
    {'desc': [u' ',
    u' - By David Me
    rtz; Addison Wesley. Book in progress, full text, ASCII format. Asks for feedbac
    k. [author website, Gnosis Software, Inc.]
    ',
    u' '],
    'link': [u'http://gnosis.cx/TPiP/'],
    'title': [u'Text Processing in Python']}
    2016-04-05 16:39:45+0800 [dmoz] DEBUG: Scraped from <200 http://www.dmoz.org/Com
    puters/Programming/Languages/Python/Books/>
    {'desc': [u' ',
    u' - By Sean McG
    rath; Prentice Hall PTR, 2000, ISBN 0130211192, has CD-ROM. Methods to build XML
    applications fast, Python tutorial, DOM and SAX, new Pyxie open source XML proc
    essing library. [Prentice Hall PTR]
    ',
    u' '],
    'link': [u'http://www.informit.com/store/product.aspx?isbn=0130211192']
    ,
    'title': [u'XML Processing with Python']}
    2016-04-05 16:39:45+0800 [dmoz] INFO: Closing spider (finished)
    2016-04-05 16:39:45+0800 [dmoz] INFO: Stored json feed (31 items) in: items.json

    2016-04-05 16:39:45+0800 [dmoz] INFO: Dumping spider stats:
    {'downloader/request_bytes': 486,
    'downloader/request_count': 2,
    'downloader/request_method_count/GET': 2,
    'downloader/response_bytes': 16366,
    'downloader/response_count': 2,
    'downloader/response_status_count/200': 2,
    'finish_reason': 'finished',
    'finish_time': datetime.datetime(2016, 4, 5, 8, 39, 45, 936000),
    'item_scraped_count': 31,
    'scheduler/memory_enqueued': 2,
    'start_time': datetime.datetime(2016, 4, 5, 8, 39, 45, 112000)}
    2016-04-05 16:39:45+0800 [dmoz] INFO: Spider closed (finished)
    2016-04-05 16:39:45+0800 [scrapy] INFO: Dumping global stats:
    {}

    That generates an items.json file containing all the scraped items, serialized in JSON.
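
    Other serialization formats are available through the same mechanism; for example (these rely on the exporters that ship with Scrapy's feed exports under default settings):

    E:\tutorial>scrapy crawl dmoz -o items.csv -t csv
    E:\tutorial>scrapy crawl dmoz -o items.xml -t xml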

    In small projects like this tutorial, that should be enough. However, if you want to perform more complex things with the scraped items, you can write an Item Pipeline. A placeholder file for item pipelines was created along with the project, in tutorial/pipelines.py, but you don't need to implement any pipeline if you only want to store the scraped items.
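
    If you do want one, a minimal sketch of a pipeline could look like the following (the missing-link filter here is only an illustration, not part of this tutorial); the pipeline is enabled by listing it in the project's settings.py:

    # tutorial/pipelines.py
    from scrapy.exceptions import DropItem

    class TutorialPipeline(object):
        def process_item(self, item, spider):
            # Drop entries that came back without a link; otherwise pass the
            # item on to the next pipeline stage (or to the feed export).
            if not item.get('link'):
                raise DropItem("Missing link in %s" % item)
            return item

    # settings.py (Scrapy 0.14 expects a list here)
    ITEM_PIPELINES = ['tutorial.pipelines.TutorialPipeline']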



    Closing remarks

    This tutorial gives a brief introduction to using Scrapy; many other features are not covered here.

    Source code download: http://files.cnblogs.com/files/wicub/tutorial.rar

    References:

    http://doc.scrapy.org/en/0.12/topics/selectors.html

    http://scrapy-chs.readthedocs.org/zh_CN/1.0/intro/tutorial.html#id2

    http://www.cnblogs.com/txw1958/archive/2012/07/16/scrapy-tutorial.html

    Original post: https://www.cnblogs.com/wicub/p/5355661.html