  • Controlling items in Scrapy

    It is actually quite simple: all we want to change is where the data ends up being stored. Let's look at an example directly; you can then generalize from it.

    We take the review content on Dianping as the example, at: http://www.dianping.com/shop/77489519/review_more?pageno=1

    The data storage format changes from A to B.

    A: (screenshot of storage format A; image not preserved)

    Expanded, it looks like this: (screenshot; image not preserved)

    B: (screenshot of storage format B; image not preserved)

    Essentially, multiple items of the same type can be merged into one record instead of stored separately. Let's look at the code for each version:

    A:

    import scrapy
    from scrapy import Request

    from ..items import PinglunItem  # item class defined in the project's items.py


    class GengduopinglunSpider(scrapy.Spider):
        name = 'gengduopinglun'
        start_urls = ['http://www.dianping.com/shop/77489519/review_more?pageno=1']

        def parse(self, response):
            item = PinglunItem()
            comment = item['comment'] if 'comment' in item else []
            # Collect every review text on the current page.
            for i in response.xpath('//div[@class="content"]'):
                for j in i.xpath('.//div[@class="J_brief-cont"]/text()').extract():
                    comment.append(j.strip())
            item['comment'] = comment
            next_page = response.xpath(
                '//div[@class="Pages"]/div[@class="Pages"]/a[@class="NextPage"]/@href').extract_first()
            item['_id'] = next_page
            # item['_id'] = 'onlyone'
            if next_page is not None:
                next_page = response.urljoin(next_page)
                # yield Request(next_page, callback=self.shop_comment, meta={'item': item})
                yield Request(next_page, callback=self.parse)
            # One item is yielded per page, so every page becomes its own record.
            yield item

    B:

    import scrapy
    from scrapy import Request

    from ..items import PinglunItem  # item class defined in the project's items.py


    class GengduopinglunSpider(scrapy.Spider):
        name = 'gengduopinglun'
        start_urls = ['http://www.dianping.com/shop/77489519/review_more?pageno=1']

        def parse(self, response):
            item = PinglunItem()
            comment = item['comment'] if 'comment' in item else []
            for i in response.xpath('//div[@class="content"]'):
                for j in i.xpath('.//div[@class="J_brief-cont"]/text()').extract():
                    comment.append(j.strip())
            item['comment'] = comment
            next_page = response.xpath(
                '//div[@class="Pages"]/div[@class="Pages"]/a[@class="NextPage"]/@href').extract_first()
            # item['_id'] = next_page
            item['_id'] = 'onlyone'
            if next_page is not None:
                next_page = response.urljoin(next_page)
                # Pass the partially filled item along via meta instead of yielding it now.
                yield Request(next_page, callback=self.shop_comment, meta={'item': item})
                # yield Request(next_page, callback=self.parse)
            # yield item

        def shop_comment(self, response):
            # Keep appending to the same item that was passed through meta.
            item = response.meta['item']
            comment = item['comment'] if 'comment' in item else []
            for i in response.xpath('//div[@class="content"]'):
                for j in i.xpath('.//div[@class="J_brief-cont"]/text()').extract():
                    comment.append(j.strip())
            item['comment'] = comment
            next_page = response.xpath(
                '//div[@class="Pages"]/div[@class="Pages"]/a[@class="NextPage"]/@href').extract_first()
            if next_page is not None:
                next_page = response.urljoin(next_page)
                yield Request(next_page, callback=self.shop_comment, meta={'item': item})
            # Yield the accumulated item (same fixed _id each time, so the
            # stored record is the merged one).
            yield item

    There is duplicated code in B; that does not matter here, it is only a demonstration. Pay close attention to the difference between the two yields.

    The above only demonstrates how yield is used in Scrapy to control items; the pipeline and settings are not shown.
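    The difference between the two yields can be modeled with a toy, pure-Python sketch, no Scrapy required: a tiny "engine" loop follows yielded requests and collects yielded items. Everything here (`PAGES`, `parse_a`, `parse_b`, `crawl`) is an invented stand-in for the real site, the two spiders, and Scrapy's scheduler, assumed purely for illustration.

    ```python
    # Fake site: page number -> review texts on that page.
    PAGES = {1: ["good", "tasty"], 2: ["slow"], 3: ["cheap"]}

    def parse_a(pageno, carried=None):
        """Pattern A: build a fresh item per page and yield it every time."""
        item = {"_id": pageno, "comment": list(PAGES[pageno])}
        if pageno + 1 in PAGES:
            # Like: yield Request(next_page, callback=self.parse)
            yield ("request", pageno + 1, None)
        yield ("item", item)

    def parse_b(pageno, carried=None):
        """Pattern B: accumulate into one item carried along (like meta),
        and yield the item only when there is no next page."""
        item = carried or {"_id": "onlyone", "comment": []}
        item["comment"].extend(PAGES[pageno])
        if pageno + 1 in PAGES:
            # Like: yield Request(next_page, callback=self.shop_comment, meta={'item': item})
            yield ("request", pageno + 1, item)
        else:
            yield ("item", item)

    def crawl(callback):
        """Toy engine loop: schedule requests, collect items."""
        items, queue = [], [(1, None)]
        while queue:
            pageno, carried = queue.pop()
            for out in callback(pageno, carried):
                if out[0] == "request":
                    queue.append((out[1], out[2]))
                else:
                    items.append(out[1])
        return items

    print(len(crawl(parse_a)))  # 3 -- one item per page
    print(len(crawl(parse_b)))  # 1 -- a single merged item
    ```

    Pattern A produces one record per page; pattern B threads one mutable item through every callback and emits a single merged record at the end, which is exactly the A-to-B change described above.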

  • Original article: https://www.cnblogs.com/dahu-daqing/p/7769040.html