Source code: link: http://pan.baidu.com/s/1dEK82hb password: 9flo
Create a project: `scrapy startproject tutorial`
Run a crawl: `scrapy crawl dmoz`
Crawl and save the output as JSON: `scrapy crawl dmoz -o items.json -t json`
Open an interactive shell against a page with `scrapy shell "URL"`.
Once it loads, you get a `response` object to inspect, e.g.:
`response.body`
`response.headers`
![](https://images2015.cnblogs.com/blog/986702/201705/986702-20170522124517023-51136378.jpg)
```
>>> response.xpath('//title')
>>> response.xpath('//title/text()').extract()
```
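The shell output itself only appears in the screenshots. As a rough stdlib illustration of what those two XPath queries match (Scrapy's selectors are built on lxml and return `Selector` lists rather than plain elements, but the expressions and the extraction step are analogous; the sample HTML below is made up):

```python
# Illustrative only: Scrapy uses its own Selector class; this sketch uses
# the standard library to show what '//title' and 'text()' select.
import xml.etree.ElementTree as ET

html = "<html><head><title>DMOZ Directory</title></head><body /></html>"
root = ET.fromstring(html)

# Analogue of response.xpath('//title') -- all <title> elements
titles = root.findall(".//title")

# Analogue of response.xpath('//title/text()').extract() -- their text
texts = [t.text for t in titles]
print(texts)  # ['DMOZ Directory']
```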
![](https://images2015.cnblogs.com/blog/986702/201705/986702-20170522124517398-406096169.png)
Edit the Item:
![](https://images2015.cnblogs.com/blog/986702/201705/986702-20170522124517663-976016819.png)
```python
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class DmozItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()
```
Edit the spider:
![](https://images2015.cnblogs.com/blog/986702/201705/986702-20170522124518054-953639895.png)
![](https://images2015.cnblogs.com/blog/986702/201705/986702-20170522124518820-1398665280.png)
Crawl and save the output as JSON:
![](https://images2015.cnblogs.com/blog/986702/201705/986702-20170522124519132-1249930718.png)
At this point a JSON file appears in the project root:
![](https://images2015.cnblogs.com/blog/986702/201705/986702-20170522124519476-1996629052.png)