  • Stock Data Scrapy Crawler Example (personally tested and working)

    Steps:

    Step 1: Create the project and the Spider template

    • scrapy startproject BaiduStocks
    • cd BaiduStocks
    • scrapy genspider stocks baidu.com
    • Then edit spiders/stocks.py

    Complete this step on your own~
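
    For reference, the stub that scrapy genspider creates looks roughly like
    this (exact contents vary with the Scrapy version):

    # -*- coding: utf-8 -*-
    import scrapy


    class StocksSpider(scrapy.Spider):
        name = 'stocks'
        allowed_domains = ['baidu.com']
        start_urls = ['http://baidu.com/']

        def parse(self, response):
            pass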

    Step 2: Write the Spider

    • Configure the stocks.py file
    • Modify the handling of the returned pages
    • Modify the handling of crawl requests for newly found URLs (stocks.py)
    # -*- coding: utf-8 -*-
    import scrapy
    import re
    
    
    class StocksSpider(scrapy.Spider):
        name = 'stocks'
        start_urls = ['http://quote.eastmoney.com/stock_list.html']
    
        def parse(self, response):
            # Grab every link on the list page and keep only hrefs that
            # contain a stock code such as sh600000 or sz000001.
            for href in response.css('a::attr(href)').extract():
                try:
                    stock = re.findall(r"[s][hz]\d{6}", href)[0]
                    url = 'http://gu.qq.com/' + stock + '/gp'
                    yield scrapy.Request(url, callback=self.parse_stock)
                except IndexError:
                    continue
    
        def parse_stock(self, response):
            infoDict = {}
            stockName = response.css('.title_bg')
            stockInfo = response.css('.col-2.fr')
            name = stockName.css('.col-1-1').extract()[0]
            code = stockName.css('.col-1-2').extract()[0]
            info = stockInfo.css('li').extract()
            for i in info[:13]:
                # Between each '>' and the next '<': piece [1] is the field
                # name, piece [3] is its value.
                key = re.findall('>.*?<', i)[1][1:-1]
                key = key.replace('\u2003', '')
                key = key.replace('\xa0', '')
                try:
                    val = re.findall('>.*?<', i)[3][1:-1]
                except IndexError:
                    val = '--'
                infoDict[key] = val
    
            infoDict.update({'股票名称': re.findall('>.*<', name)[0][1:-1] + 
                                     re.findall('>.*<', code)[0][1:-1]})
            yield infoDict

    The key = key.replace('\u2003', '') and key = key.replace('\xa0', '') lines strip useless characters from the scraped strings: entities such as &nbsp; come out as \xa0 (and the CJK em-space as \u2003) after decoding, so we replace them to get cleaner strings.
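
    To see what these patterns do, here is a minimal standalone sketch using a
    made-up <li> fragment shaped like the ones parse_stock() receives:

    import re

    # Hypothetical <li> fragment in the shape parse_stock() expects.
    li_html = '<li><span>最高\u2003</span><span>10.50</span></li>'

    # Every piece between a '>' and the next '<' (non-greedy):
    # ['><', '>最高\u2003<', '><', '>10.50<', '><']
    parts = re.findall('>.*?<', li_html)
    key = parts[1][1:-1].replace('\u2003', '').replace('\xa0', '')
    val = parts[3][1:-1]
    print(key, val)  # 最高 10.50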

    Step 3: Write the ITEM Pipelines

    • Configure the pipelines.py file
    • Define a class that processes the scraped items (Scrapy Items) (pipelines.py)
    # -*- coding: utf-8 -*-
    
    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
    
    
    class ScrapyGupiaoPipeline:
        def open_spider(self, spider):
            # Open the output file once, when the spider starts.
            # utf-8 so the Chinese field names are written correctly.
            self.f = open('gupiao.txt', 'w', encoding='utf-8')
    
        def close_spider(self, spider):
            self.f.close()
    
        def process_item(self, item, spider):
            try:
                # One dict per line in the output file.
                line = str(dict(item)) + '\n'
                self.f.write(line)
            except Exception:
                pass
            return item
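
    As an aside, for a simple dump like this you could skip the custom pipeline
    (and step 4 below) and use Scrapy's built-in feed exports instead:

    scrapy crawl stocks -o gupiao.json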

    Step 4: Configure the ITEM_PIPELINES option (settings.py)

    # Configure item pipelines
    # See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
    ITEM_PIPELINES = {
       'BaiduStocks.pipelines.ScrapyGupiaoPipeline': 300,
    }
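
    With the pipeline registered, run the crawler from the project root; each
    scraped item ends up as one line in gupiao.txt:

    scrapy crawl stocks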