  • 010 Python Web Crawling and Information Extraction: A Targeted Crawler for Stock Data

    [A] Introduction to the targeted stock-data crawler example

      Functional description

        Goal: obtain the names and trading information of all stocks listed on the Shanghai and Shenzhen stock exchanges

        Output: save the results to a file

      Candidate websites:

        1. Sina Finance: https://finance.sina.com.cn/

        2. Baidu Stocks: https://gupiao.baidu.com/stock/

      Choosing among the candidates:

        Selection principle: the stock information should be stored statically in the HTML page rather than generated by JavaScript code, and the site should impose no robots.txt restrictions

        Selection method: open the browser developer tools (F12) and inspect the page source

        Selection mindset: do not fixate on any single website; try several information sources

      Program design

        Step 1: obtain the list of stocks from East Money (eastmoney.com)

        Step 2: visit Baidu Stocks for each stock in the list and fetch its individual details

        Step 3: store the results in a file

    [B] Implementing the targeted stock-data crawler
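
        The static-vs-dynamic principle above can be sanity-checked programmatically: if the value you want already appears in the raw HTML returned by the server, the page can be crawled with requests alone. A minimal sketch (both HTML snippets are made-up illustrations, not real page source):

```python
import re

# Made-up snippets illustrating the selection principle: a value that is
# already present in the raw HTML can be crawled directly, while a value
# filled in later by JavaScript cannot.
static_html = '<dt>今开</dt><dd>28.90</dd>'        # value present in the source
dynamic_html = '<dt>今开</dt><dd id="open"></dd>'  # value injected by JS at runtime

def has_static_value(html):
    """Return True if some <dd> tag already contains a numeric value."""
    return bool(re.search(r'<dd[^>]*>\s*[\d.]+\s*</dd>', html))

print(has_static_value(static_html))   # True
print(has_static_value(dynamic_html))  # False
```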

    # CrawBaiduStocksA.py
    import requests
    from bs4 import BeautifulSoup
    import traceback
    import re
    
    
    def getHTMLText(url):
        try:
            r = requests.get(url)
            r.raise_for_status()
            r.encoding = r.apparent_encoding
            return r.text
        except:
            return ""
    
    
    def getStockList(lst, stockURL):
        html = getHTMLText(stockURL)
        soup = BeautifulSoup(html, 'html.parser')
        a = soup.find_all('a')
        for i in a:
            try:
                href = i.attrs['href']
                lst.append(re.findall(r"[s][hz]\d{6}", href)[0])
            except:
                continue
    
    
    def getStockInfo(lst, stockURL, fpath):
        for stock in lst:
            url = stockURL + stock + ".html"
            html = getHTMLText(url)
            try:
                if html == "":
                    continue
                infoDict = {}
                soup = BeautifulSoup(html, 'html.parser')
                stockInfo = soup.find('div', attrs={'class': 'stock-bets'})
    
                name = stockInfo.find_all(attrs={'class': 'bets-name'})[0]
                infoDict.update({'股票名称': name.text.split()[0]})
    
                keyList = stockInfo.find_all('dt')
                valueList = stockInfo.find_all('dd')
                for i in range(len(keyList)):
                    key = keyList[i].text
                    val = valueList[i].text
                    infoDict[key] = val
    
                with open(fpath, 'a', encoding='utf-8') as f:
                    f.write(str(infoDict) + '\n')
            except:
                traceback.print_exc()
                continue
    
    
    def main():
        stock_list_url = 'https://quote.eastmoney.com/stocklist.html'
        stock_info_url = 'https://gupiao.baidu.com/stock/'
        output_file = 'D:/BaiduStockInfo.txt'
        slist = []
        getStockList(slist, stock_list_url)
        getStockInfo(slist, stock_info_url, output_file)
    
    
    main()
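
    The heart of getStockList is the regular expression r"[s][hz]\d{6}": Shanghai codes begin with sh and Shenzhen codes with sz, each followed by six digits. The pattern can be exercised on its own; the example hrefs below are hypothetical, in the style of the East Money list page:

```python
import re

# Hypothetical hrefs of the kind found on the East Money stock-list page.
hrefs = [
    'https://quote.eastmoney.com/sh600000.html',  # Shanghai-listed stock
    'https://quote.eastmoney.com/sz000001.html',  # Shenzhen-listed stock
    'https://quote.eastmoney.com/about.html',     # no stock code to extract
]

codes = []
for href in hrefs:
    found = re.findall(r"[s][hz]\d{6}", href)  # 'sh' or 'sz' plus six digits
    if found:
        codes.append(found[0])

print(codes)  # ['sh600000', 'sz000001']
```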
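
    The key/value pairing of dt and dd tags in getStockInfo can likewise be checked in isolation. The fragment below is a made-up snippet in the style of the Baidu Stocks 'stock-bets' panel, not real page source:

```python
from bs4 import BeautifulSoup

# Hypothetical fragment in the style of the Baidu Stocks 'stock-bets' panel.
html = '''
<div class="stock-bets">
  <dl><dt>今开</dt><dd>28.90</dd></dl>
  <dl><dt>成交量</dt><dd>12.5万手</dd></dl>
</div>
'''

soup = BeautifulSoup(html, 'html.parser')
panel = soup.find('div', attrs={'class': 'stock-bets'})

# Pair each <dt> label with the <dd> value at the same position,
# the same idea as the keyList/valueList loop in getStockInfo.
info = dict(zip((dt.text for dt in panel.find_all('dt')),
                (dd.text for dd in panel.find_all('dd'))))

print(info)  # {'今开': '28.90', '成交量': '12.5万手'}
```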
  • Original article: https://www.cnblogs.com/carreyBlog/p/14018585.html