  • Python crawler in practice: scraping information for every stock listed on the Shanghai and Shenzhen exchanges

    Two websites are used:
    1. A page listing all the stock codes (for stocks on the Shanghai and Shenzhen exchanges):
    https://www.banban.cn/gupiao/list_sz.html


    2. A page with detailed information on a single stock:
    https://gupiao.baidu.com/stock/<stock code>.html
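    Putting the two pages together: each link on the list page carries a six-digit stock code in its href, and that code is substituted into the Baidu detail URL. A minimal sketch of the extraction and URL construction (the hrefs below are made-up examples, not real scraped data):

```python
import re

# Hypothetical hrefs as they might appear on the list page
hrefs = [
    "https://www.banban.cn/gupiao/sz000001.html",
    "/gupiao/list_sh.html",          # no 6-digit run -> skipped
    "https://www.banban.cn/gupiao/sz300750.html",
]

codes = []
for href in hrefs:
    m = re.findall(r"\d{6}", href)   # a stock code is six consecutive digits
    if m:
        codes.append(m[0])

# Build each stock's detail URL the same way getStockInfo() does
urls = ["https://gupiao.baidu.com/stock/sz" + c + ".html" for c in codes]
print(codes)   # ['000001', '300750']
```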

    
    import requests
    from bs4 import BeautifulSoup
    import traceback
    import re
    
    # Fetch a page and return its text ("" on any error)
    def getHTMLText(url, code="utf-8"):
        try:
            r = requests.get(url, timeout=30)
            r.raise_for_status()
            r.encoding = code
            return r.text
        except:
            return ""
    
    
    # Collect every stock code from the list page into lst
    def getStockList(lst, stockURL):
        html = getHTMLText(stockURL, "GB2312")
        soup = BeautifulSoup(html, 'html.parser')
        a = soup.find_all('a')
        for i in a:
            try:
                href = i.attrs['href']
                lst.append(re.findall(r"\d{6}", href)[0])  # a stock code is 6 digits
            except:
                continue
    
    
    # Fetch each stock's page and append its info to the output file
    def getStockInfo(lst, stockURL, fpath):
        count = 0
        for stock in lst:
            url = stockURL + "sz" + stock + ".html"  # URL of this stock's detail page
            html = getHTMLText(url)
            try:
                if html == "":
                    continue
                infoDict = {}
                soup = BeautifulSoup(html, 'html.parser')
                stockInfo = soup.find('div', attrs={'class': 'stock-bets'})

                name = stockInfo.find_all(attrs={'class': 'bets-name'})[0]
                infoDict.update({'Stock name': name.text.split()[0]})

                # Each <dt> label is paired with the <dd> value at the same index
                keyList = stockInfo.find_all('dt')
                valueList = stockInfo.find_all('dd')
                for i in range(len(keyList)):
                    key = keyList[i].text
                    val = valueList[i].text
                    infoDict[key] = val

                # Save to disk and show a progress indicator
                with open(fpath, 'a', encoding='utf-8') as f:
                    f.write(str(infoDict) + '\n')
                    count = count + 1
                    print("\rCurrent progress: {:.2f}%".format(count * 100 / len(lst)), end="")
            except:
                traceback.print_exc()
                count = count + 1
                print("\rCurrent progress: {:.2f}%".format(count * 100 / len(lst)), end="")
                continue
    
    
    def main():
        stock_list_url = 'https://www.banban.cn/gupiao/list_sz.html'
        stock_info_url = 'https://gupiao.baidu.com/stock/'
        output_file = 'D:/BaiduStockInfo.txt'
        slist = []
        getStockList(slist, stock_list_url)
        getStockInfo(slist, stock_info_url, output_file)


    if __name__ == '__main__':
        main()
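    The script writes one record per line as the str() of a dict. Assuming that format, each line can be parsed back into a dict with ast.literal_eval from the standard library. A sketch on a single hard-coded line with hypothetical field names, rather than the real output file:

```python
import ast

# One line as getStockInfo() might write it (hypothetical values)
line = "{'Stock name': 'Ping An Bank', 'Open': '10.50', 'Volume': '891500'}"

record = ast.literal_eval(line)  # safely evaluate the dict literal
print(record['Open'])  # 10.50
```

    Unlike eval(), ast.literal_eval only accepts Python literals, so a malformed or malicious line raises an error instead of executing code.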
  • Original post: https://www.cnblogs.com/Romantic-Chopin/p/12451036.html