Python Web Crawling and Function Debugging

    Part 1: Function Debugging

    Debug it with the try...except we learned earlier:

    def gameover(setA,setB):
        if setA==3 or setB==3:
            return True
        else:
            return False
    try:
        a=gameover(7,11)
        print(a)
    except:
        print("Error")
    

      Debugging complete~~~~

    The results are as follows

    Result for inputs 7, 8

    Result for inputs 3, 4

    When no arguments are passed, we get Error
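
    A minimal sketch that reproduces these three calls (the expected output in the comments is an assumption inferred from gameover's logic, since the original screenshots are not reproduced here):

    def gameover(setA, setB):
        # the game is over once either side reaches 3
        if setA == 3 or setB == 3:
            return True
        else:
            return False

    for args in [(7, 8), (3, 4), ()]:
        try:
            print(gameover(*args))
        except:
            print("Error")

    # Expected output:
    # False  <- gameover(7, 8): neither argument equals 3
    # True   <- gameover(3, 4): setA equals 3
    # Error  <- gameover(): missing arguments raise a TypeError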

    Part 2: Python Web Crawling

    The requests library is a concise and simple third-party library for handling HTTP requests.

    get() corresponds to HTTP GET and is the most common way to fetch a web page; you can pass the timeout=n parameter to set a timeout of n seconds for each request.

    text is the HTTP response content as a string (an attribute of the Response object, not a method), i.e., the page content corresponding to the url.

    content is the HTTP response content in binary (bytes) form.
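
    For example, a minimal request against the Sogou homepage used below (the 30-second timeout is only an illustrative value):

    import requests

    r = requests.get("https://123.sogou.com/", timeout=30)  # GET with a 30-second timeout
    r.raise_for_status()       # raise an exception for 4xx/5xx status codes
    r.encoding = 'utf-8'
    print(r.status_code)       # e.g. 200
    print(len(r.text))         # length of the decoded text, in characters
    print(len(r.content))      # length of the raw response body, in bytes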

    Open the Sogou homepage 20 times with requests:

    # -*- coding: utf-8 -*-
    """
    Created on Mon May 20 10:20:45 2019
    
    @author: guo'yu'yi
    """

    import requests
    try:
        for i in range(20):
            r = requests.get("https://123.sogou.com/")
            r.raise_for_status()
            r.encoding = 'utf-8'
            print(r)
            print(len(r.text))
            print(len(r.content))
    except:
        print("Error")

    The results are as follows:

    Fetching the Chinese university rankings

    Straight to the code:

    import requests
    from bs4 import BeautifulSoup
    import pandas
    # 1. Fetch the page content
    def getHTMLText(url):
        try:
            r = requests.get(url, timeout = 30)
            r.raise_for_status()
            r.encoding = 'utf-8'
            return r.text
        except Exception as e:
            print("Error:", e)
            return ""
    
    # 2. Parse the page and extract the useful data
    def fillTabelList(soup): # extract the table data
        tabel_list = []      # holds the whole table
        Tr = soup.find_all('tr')
        for tr in Tr:
            Td = tr.find_all('td')
            if len(Td) == 0:
                continue
            tr_list = [] # holds one row of data
            for td in Td:
                tr_list.append(td.string)
            tabel_list.append(tr_list)
        return tabel_list
    
    # 3. Display the data
    def PrintTableList(tabel_list, num):
        # print the first num rows
        print("{1:^2}{2:{0}^10}{3:{0}^5}{4:{0}^5}{5:{0}^8}".format(chr(12288), "排名", "学校名称", "省市", "总分", "生涯质量"))
        for i in range(num):
            text = tabel_list[i]
            print("{1:{0}^2}{2:{0}^10}{3:{0}^5}{4:{0}^8}{5:{0}^10}".format(chr(12288), *text))
    
    # 4. Save the data as a CSV file
    def saveAsCsv(filename, tabel_list):
        FormData = pandas.DataFrame(tabel_list)
        FormData.columns = ["排名", "学校名称", "省市", "总分", "生涯质量", "培养结果", "科研规模", "科研质量", "顶尖成果", "顶尖人才", "科技服务", "产学研合作", "成果转化"]
        FormData.to_csv(filename, encoding='utf-8', index=False)
    
    if __name__ == "__main__":
        url = "http://www.zuihaodaxue.cn/zuihaodaxuepaiming2016.html"
        html = getHTMLText(url)
        soup = BeautifulSoup(html, features="html.parser")
        data = fillTabelList(soup)
        #print(data)
        PrintTableList(data, 10)   # print the first 10 rows
        saveAsCsv(r"D:\python文件\daxuepaimingRank.csv", data)
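
    As a quick sanity check, the saved CSV can be read back with pandas (a small sketch, assuming the path above was writable):

    import pandas
    # read the exported rankings back and show the first rows
    df = pandas.read_csv(r"D:\python文件\daxuepaimingRank.csv", encoding='utf-8')
    print(df.head())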