  • Crawler basics: the essential Requests library

    What is the Requests library

    Requests is an HTTP library written in Python, built on top of urllib and released under the Apache2 License. It is more convenient than urllib, saves a great deal of work, and fully covers HTTP testing needs. In one sentence: a simple, easy-to-use HTTP library implemented in Python.

    Installing Requests

    pip3 install requests

    Requests in detail

    • An introductory example
    import requests
    response = requests.get('https://www.baidu.com')
    print(type(response)) #<class 'requests.models.Response'>
    print(response.status_code) #200
    print(type(response.text)) #<class 'str'>
    print(response.text) #the response body, i.e. the returned HTML page
    print(response.cookies) #<RequestsCookieJar[<Cookie  BDORZ=27315 for .baidu.com/>]>
    • The various request methods
    import requests
    requests.post('http://httpbin.org/post')
    requests.put('http://httpbin.org/put')
    requests.delete('http://httpbin.org/delete')
    requests.head('http://httpbin.org/get')
    requests.options('http://httpbin.org/get')
    • Making requests

    1. Basic usage

    import requests
    #response = requests.get('http://www.baidu.com')
    response = requests.get('http://httpbin.org/get')
    print(response.text)

    2. GET requests with parameters

    import requests
    response = requests.get("http://httpbin.org/get?name=xiexie&age=22")
    print(response.text)

    Writing the query string by hand like this is rather clumsy. A cleaner approach is to pass the parameters to requests.get via a dict:

    import requests
    data = {
            'name':'xiexie',
            'age':89
            }
    response = requests.get('http://httpbin.org/get',params=data)
    print(response.text)
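    To see exactly what URL a params dict produces, you can build the request without sending it; Request and PreparedRequest are part of the public requests API, so this runs offline:

```python
import requests

# build the request object without sending it over the network
req = requests.Request('GET', 'http://httpbin.org/get',
                       params={'name': 'xiexie', 'age': 89})
prepared = req.prepare()

# the params dict has been URL-encoded into the query string
print(prepared.url)  # http://httpbin.org/get?name=xiexie&age=89
```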

    3. Parsing JSON

    import requests
    response = requests.get('http://httpbin.org/get')
    print(response.text)
    print(response.json())
    print(type(response.json()))

    This is commonly used when handling AJAX responses.
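    Under the hood, response.json() is essentially json.loads applied to the response body. A minimal offline sketch of the equivalence (the body below is a truncated, made-up httpbin-style response):

```python
import json

# a body like the one http://httpbin.org/get returns (values are hypothetical)
text = '{"args": {}, "origin": "127.0.0.1", "url": "http://httpbin.org/get"}'

data = json.loads(text)  # what response.json() does internally
print(type(data))        # <class 'dict'>
print(data['origin'])    # 127.0.0.1
```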

    4. Fetching binary data

    import requests
    response = requests.get('https://github.com/favicon.ico')
    print(type(response.text),type(response.content))
    print(response.text)
    print(response.content)

    response.text is of type str, while response.content is the raw binary stream (bytes).

    Saving the binary stream to disk works the same way for images, video, and audio:

    import requests
    response = requests.get('http://github.com/favicon.ico')
    with open('favicon.ico','wb') as f:
        f.write(response.content) # the with statement closes the file automatically

    5. Adding headers. For a crawler, headers are crucial: play the role all the way, or the server will identify you and block the request.

    import requests
    response = requests.get('https://www.zhihu.com/explore')
    print(response.text)

    Without headers, the request simply gets back 400 Bad Request and nothing can be scraped; adding headers as below makes the crawl work:

    import requests
    headers = {
            'user-agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36'
            }
    response = requests.get('https://www.zhihu.com/explore',headers=headers)
    print(response.text)

    6. Basic POST request; the form data needs to be constructed

    import requests
    data = {'name':'xiexie','age':33} # pass a dict as the form data
    response = requests.post('http://httpbin.org/post',data=data)
    print(response.text)
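    Besides data= (form-encoded), requests can also send a JSON body through the json= parameter. Preparing both requests shows the difference in Content-Type without touching the network:

```python
import requests

form = requests.Request('POST', 'http://httpbin.org/post',
                        data={'name': 'xiexie', 'age': 33}).prepare()
as_json = requests.Request('POST', 'http://httpbin.org/post',
                           json={'name': 'xiexie', 'age': 33}).prepare()

print(form.headers['Content-Type'])     # application/x-www-form-urlencoded
print(as_json.headers['Content-Type'])  # application/json
```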

    The response object in detail

    • Response attributes
    import requests
    response = requests.get('http://www.jianshu.com')
    print(type(response.status_code),response.status_code)
    print(type(response.headers),response.headers)
    print(type(response.cookies),response.cookies)
    print(type(response.url),response.url)
    print(type(response.history),response.history)
    import requests
    response = requests.get('http://jianshu.com')
    if response.status_code == 403:
        print('Forbidden!') # jianshu.com rejects requests carrying the default User-Agent
    else:
        exit()
    • Advanced requests operations

    1. File upload

    import requests
    files = {'file':open('favicon.ico','rb')}
    response = requests.post('http://httpbin.org/post',files=files)
    print(response.text)
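    You can also control the uploaded file's name and MIME type by passing a (filename, fileobj, content_type) tuple. Preparing the request offline shows that a multipart body is built (io.BytesIO stands in for a real favicon.ico here):

```python
import io
import requests

# (filename, file object, content type) -- the bytes are a stand-in for a real file
files = {'file': ('favicon.ico', io.BytesIO(b'\x00\x00\x01\x00'), 'image/x-icon')}
prepared = requests.Request('POST', 'http://httpbin.org/post', files=files).prepare()

# requests builds a multipart/form-data body with a generated boundary
print(prepared.headers['Content-Type'])
```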

    2. Getting cookies

    import requests
    response = requests.get('https://www.baidu.com')
    print(response.cookies) # response.cookies is a RequestsCookieJar; it iterates like a dict
    for key,value in response.cookies.items():
        print(key + "=" + value)
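    Cookies can also be sent to the server by passing a dict as the cookies= argument; preparing the request shows that they end up in the Cookie header:

```python
import requests

prepared = requests.Request('GET', 'http://httpbin.org/cookies',
                            cookies={'number': '1234567'}).prepare()

# the dict was converted into a standard Cookie request header
print(prepared.headers['Cookie'])  # number=1234567
```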

    3. Session persistence, used to simulate staying logged in.

    import requests
    requests.get('http://httpbin.org/cookies/set/number/1234567')
    response = requests.get('http://httpbin.org/cookies')
    print(response.text)

    Here the first call sets a cookie via /cookies/set, and the intent is for the next requests.get to return that cookie, but the returned cookies are actually empty. The two requests.get calls are independent of each other; it is as if the pages were opened in two different browsers.

    To get back the cookie that was just set, the session must be maintained. The following code maintains a session, as if both pages were opened in the same browser.

    import requests
    s = requests.Session()
    s.get('http://httpbin.org/cookies/set/number/1234567')
    response = s.get('http://httpbin.org/cookies')
    print(response.text)

    Output:

    {
        "cookies": {
            "number": "1234567"
        }
    }

    • Certificate verification

    Sometimes an HTTPS site presents a certificate that fails verification, and the resulting SSL error aborts the program. To prevent this, use the verify parameter.

    import requests
    from requests.packages import urllib3
    urllib3.disable_warnings() # calling disable_warnings() on the bundled urllib3 suppresses the warning messages
    response = requests.get('https://www.12306.cn',verify=False)
    print(response.status_code)

    You can also supply a client certificate and key manually, which likewise avoids the error (the paths below are placeholders):

    import requests
    response = requests.get('https://www.12306.cn',cert=('/path/server.crt','/path/key')) # cert takes a (cert, key) tuple
    print(response.status_code)
    • Proxy settings
    import requests
    proxies = {
        "http":"http://127.0.0.1:9998",
        "https":"https://127.0.0.1:9998",
    }
    response = requests.get("https://www.taobao.com",proxies=proxies)
    print(response.status_code)

    A proxy that requires a username and password:

    import requests
    proxies = {
        "http":"http://user:password@127.0.0.1:9998",
        }
    response = requests.get("https://www.taobao.com",proxies=proxies)
    print(response.status_code)

    How do you use a SOCKS proxy, such as the kind SSR provides?

    Install support first: pip3 install 'requests[socks]'

    import requests
    proxies = {
        "http":"socks5://127.0.0.1:9998",
        "https":"socks5://127.0.0.1:9998",
    }
    response = requests.get("https://www.taobao.com",proxies=proxies)
    print(response.status_code)
    • Timeout settings
    import requests
    try:
        response = requests.get("http://httpbin.org/get",timeout=1)
        print(response.status_code)
    except requests.ReadTimeout:
        print('Timeout')
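    timeout also accepts a (connect, read) tuple when you want to bound the two phases separately. The sketch below points at 10.255.255.1, a non-routable placeholder address, so the connection phase is expected to fail:

```python
import requests

try:
    # 0.5s to establish the connection, 1s to read the response
    requests.get('http://10.255.255.1', timeout=(0.5, 1))
    result = 'ok'
except requests.exceptions.ConnectTimeout:
    result = 'connect timeout'
except requests.exceptions.RequestException:
    result = 'some other request error'
print(result)
```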
    • Authentication
    import requests
    from requests.auth import HTTPBasicAuth
    response = requests.get("http://httpbin.org/get",auth=HTTPBasicAuth('user','123'))
    print(response.status_code)

    A simpler shorthand is to pass a (user, password) tuple

    import requests
    response = requests.get("http://httpbin.org/get",auth=('user','123'))
    print(response.status_code)
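    The HTTPBasicAuth object and the (user, password) tuple shorthand produce the same Authorization header, which can be checked without a network call:

```python
import requests
from requests.auth import HTTPBasicAuth

explicit = requests.Request('GET', 'http://httpbin.org/get',
                            auth=HTTPBasicAuth('user', '123')).prepare()
shorthand = requests.Request('GET', 'http://httpbin.org/get',
                             auth=('user', '123')).prepare()

# both carry Basic auth: base64 of "user:123"
print(explicit.headers['Authorization'])   # Basic dXNlcjoxMjM=
print(shorthand.headers['Authorization'])  # Basic dXNlcjoxMjM=
```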
    • Exception handling, which is also essential for a crawler
    import requests
    from requests.exceptions import ReadTimeout,ConnectionError,RequestException
    try:
        response = requests.get("http://httpbin.org/get",timeout=0.2)
        print(response.status_code)
    except ReadTimeout:
        print('Timeout')
    except ConnectionError:
        print("Con error")
    except RequestException:
        print('Error')
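    The except clauses above go from most to least specific, which matters because the requests exceptions form a hierarchy; this can be checked directly:

```python
from requests.exceptions import (ConnectionError, ReadTimeout,
                                 RequestException, Timeout)

# ReadTimeout and ConnectionError are both RequestException subclasses,
# so the catch-all RequestException must come last
print(issubclass(ReadTimeout, Timeout))               # True
print(issubclass(Timeout, RequestException))          # True
print(issubclass(ConnectionError, RequestException))  # True
```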
  • Original source: https://www.cnblogs.com/x00479/p/14249699.html