A web crawler (also known as a web spider or web robot, and in the FOAF community more often called a web chaser) is a program or script that automatically fetches information from the World Wide Web according to a set of rules. Other, less common names include ant, automatic indexer, emulator, and worm.
Requests
Requests is an Apache2-licensed HTTP library written in Python. It is a high-level wrapper built on top of Python's built-in modules, which makes issuing network requests far more pleasant: with Requests you can easily perform essentially any operation a browser can.
The Python standard library provides urllib, urllib2, httplib, and other modules for making HTTP requests, but their APIs are clumsy. They were built for a different era and a different internet, and they require an enormous amount of work, including overriding various methods, to accomplish even the simplest tasks.
Example:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import urllib2
import json
import cookielib

def urllib2_request(url, method='GET', cookie='', headers=None, data=None):
    """
    :param url: request URL
    :param method: HTTP method -- GET, POST, DELETE, PUT, HEAD ...
    :param cookie: cookie string to send, e.g. cookie='k1=v1;k2=v2'
    :param headers: HTTP request headers, e.g. headers={'Content-Type': 'application/json; charset=UTF-8'}
    :param data: data to send; for GET, pass query parameters, e.g. data={'data': 'v1'}
    :return: a tuple of (response body string, CookieJar object);
             the CookieJar can be iterated with a for loop:
                 for ck in cookie_jar:
                     print(ck.name, ck.value)
    """
    if headers is None:
        headers = {}
    if data:
        data = json.dumps(data)

    cookie_jar = cookielib.CookieJar()
    handler = urllib2.HTTPCookieProcessor(cookie_jar)
    opener = urllib2.build_opener(handler)
    if cookie:
        opener.addheaders.append(['Cookie', cookie])

    request = urllib2.Request(url=url, data=data, headers=headers)
    request.get_method = lambda: method
    response = opener.open(request)
    origin = response.read()
    return (origin, cookie_jar)

# GET
result = urllib2_request('http://www.baidu.com/', method='GET')

# POST
result2 = urllib2_request('http://www.baidu.com/', method='POST', data={'k1': 'v1'})

# PUT
result3 = urllib2_request('http://www.baidu.com/', method='PUT', data={'k1': 'v1'})
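The example above is Python 2 code. On Python 3, urllib2 and cookielib no longer exist; they were merged into urllib.request and http.cookiejar. A rough Python 3 sketch of the same helper (the JSON body encoding and cookie handling follow the original example; this is illustrative, not production-hardened):

```python
import json
import urllib.request
from http.cookiejar import CookieJar

def urllib_request(url, method='GET', cookie='', headers=None, data=None):
    """Rough Python 3 port of the urllib2 helper above.

    Returns a tuple of (response body bytes, CookieJar object).
    """
    headers = dict(headers or {})
    if cookie:
        headers['Cookie'] = cookie
    # Encode the payload as a JSON body, as the original example does
    body = json.dumps(data).encode('utf-8') if data else None

    cookie_jar = CookieJar()
    opener = urllib.request.build_opener(
        urllib.request.HTTPCookieProcessor(cookie_jar))
    request = urllib.request.Request(url=url, data=body,
                                     headers=headers, method=method)
    response = opener.open(request)
    return response.read(), cookie_jar
```

Note that Python 3's `urllib.request.Request` accepts a `method=` argument directly, so the `get_method` lambda override from the Python 2 version is no longer needed.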
1. GET requests
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import requests

# 1. GET request with no parameters
ret = requests.get('https://www.baidu.com/')
print(ret.url)
print(ret.text)

# 2. GET request with parameters
payload = {'key1': 'value1', 'key2': 'value2'}
ret = requests.get('http://httpbin.org/get', params=payload)
print(ret.url)
print(ret.text)
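Under the hood, requests URL-encodes the params dict into the query string you see in ret.url. A simplified sketch of that encoding using the standard library's urllib.parse.urlencode (not requests' actual implementation):

```python
from urllib.parse import urlencode

payload = {'key1': 'value1', 'key2': 'value2'}
query = urlencode(payload)                    # percent-encodes keys and values
full_url = 'http://httpbin.org/get' + '?' + query
print(full_url)  # http://httpbin.org/get?key1=value1&key2=value2
```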
Everything related to both the request and the response is encapsulated in the ret object.
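Some of the commonly used attributes on that response object (a quick sketch; running it requires network access to httpbin.org):

```python
import requests

ret = requests.get('http://httpbin.org/get')
print(ret.status_code)              # numeric HTTP status code, e.g. 200
print(ret.encoding)                 # encoding used to decode ret.text
print(ret.headers['Content-Type'])  # response headers (case-insensitive dict)
print(ret.cookies)                  # cookies the server set
data = ret.json()                   # parse a JSON response body into a dict
print(data['url'])
```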
2. POST requests
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import requests
import json

# Basic POST example
payload = {'k1': 'v1', 'k2': 'v2'}
res = requests.post('http://httpbin.org/post', data=payload)
print(res.text)

# Sending request headers along with the data
url = 'https://api.github.com/some/endpoint'
payload = {'some': 'data'}
headers = {'content-type': 'application/json'}

response = requests.post(url, data=json.dumps(payload), headers=headers)
print(response.text)
print(response.cookies)
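Note the difference between the two POST calls above: passing data=payload form-encodes the dict, while passing json.dumps(payload) (or using requests' json= parameter) produces a JSON body. The two encodings, sketched with only the standard library:

```python
import json
from urllib.parse import urlencode

payload = {'some': 'data'}

form_body = urlencode(payload)   # what data=payload sends in the body
json_body = json.dumps(payload)  # what data=json.dumps(payload) sends

print(form_body)  # some=data
print(json_body)  # {"some": "data"}
```

The Content-Type header should match the body: `application/x-www-form-urlencoded` for the first form, `application/json` for the second (requests sets these automatically for data= dicts and json= respectively).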
3. Other request methods
requests.get(url, params=None, **kwargs)
requests.post(url, data=None, json=None, **kwargs)
requests.put(url, data=None, **kwargs)
requests.head(url, **kwargs)
requests.delete(url, **kwargs)
requests.patch(url, data=None, **kwargs)
requests.options(url, **kwargs)

# All of the methods above are built on top of this one
requests.request(method, url, **kwargs)
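Each helper is just a thin wrapper that fixes the method argument and delegates to requests.request. A hypothetical mini version of this dispatch pattern (the names and the placeholder core are illustrative, not requests' real internals):

```python
def request(method, url, **kwargs):
    # Placeholder core: real requests opens a Session and sends the request here.
    return (method.upper(), url, kwargs)

def get(url, params=None, **kwargs):
    # Fix the method to GET and pass everything else through
    return request('get', url, params=params, **kwargs)

def post(url, data=None, json=None, **kwargs):
    # Fix the method to POST; data/json are the body arguments
    return request('post', url, data=data, json=json, **kwargs)

print(get('http://httpbin.org/get', params={'k': 'v'}))
# ('GET', 'http://httpbin.org/get', {'params': {'k': 'v'}})
```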
The requests module has already wrapped each of the common HTTP methods for you, so you can simply call the corresponding function. The full set of parameters these methods accept is:
def request(method, url, **kwargs):
    """Constructs and sends a :class:`Request <Request>`.

    :param method: method for the new :class:`Request` object.
    :param url: URL for the new :class:`Request` object.
    :param params: (optional) Dictionary or bytes to be sent in the query string for the :class:`Request`.
    :param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`.
    :param json: (optional) json data to send in the body of the :class:`Request`.
    :param headers: (optional) Dictionary of HTTP Headers to send with the :class:`Request`.
    :param cookies: (optional) Dict or CookieJar object to send with the :class:`Request`.
    :param files: (optional) Dictionary of ``'name': file-like-objects``
        (or ``{'name': file-tuple}``) for multipart encoding upload.
        ``file-tuple`` can be a 2-tuple ``('filename', fileobj)``,
        3-tuple ``('filename', fileobj, 'content_type')``
        or a 4-tuple ``('filename', fileobj, 'content_type', custom_headers)``,
        where ``'content-type'`` is a string defining the content type of the
        given file and ``custom_headers`` a dict-like object containing
        additional headers to add for the file.
    :param auth: (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth.
    :param timeout: (optional) How long to wait for the server to send data
        before giving up, as a float, or a :ref:`(connect timeout, read
        timeout) <timeouts>` tuple.
    :type timeout: float or tuple
    :param allow_redirects: (optional) Boolean. Set to True if POST/PUT/DELETE redirect following is allowed.
    :type allow_redirects: bool
    :param proxies: (optional) Dictionary mapping protocol to the URL of the proxy.
    :param verify: (optional) whether the SSL cert will be verified.
        A CA_BUNDLE path can also be provided. Defaults to ``True``.
    :param stream: (optional) if ``False``, the response content will be immediately downloaded.
    :param cert: (optional) if String, path to ssl client cert file (.pem).
        If Tuple, ('cert', 'key') pair.
    :return: :class:`Response <Response>` object
    :rtype: requests.Response

    Usage::

      >>> import requests
      >>> req = requests.request('GET', 'http://httpbin.org/get')
      <Response [200]>
    """

    # By using the 'with' statement we are sure the session is closed, thus we
    # avoid leaving sockets open which can trigger a ResourceWarning in some
    # cases, and look like a memory leak in others.
    with sessions.Session() as session:
        return session.request(method=method, url=url, **kwargs)
For more documentation on the requests module, see: http://cn.python-requests.org/zh_CN/latest/