
    21.6. urllib.request — Extensible library for opening URLs

    Translated by Z.F.


    The urllib.request module defines functions and classes which help in opening URLs (mostly HTTP) in a complex world — basic and digest authentication, redirections, cookies and more.

    The urllib.request module defines the following functions:

    urllib.request.urlopen(url, data=None, [timeout, ]*, cafile=None, capath=None, cadefault=False, context=None)

    Open the URL url, which can be either a string or a Request object.

    data must be a bytes object specifying additional data to be sent to the server, or None if no such data is needed. data may also be an iterable object and in that case Content-Length value must be specified in the headers. Currently HTTP requests are the only ones that use data; the HTTP request will be a POST instead of a GET when the data parameter is provided.

    data should be a buffer in the standard application/x-www-form-urlencoded format. The urllib.parse.urlencode() function takes a mapping or sequence of 2-tuples and returns a string in this format. It should be encoded to bytes before being used as the data parameter. The charset parameter in Content-Type header may be used to specify the encoding. If charset parameter is not sent with the Content-Type header, the server following the HTTP 1.1 recommendation may assume that the data is encoded in ISO-8859-1 encoding. It is advisable to use charset parameter with encoding used in Content-Type header with the Request.
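
    For illustration, here is a minimal sketch (not part of the original text) of preparing such a data buffer; the query URL is a hypothetical placeholder:

    import urllib.parse
    import urllib.request

    # urlencode() returns a str; data must be bytes, so encode it explicitly.
    data = urllib.parse.urlencode({'spam': 1, 'eggs': 2}).encode('utf-8')
    # Supplying data turns the request into a POST (hypothetical URL):
    # f = urllib.request.urlopen('http://www.example.com/cgi-bin/query', data)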

    urllib.request module uses HTTP/1.1 and includes Connection:close header in its HTTP requests.

    The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if not specified, the global default timeout setting will be used). This actually only works for HTTP, HTTPS and FTP connections.
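
    A small sketch of using timeout (assumptions: example.com stands in for a real host, and the 5-second limit is arbitrary):

    import socket
    import urllib.request
    import urllib.error

    try:
        f = urllib.request.urlopen('http://www.example.com/', timeout=5)
    except urllib.error.URLError as exc:
        print('connection failed:', exc.reason)   # e.g. wraps socket.timeout
    except socket.timeout:
        print('timed out while reading the response')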

    If context is specified, it must be a ssl.SSLContext instance describing the various SSL options. See HTTPSConnection for more details.

    The optional cafile and capath parameters specify a set of trusted CA certificates for HTTPS requests. cafile should point to a single file containing a bundle of CA certificates, whereas capath should point to a directory of hashed certificate files. More information can be found in ssl.SSLContext.load_verify_locations().
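
    A hedged sketch of supplying a CA bundle (the certificate path is a placeholder; ssl.create_default_context() is one convenient way to build the context):

    import ssl
    import urllib.request

    # Either pass cafile directly ...
    # f = urllib.request.urlopen('https://www.example.com/', cafile='/path/to/ca-bundle.crt')

    # ... or build an SSLContext and pass it as context:
    ctx = ssl.create_default_context(cafile='/path/to/ca-bundle.crt')
    f = urllib.request.urlopen('https://www.example.com/', context=ctx)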

    The cadefault parameter is ignored.

    For http and https urls, this function returns a http.client.HTTPResponse object which has the following HTTPResponse Objects methods.

    For ftp, file, and data urls and requests explicitly handled by legacy URLopener and FancyURLopener classes, this function returns a urllib.response.addinfourl object which can work as a context manager and has methods such as the following:

    • geturl() — return the URL of the resource retrieved, commonly used to determine if a redirect was followed.
    • info() — return the meta-information of the page, such as headers, in the form of an email.message_from_string() instance (see Quick Reference to HTTP Headers).
    • getcode() — return the HTTP status code of the response.
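
    A short usage sketch of these methods (they are also available on the HTTPResponse objects returned for http/https URLs):

    import urllib.request

    with urllib.request.urlopen('http://www.python.org/') as f:
        print(f.geturl())                  # final URL, e.g. after a redirect
        print(f.getcode())                 # HTTP status code, e.g. 200
        print(f.info()['Content-Type'])    # response headers as a message object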

    Raises URLError on errors.

    Note that None may be returned if no handler handles the request (though the default installed global OpenerDirector uses UnknownHandler to ensure this never happens).

    In addition, if proxy settings are detected (for example, when a *_proxy environment variable like http_proxy is set), ProxyHandler is default installed and makes sure the requests are handled through the proxy.

    The legacy urllib.urlopen function from Python 2.6 and earlier has been discontinued; urllib.request.urlopen() corresponds to the old urllib2.urlopen. Proxy handling, which was done by passing a dictionary parameter to urllib.urlopen, can be obtained by using ProxyHandler objects.

    Changed in version 3.2: cafile and capath were added.

    Changed in version 3.2: HTTPS virtual hosts are now supported if possible (that is, if ssl.HAS_SNI is true).

    New in version 3.2: data can be an iterable object.

    Changed in version 3.3: cadefault was added.

    Changed in version 3.4.3: context was added.

    urllib.request.install_opener(opener)

    Install an OpenerDirector instance as the default global opener. Installing an opener is only necessary if you want urlopen to use that opener; otherwise, simply call OpenerDirector.open() instead of urlopen(). The code does not check for a real OpenerDirector, and any class with the appropriate interface will work.

    urllib.request.build_opener([handler, ...])

    Return an OpenerDirector instance, which chains the handlers in the order given. handlers can be either instances of BaseHandler, or subclasses of BaseHandler (in which case it must be possible to call the constructor without any parameters). Instances of the following classes will be in front of the handlers, unless the handlers contain them, instances of them or subclasses of them: ProxyHandler (if proxy settings are detected), UnknownHandler, HTTPHandler, HTTPDefaultErrorHandler, HTTPRedirectHandler, FTPHandler, FileHandler, HTTPErrorProcessor.

    If the Python installation has SSL support (i.e., if the ssl module can be imported), HTTPSHandler will also be added.

    A BaseHandler subclass may also change its handler_order attribute to modify its position in the handlers list.
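
    As a sketch (not from the original documentation), a pre-processing handler that logs every HTTP request and asks to run early in the chain; the class name and printed message are illustrative only:

    import urllib.request

    class LoggingProcessor(urllib.request.BaseHandler):
        handler_order = 100           # lower than the default 500, so it runs earlier

        def http_request(self, req):  # pre-process every HTTP request
            print('fetching', req.full_url)
            return req

    opener = urllib.request.build_opener(LoggingProcessor)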

    urllib.request.pathname2url(path)

    Convert the pathname path from the local syntax for a path to the form used in the path component of a URL. This does not produce a complete URL. The return value will already be quoted using the quote() function.

    urllib.request.url2pathname(path)

    Convert the path component path from a percent-encoded URL to the local syntax for a path. This does not accept a complete URL. This function uses unquote() to decode path.
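
    For example (output shown for a POSIX-style path; results differ on Windows):

    >>> from urllib.request import pathname2url, url2pathname
    >>> pathname2url('/tmp/some file.txt')
    '/tmp/some%20file.txt'
    >>> url2pathname('/tmp/some%20file.txt')
    '/tmp/some file.txt'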

    urllib.request.getproxies()

    This helper function returns a dictionary of scheme to proxy server URL mappings. It scans the environment for variables named <scheme>_proxy, in a case insensitive approach, for all operating systems first, and when it cannot find it, looks for proxy information from Mac OSX System Configuration for Mac OS X and Windows Systems Registry for Windows.

    The following classes are provided:

    class urllib.request.Request(url, data=None, headers={}, origin_req_host=None, unverifiable=False, method=None)

    This class is an abstraction of a URL request.

    url should be a string containing a valid URL.

    data must be a bytes object specifying additional data to send to the server, or None if no such data is needed. Currently HTTP requests are the only ones that use data; the HTTP request will be a POST instead of a GET when the data parameter is provided. data should be a buffer in the standard application/x-www-form-urlencoded format.

    The urllib.parse.urlencode() function takes a mapping or sequence of 2-tuples and returns a string in this format. It should be encoded to bytes before being used as the data parameter. The charset parameter in Content-Type header may be used to specify the encoding. If charset parameter is not sent with the Content-Type header, the server following the HTTP 1.1 recommendation may assume that the data is encoded in ISO-8859-1 encoding. It is advisable to use charset parameter with encoding used in Content-Type header with the Request.

    headers should be a dictionary, and will be treated as if add_header() was called with each key and value as arguments. This is often used to “spoof” the User-Agent header, which is used by a browser to identify itself – some HTTP servers only allow requests coming from common browsers as opposed to scripts. For example, Mozilla Firefox may identify itself as "Mozilla/5.0 (X11; U; Linux i686) Gecko/20071127 Firefox/2.0.0.11", while urllib's default user agent string is "Python-urllib/2.6" (on Python 2.6).

    An example of using Content-Type header with data argument would be sending a dictionary like {"Content-Type":" application/x-www-form-urlencoded;charset=utf-8"}
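
    Putting the pieces together, a hedged sketch of a Request carrying both a spoofed User-Agent and a Content-Type with charset (the URL and form body are placeholders):

    import urllib.request

    headers = {
        'User-Agent': 'Mozilla/5.0 (X11; U; Linux i686) Gecko/20071127 Firefox/2.0.0.11',
        'Content-Type': 'application/x-www-form-urlencoded;charset=utf-8',
    }
    req = urllib.request.Request('http://www.example.com/cgi-bin/query',
                                 data=b'spam=1&eggs=2', headers=headers)
    # f = urllib.request.urlopen(req)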

    The final two arguments are only of interest for correct handling of third-party HTTP cookies:

    origin_req_host should be the request-host of the origin transaction, as defined by RFC 2965. It defaults to http.cookiejar.request_host(self). This is the host name or IP address of the original request that was initiated by the user. For example, if the request is for an image in an HTML document, this should be the request-host of the request for the page containing the image.

    unverifiable should indicate whether the request is unverifiable, as defined by RFC 2965. It defaults to False. An unverifiable request is one whose URL the user did not have the option to approve. For example, if the request is for an image in an HTML document, and the user had no option to approve the automatic fetching of the image, this should be true.

    method should be a string that indicates the HTTP request method that will be used (e.g. 'HEAD'). If provided, its value is stored in the method attribute and is used by get_method(). Subclasses may indicate a default method by setting the method attribute in the class itself.
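
    A brief sketch of both ways of choosing the method (HEAD is just an example verb):

    import urllib.request

    # Per-instance, via the constructor argument:
    req = urllib.request.Request('http://www.example.com/', method='HEAD')
    print(req.get_method())        # 'HEAD'

    # Or as a class-level default in a subclass (3.4+):
    class HeadRequest(urllib.request.Request):
        method = 'HEAD'

    print(HeadRequest('http://www.example.com/').get_method())   # 'HEAD'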

    Changed in version 3.3: Request.method argument is added to the Request class.

    Changed in version 3.4: Default Request.method may be indicated at the class level.

    class urllib.request.OpenerDirector

    The OpenerDirector class opens URLs via BaseHandlers chained together. It manages the chaining of handlers, and recovery from errors.

    class urllib.request.BaseHandler

    This is the base class for all registered handlers — and handles only the simple mechanics of registration.

    class urllib.request.HTTPDefaultErrorHandler

    A class which defines a default handler for HTTP error responses; all responses are turned into HTTPError exceptions.

    class urllib.request.HTTPRedirectHandler

    A class to handle redirections.

    class urllib.request.HTTPCookieProcessor(cookiejar=None)

    A class to handle HTTP Cookies.

    class urllib.request.ProxyHandler(proxies=None)

    Cause requests to go through a proxy. If proxies is given, it must be a dictionary mapping protocol names to URLs of proxies. The default is to read the list of proxies from the environment variables <protocol>_proxy. If no proxy environment variables are set, then in a Windows environment proxy settings are obtained from the registry’s Internet Settings section, and in a Mac OS X environment proxy information is retrieved from the OS X System Configuration Framework.

    To disable autodetected proxy pass an empty dictionary.
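
    For instance (the proxy URL is a placeholder):

    import urllib.request

    # Route http requests through an explicit proxy ...
    proxy_support = urllib.request.ProxyHandler({'http': 'http://proxy.example.com:3128/'})
    opener = urllib.request.build_opener(proxy_support)

    # ... or pass an empty dictionary to disable proxy autodetection entirely.
    no_proxy_opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))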

    class urllib.request.HTTPPasswordMgr

    Keep a database of (realm, uri) -> (user, password) mappings.

    class urllib.request.HTTPPasswordMgrWithDefaultRealm

    Keep a database of (realm, uri) -> (user, password) mappings. A realm of None is considered a catch-all realm, which is searched if no other realm fits.

    class urllib.request.AbstractBasicAuthHandler(password_mgr=None)

    This is a mixin class that helps with HTTP authentication, both to the remote host and to a proxy. password_mgr, if given, should be something that is compatible with HTTPPasswordMgr; refer to section HTTPPasswordMgr Objects for information on the interface that must be supported.

    class urllib.request.HTTPBasicAuthHandler(password_mgr=None)

    Handle authentication with the remote host. password_mgr, if given, should be something that is compatible with HTTPPasswordMgr; refer to section HTTPPasswordMgr Objects for information on the interface that must be supported. HTTPBasicAuthHandler will raise a ValueError when presented with a wrong Authentication scheme.

    class urllib.request.ProxyBasicAuthHandler(password_mgr=None)

    Handle authentication with the proxy. password_mgr, if given, should be something that is compatible with HTTPPasswordMgr; refer to section HTTPPasswordMgr Objects for information on the interface that must be supported.

    class urllib.request.AbstractDigestAuthHandler(password_mgr=None)

    This is a mixin class that helps with HTTP authentication, both to the remote host and to a proxy. password_mgr, if given, should be something that is compatible with HTTPPasswordMgr; refer to section HTTPPasswordMgr Objects for information on the interface that must be supported.

    class urllib.request.HTTPDigestAuthHandler(password_mgr=None)

    Handle authentication with the remote host. password_mgr, if given, should be something that is compatible with HTTPPasswordMgr; refer to section HTTPPasswordMgr Objects for information on the interface that must be supported. When both Digest Authentication Handler and Basic Authentication Handler are both added, Digest Authentication is always tried first. If the Digest Authentication returns a 40x response again, it is sent to Basic Authentication handler to Handle. This Handler method will raise a ValueError when presented with an authentication scheme other than Digest or Basic.

    Changed in version 3.3: Raise ValueError on unsupported Authentication Scheme.

    class urllib.request.ProxyDigestAuthHandler(password_mgr=None)

    Handle authentication with the proxy. password_mgr, if given, should be something that is compatible with HTTPPasswordMgr; refer to section HTTPPasswordMgr Objects for information on the interface that must be supported.

    class urllib.request.HTTPHandler

    A class to handle opening of HTTP URLs.

    class urllib.request.HTTPSHandler(debuglevel=0, context=None, check_hostname=None)

    A class to handle opening of HTTPS URLs. context and check_hostname have the same meaning as in http.client.HTTPSConnection.

    Changed in version 3.2: context and check_hostname were added.

    class urllib.request.FileHandler

    Open local files.

    class urllib.request.DataHandler

    Open data URLs.

    New in version 3.4.

    class urllib.request.FTPHandler

    Open FTP URLs.

    class urllib.request.CacheFTPHandler

    Open FTP URLs, keeping a cache of open FTP connections to minimize delays.

    class urllib.request.UnknownHandler

    A catch-all class to handle unknown URLs.

    class urllib.request.HTTPErrorProcessor

    Process HTTP error responses.

    21.6.1. Request Objects

    The following methods describe Request's public interface, and so all may be overridden in subclasses. It also defines several public attributes that can be used by clients to inspect the parsed request.

    Request.full_url

    The original URL passed to the constructor.

    Changed in version 3.4.

    Request.full_url is a property with setter, getter and a deleter. Getting full_url returns the original request URL with the fragment, if it was present.

    Request.type

    The URI scheme.

    Request.host

    The URI authority, typically a host, but may also contain a port separated by a colon.

    Request.origin_req_host

    The original host for the request, without port.

    Request.selector

    The URI path. If the Request uses a proxy, then selector will be the full url that is passed to the proxy.

    Request.data

    The entity body for the request, or None if not specified.

    Changed in version 3.4: Changing value of Request.data now deletes “Content-Length” header if it was previously set or calculated.

    Request.unverifiable

    boolean, indicates whether the request is unverifiable as defined by RFC 2965.

    Request.method

    The HTTP request method to use. By default its value is None, which means that get_method() will do its normal computation of the method to be used. Its value can be set (thus overriding the default computation in get_method()) either by providing a default value by setting it at the class level in a Request subclass, or by passing a value in to the Request constructor via the method argument.

    New in version 3.3.

    Changed in version 3.4: A default value can now be set in subclasses; previously it could only be set via the constructor argument.

    Request.get_method()

    Return a string indicating the HTTP request method. If Request.method is not None, return its value, otherwise return 'GET' if Request.data is None, or 'POST' if it’s not. This is only meaningful for HTTP requests.

    Changed in version 3.3: get_method now looks at the value of Request.method.

    Request.add_header(key, val)

    Add another header to the request. Headers are currently ignored by all handlers except HTTP handlers, where they are added to the list of headers sent to the server. Note that there cannot be more than one header with the same name, and later calls will overwrite previous calls in case the key collides. Currently, this is no loss of HTTP functionality, since all headers which have meaning when used more than once have a (header-specific) way of gaining the same functionality using only one header.

    Request.add_unredirected_header(key, header)

    Add a header that will not be added to a redirected request.

    Request.has_header(header)

    Return whether the instance has the named header (checks both regular and unredirected).

    Request.remove_header(header)

    Remove named header from the request instance (both from regular and unredirected headers).

    New in version 3.4.

    Request.get_full_url()

    Return the URL given in the constructor.

    Changed in version 3.4.

    Returns Request.full_url

    Request.set_proxy(host, type)

    Prepare the request by connecting to a proxy server. The host and type will replace those of the instance, and the instance’s selector will be the original URL given in the constructor.

    Request.get_header(header_name, default=None)

    Return the value of the given header. If the header is not present, return the default value.

    Request.header_items()

    Return a list of tuples (header_name, header_value) of the Request headers.
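
    A small sketch tying these header methods together (example.com and the Referer value are placeholders):

    import urllib.request

    req = urllib.request.Request('http://www.example.com/')
    req.add_header('Referer', 'http://www.python.org/')
    print(req.has_header('Referer'))    # True
    print(req.get_header('Referer'))    # 'http://www.python.org/'
    print(req.header_items())           # [('Referer', 'http://www.python.org/')]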

    Changed in version 3.4: The request methods add_data, has_data, get_data, get_type, get_host, get_selector, get_origin_req_host and is_unverifiable that were deprecated since 3.3 have been removed.

    21.6.2. OpenerDirector Objects

    OpenerDirector instances have the following methods:

    OpenerDirector.add_handler(handler)

    handler should be an instance of BaseHandler. The following methods are searched, and added to the possible chains (note that HTTP errors are a special case).

    • protocol_open() — signal that the handler knows how to open protocol URLs.
    • http_error_type() — signal that the handler knows how to handle HTTP errors with HTTP error code type.
    • protocol_error() — signal that the handler knows how to handle errors from (non-http) protocol.
    • protocol_request() — signal that the handler knows how to pre-process protocol requests.
    • protocol_response() — signal that the handler knows how to post-process protocol responses.
    OpenerDirector.open(url, data=None[, timeout])

    Open the given url (which can be a request object or a string), optionally passing the given data. Arguments, return values and exceptions raised are the same as those of urlopen() (which simply calls the open() method on the currently installed global OpenerDirector). The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if not specified, the global default timeout setting will be used). The timeout feature actually works only for HTTP, HTTPS and FTP connections.

    OpenerDirector.error(proto, *args)

    Handle an error of the given protocol. This will call the registered error handlers for the given protocol with the given arguments (which are protocol specific). The HTTP protocol is a special case which uses the HTTP response code to determine the specific error handler; refer to the http_error_*() methods of the handler classes.

    Return values and exceptions raised are the same as those of urlopen().

    OpenerDirector objects open URLs in three stages:

    The order in which these methods are called within each stage is determined by sorting the handler instances.

    1. Every handler with a method named like protocol_request() has that method called to pre-process the request.

    2. Handlers with a method named like protocol_open() are called to handle the request. This stage ends when a handler either returns a non-None value (ie. a response), or raises an exception (usually URLError). Exceptions are allowed to propagate.

      In fact, the above algorithm is first tried for methods named default_open(). If all such methods return None, the algorithm is repeated for methods named like protocol_open(). If all such methods return None, the algorithm is repeated for methods named unknown_open().

      Note that the implementation of these methods may involve calls of the parent OpenerDirector instance’s open() and error() methods.

    3. Every handler with a method named like protocol_response() has that method called to post-process the response.

    21.6.3. BaseHandler Objects

    BaseHandler objects provide a couple of methods that are directly useful, and others that are meant to be used by derived classes. These are intended for direct use:

    BaseHandler.add_parent(director)

    Add a director as parent.

    BaseHandler.close()

    Remove any parents.

    The following attribute and methods should only be used by classes derived from BaseHandler.

    Note

    The convention has been adopted that subclasses defining protocol_request() or protocol_response() methods are named *Processor; all others are named *Handler.

    BaseHandler.parent

    A valid OpenerDirector, which can be used to open using a different protocol, or handle errors.

    BaseHandler.default_open(req)

    This method is not defined in BaseHandler, but subclasses should define it if they want to catch all URLs.

    This method, if implemented, will be called by the parent OpenerDirector. It should return a file-like object as described in the return value of the open() of OpenerDirector, or None. It should raise URLError, unless a truly exceptional thing happens (for example, MemoryError should not be mapped to URLError).

    This method will be called before any protocol-specific open method.

    BaseHandler.protocol_open(req)

    This method is not defined in BaseHandler, but subclasses should define it if they want to handle URLs with the given protocol.

    This method, if defined, will be called by the parent OpenerDirector. Return values should be the same as for default_open().
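
    As an illustrative sketch (the "echo" scheme and EchoHandler are made up for this example), a handler whose echo_open() serves the URL text back as the response body:

    import io
    import urllib.request
    import urllib.response

    class EchoHandler(urllib.request.BaseHandler):
        def echo_open(self, req):
            # Build a minimal file-like response from the requested URL itself.
            body = io.BytesIO(req.full_url.encode('ascii'))
            return urllib.response.addinfourl(body, headers={}, url=req.full_url, code=200)

    opener = urllib.request.build_opener(EchoHandler)
    print(opener.open('echo://hello').read())   # b'echo://hello'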

    BaseHandler.unknown_open(req)

    This method is not defined in BaseHandler, but subclasses should define it if they want to catch all URLs with no specific registered handler to open it.

    This method, if implemented, will be called by the parent OpenerDirector. Return values should be the same as for default_open().

    BaseHandler.http_error_default(req, fp, code, msg, hdrs)

    This method is not defined in BaseHandler, but subclasses should override it if they intend to provide a catch-all for otherwise unhandled HTTP errors. It will be called automatically by the OpenerDirector getting the error, and should not normally be called in other circumstances.

    req will be a Request object, fp will be a file-like object with the HTTP error body, code will be the three-digit code of the error, msg will be the user-visible explanation of the code and hdrs will be a mapping object with the headers of the error.

    Return values and exceptions raised should be the same as those of urlopen().

    BaseHandler.http_error_nnn(req, fp, code, msg, hdrs)

    nnn should be a three-digit HTTP error code. This method is also not defined in BaseHandler, but will be called, if it exists, on an instance of a subclass, when an HTTP error with code nnn occurs.

    Subclasses should override this method to handle specific HTTP errors.

    Arguments, return values and exceptions raised should be the same as for http_error_default().

    BaseHandler.protocol_request(req)

    This method is not defined in BaseHandler, but subclasses should define it if they want to pre-process requests of the given protocol.

    This method, if defined, will be called by the parent OpenerDirector. req will be a Request object. The return value should be a Request object.

    BaseHandler.protocol_response(req, response)

    This method is not defined in BaseHandler, but subclasses should define it if they want to post-process responses of the given protocol.

    This method, if defined, will be called by the parent OpenerDirector. req will be a Request object. response will be an object implementing the same interface as the return value of urlopen(). The return value should implement the same interface as the return value of urlopen().

    21.6.4. HTTPRedirectHandler Objects

    Note

    Some HTTP redirections require action from this module’s client code. If this is the case, HTTPError is raised. See RFC 2616 for details of the precise meanings of the various redirection codes.

    An HTTPError exception is raised as a security consideration if the HTTPRedirectHandler is presented with a redirected url which is not an HTTP, HTTPS or FTP url.

    HTTPRedirectHandler.redirect_request(req, fp, code, msg, hdrs, newurl)

    Return a Request or None in response to a redirect. This is called by the default implementations of the http_error_30*() methods when a redirection is received from the server. If a redirection should take place, return a new Request to allow http_error_30*() to perform the redirect to newurl. Otherwise, raise HTTPError if no other handler should try to handle this URL, or return None if you can’t but another handler might.

    Note

    The default implementation of this method does not strictly follow RFC 2616, which says that 301 and 302 responses to POST requests must not be automatically redirected without confirmation by the user. In reality, browsers do allow automatic redirection of these responses, changing the POST to a GET, and the default implementation reproduces this behavior.
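
    A minimal sketch of overriding this behaviour so that redirects are never followed (the subclass name is arbitrary):

    import urllib.error
    import urllib.request

    class NoRedirectHandler(urllib.request.HTTPRedirectHandler):
        def redirect_request(self, req, fp, code, msg, hdrs, newurl):
            # Surface every redirect to the caller instead of following it.
            raise urllib.error.HTTPError(req.full_url, code, msg, hdrs, fp)

    opener = urllib.request.build_opener(NoRedirectHandler)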

    HTTPRedirectHandler.http_error_301(req, fp, code, msg, hdrs)

    Redirect to the Location: or URI: URL. This method is called by the parent OpenerDirector when getting an HTTP ‘moved permanently’ response.

    HTTPRedirectHandler.http_error_302(req, fp, code, msg, hdrs)

    The same as http_error_301(), but called for the ‘found’ response.

    HTTPRedirectHandler.http_error_303(req, fp, code, msg, hdrs)

    The same as http_error_301(), but called for the ‘see other’ response.

    HTTPRedirectHandler.http_error_307(req, fp, code, msg, hdrs)

    The same as http_error_301(), but called for the ‘temporary redirect’ response.

    21.6.5. HTTPCookieProcessor Objects

    HTTPCookieProcessor instances have one attribute:

    HTTPCookieProcessor.cookiejar

    The http.cookiejar.CookieJar in which cookies are stored.
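
    One common pattern (example.com is a placeholder) is to share a CookieJar across requests:

    import http.cookiejar
    import urllib.request

    jar = http.cookiejar.CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
    # opener.open('http://www.example.com/')   # any Set-Cookie headers land in jar
    # for cookie in jar:
    #     print(cookie.name, cookie.value)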

    21.6.6. ProxyHandler Objects

    ProxyHandler.protocol_open(request)

    The ProxyHandler will have a method protocol_open() for every protocol which has a proxy in the proxies dictionary given in the constructor. The method will modify requests to go through the proxy, by calling request.set_proxy(), and call the next handler in the chain to actually execute the protocol.

    21.6.7. HTTPPasswordMgr Objects

    These methods are available on HTTPPasswordMgr and HTTPPasswordMgrWithDefaultRealm objects.

    HTTPPasswordMgr.add_password(realm, uri, user, passwd)

    uri can be either a single URI, or a sequence of URIs. realm, user and passwd must be strings. This causes (user, passwd) to be used as authentication tokens when authentication for realm and a super-URI of any of the given URIs is given.

    HTTPPasswordMgr.find_user_password(realm, authuri)

    Get user/password for given realm and URI, if any. This method will return (None, None) if there is no matching user/password.

    For HTTPPasswordMgrWithDefaultRealm objects, the realm None will be searched if the given realm has no matching user/password.
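
    A short sketch of the fallback behaviour (URL, realm and credentials are placeholders):

    import urllib.request

    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    # Registering under realm None makes these credentials the catch-all for the host.
    mgr.add_password(None, 'http://www.example.com/', 'klem', 'kadidd!ehopper')
    print(mgr.find_user_password('Some Realm', 'http://www.example.com/protected/'))
    # ('klem', 'kadidd!ehopper')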

    21.6.8. AbstractBasicAuthHandler Objects

    AbstractBasicAuthHandler.http_error_auth_reqed(authreq, host, req, headers)

    Handle an authentication request by getting a user/password pair, and re-trying the request. authreq should be the name of the header where the information about the realm is included in the request, host specifies the URL and path to authenticate for, req should be the (failed) Request object, and headers should be the error headers.

    host is either an authority (e.g. "python.org") or a URL containing an authority component (e.g. "http://python.org/"). In either case, the authority must not contain a userinfo component (so, "python.org" and "python.org:80" are fine, "joe:password@python.org" is not).

    21.6.9. HTTPBasicAuthHandler Objects

    HTTPBasicAuthHandler.http_error_401(req, fp, code, msg, hdrs)

    Retry the request with authentication information, if available.

    21.6.10. ProxyBasicAuthHandler Objects

    ProxyBasicAuthHandler.http_error_407(req, fp, code, msg, hdrs)

    Retry the request with authentication information, if available.

    21.6.11. AbstractDigestAuthHandler Objects

    AbstractDigestAuthHandler.http_error_auth_reqed(authreq, host, req, headers)

    authreq should be the name of the header where the information about the realm is included in the request, host should be the host to authenticate to, req should be the (failed) Request object, and headers should be the error headers.

    21.6.12. HTTPDigestAuthHandler Objects

    HTTPDigestAuthHandler.http_error_401(req, fp, code, msg, hdrs)

    Retry the request with authentication information, if available.

    21.6.13. ProxyDigestAuthHandler Objects

    ProxyDigestAuthHandler.http_error_407(req, fp, code, msg, hdrs)

    Retry the request with authentication information, if available.

    21.6.14. HTTPHandler Objects

    HTTPHandler.http_open(req)

    Send an HTTP request, which can be either GET or POST, depending on req.has_data().

    21.6.15. HTTPSHandler Objects

    HTTPSHandler.https_open(req)

    Send an HTTPS request, which can be either GET or POST, depending on req.has_data().

    21.6.16. FileHandler Objects

    FileHandler.file_open(req)

    Open the file locally, if there is no host name, or the host name is 'localhost'.

    Changed in version 3.2: This method is applicable only for local hostnames. When a remote hostname is given, an URLError is raised.

    21.6.17. DataHandler Objects

    DataHandler.data_open(req)

    Read a data URL. This kind of URL contains the content encoded in the URL itself. The data URL syntax is specified in RFC 2397. This implementation ignores white spaces in base64 encoded data URLs so the URL may be wrapped in whatever source file it comes from. But even though some browsers don’t mind about a missing padding at the end of a base64 encoded data URL, this implementation will raise a ValueError in that case.

    21.6.18. FTPHandler Objects

    FTPHandler.ftp_open(req)

    Open the FTP file indicated by req. The login is always done with empty username and password.

    21.6.19. CacheFTPHandler Objects

    CacheFTPHandler objects are FTPHandler objects with the following additional methods:

    CacheFTPHandler.setTimeout(t)

    Set timeout of connections to t seconds.

    CacheFTPHandler.setMaxConns(m)

    Set maximum number of cached connections to m.
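
    For example (the limits shown are arbitrary):

    import urllib.request

    ftp_handler = urllib.request.CacheFTPHandler()
    ftp_handler.setTimeout(30)     # drop cached connections after 30 seconds
    ftp_handler.setMaxConns(5)     # keep at most five FTP connections cached
    opener = urllib.request.build_opener(ftp_handler)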

    21.6.20. UnknownHandler Objects

    UnknownHandler.unknown_open()

    Raise a URLError exception.

    21.6.21. HTTPErrorProcessor Objects

    HTTPErrorProcessor.http_response()

    Process HTTP error responses.

    For 200 error codes, the response object is returned immediately.

    For non-200 error codes, this simply passes the job on to the protocol_error_code() handler methods, via OpenerDirector.error(). Eventually, HTTPDefaultErrorHandler will raise an HTTPError if no other handler handles the error.

    HTTPErrorProcessor.https_response()

    Process HTTPS error responses.

    The behavior is same as http_response().

    21.6.22. Examples

    This example gets the python.org main page and displays the first 300 bytes of it.

    >>> import urllib.request
    >>> f = urllib.request.urlopen('http://www.python.org/')
    >>> print(f.read(300))
    b'<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">\n\n\n<html xmlns="http://www.w3.org/1999/xhtml"
    xml:lang="en" lang="en">\n\n<head>\n <meta http-equiv="content-type" content="text/html; charset=utf-8" />\n
    <title>Python Programming '

    Note that urlopen returns a bytes object. This is because there is no way for urlopen to automatically determine the encoding of the byte stream it receives from the http server. In general, a program will decode the returned bytes object to string once it determines or guesses the appropriate encoding.

    The following W3C document, http://www.w3.org/International/O-charset, lists the various ways in which a (X)HTML or a XML document could have specified its encoding information.

    As the python.org website uses utf-8 encoding as specified in its meta tag, we will use the same for decoding the bytes object.

    >>> with urllib.request.urlopen('http://www.python.org/') as f:
    ...     print(f.read(100).decode('utf-8'))
    ...
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtm

    It is also possible to achieve the same result without using the context manager approach.

    >>> import urllib.request
    >>> f = urllib.request.urlopen('http://www.python.org/')
    >>> print(f.read(100).decode('utf-8'))
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtm

    In the following example, we are sending a data-stream to the stdin of a CGI and reading the data it returns to us. Note that this example will only work when the Python installation supports SSL.

    >>> import urllib.request
    >>> req = urllib.request.Request(url='https://localhost/cgi-bin/test.cgi',
    ...                       data=b'This data is passed to stdin of the CGI')
    >>> f = urllib.request.urlopen(req)
    >>> print(f.read().decode('utf-8'))
    Got Data: "This data is passed to stdin of the CGI"

    The code for the sample CGI used in the above example is:

    #!/usr/bin/env python
    import sys
    data = sys.stdin.read()
    print('Content-type: text-plain\n\nGot Data: "%s"' % data)

    Here is an example of doing a PUT request using Request:

    import urllib.request
    DATA = b'some data'
    req = urllib.request.Request(url='http://localhost:8080', data=DATA, method='PUT')
    f = urllib.request.urlopen(req)
    print(f.status)
    print(f.reason)

    Use of Basic HTTP Authentication:

    import urllib.request
    # Create an OpenerDirector with support for Basic HTTP Authentication...
    auth_handler = urllib.request.HTTPBasicAuthHandler()
    auth_handler.add_password(realm='PDQ Application',
                              uri='https://mahler:8092/site-updates.py',
                              user='klem',
                              passwd='kadidd!ehopper')
    opener = urllib.request.build_opener(auth_handler)
    # ...and install it globally so it can be used with urlopen.
    urllib.request.install_opener(opener)
    urllib.request.urlopen('http://www.example.com/login.html')

    build_opener() provides many handlers by default, including a ProxyHandler. By default, ProxyHandler uses the environment variables named <scheme>_proxy, where <scheme> is the URL scheme involved. For example, the http_proxy environment variable is read to obtain the HTTP proxy’s URL.

    This example replaces the default ProxyHandler with one that uses programmatically-supplied proxy URLs, and adds proxy authorization support with ProxyBasicAuthHandler.

    proxy_handler = urllib.request.ProxyHandler({'http': 'http://www.example.com:3128/'})
    proxy_auth_handler = urllib.request.ProxyBasicAuthHandler()
    proxy_auth_handler.add_password('realm', 'host', 'username', 'password')

    opener = urllib.request.build_opener(proxy_handler, proxy_auth_handler)
    # This time, rather than install the OpenerDirector, we use it directly:
    opener.open('http://www.example.com/login.html')

    Adding HTTP headers:

    Use the headers argument to the Request constructor, or:

    import urllib.request
    req = urllib.request.Request('http://www.example.com/')
    req.add_header('Referer', 'http://www.python.org/')
    r = urllib.request.urlopen(req)

    OpenerDirector automatically adds a User-Agent header to every Request. To change this:

    import urllib.request
    opener = urllib.request.build_opener()
    opener.addheaders = [('User-agent', 'Mozilla/5.0')]
    opener.open('http://www.example.com/')

    Also, remember that a few standard headers (Content-Length, Content-Type without charset parameter and Host) are added when the Request is passed to urlopen() (or OpenerDirector.open()).

    Here is an example session that uses the GET method to retrieve a URL containing parameters:

    >>> import urllib.request
    >>> import urllib.parse
    >>> params = urllib.parse.urlencode({'spam': 1, 'eggs': 2, 'bacon': 0})
    >>> f = urllib.request.urlopen("http://www.musi-cal.com/cgi-bin/query?%s" % params)
    >>> print(f.read().decode('utf-8'))

    The following example uses the POST method instead. Note that params output from urlencode is encoded to bytes before it is sent to urlopen as data:

    >>> import urllib.request
    >>> import urllib.parse
    >>> data = urllib.parse.urlencode({'spam': 1, 'eggs': 2, 'bacon': 0})
    >>> data = data.encode('utf-8')
    >>> request = urllib.request.Request("http://requestb.in/xrbl82xr")
    >>> # adding charset parameter to the Content-Type header.
    >>> request.add_header("Content-Type", "application/x-www-form-urlencoded;charset=utf-8")
    >>> f = urllib.request.urlopen(request, data)
    >>> print(f.read().decode('utf-8'))

    The following example uses an explicitly specified HTTP proxy, overriding environment settings:

    >>> import urllib.request
    >>> proxies = {'http': 'http://proxy.example.com:8080/'}
    >>> opener = urllib.request.FancyURLopener(proxies)
    >>> f = opener.open("http://www.python.org")
    >>> f.read().decode('utf-8')

    The following example uses no proxies at all, overriding environment settings:

    >>> import urllib.request
    >>> opener = urllib.request.FancyURLopener({})
    >>> f = opener.open("http://www.python.org/")
    >>> f.read().decode('utf-8')

    21.6.23. Legacy interface

    The following functions and classes are ported from the Python 2 module urllib (as opposed to urllib2). They might become deprecated at some point in the future.

    urllib.request.urlretrieve(url, filename=None, reporthook=None, data=None)

    Copy a network object denoted by a URL to a local file. If the URL points to a local file, the object will not be copied unless filename is supplied. Return a tuple (filename, headers) where filename is the local file name under which the object can be found, and headers is whatever the info() method of the object returned by urlopen() returned (for a remote object). Exceptions are the same as for urlopen().

    The second argument, if present, specifies the file location to copy to (if absent, the location will be a tempfile with a generated name). The third argument, if present, is a hook function that will be called once on establishment of the network connection and once after each block read thereafter. The hook will be passed three arguments; a count of blocks transferred so far, a block size in bytes, and the total size of the file. The third argument may be -1 on older FTP servers which do not return a file size in response to a retrieval request.
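
    A sketch of such a reporthook (the download URL and filename are placeholders):

    import urllib.request

    def report(blocks_transferred, block_size, total_size):
        # total_size is -1 when the server does not report a file size.
        print(blocks_transferred * block_size, 'of', total_size, 'bytes')

    # filename, headers = urllib.request.urlretrieve(
    #     'http://www.example.com/file.zip', 'file.zip', reporthook=report)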

    The following example illustrates the most common usage scenario:

    >>> import urllib.request
    >>> local_filename, headers = urllib.request.urlretrieve('http://python.org/')
    >>> html = open(local_filename)
    >>> html.close()

    If the url uses the http: scheme identifier, the optional data argument may be given to specify a POST request (normally the request type is GET). The data argument must be a bytes object in standard application/x-www-form-urlencoded format; see the urllib.parse.urlencode() function.

    urlretrieve() will raise ContentTooShortError when it detects that the amount of data available was less than the expected amount (which is the size reported by a Content-Length header). This can occur, for example, when the download is interrupted.

    The Content-Length is treated as a lower bound: if there’s more data to read, urlretrieve reads more data, but if less data is available, it raises the exception.

    You can still retrieve the downloaded data in this case, it is stored in the content attribute of the exception instance.
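
    For instance (the URL is a placeholder):

    import urllib.request
    from urllib.error import ContentTooShortError

    try:
        filename, headers = urllib.request.urlretrieve('http://www.example.com/big.bin')
    except ContentTooShortError as exc:
        partial = exc.content    # whatever was received before the download broke off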

    If no Content-Length header was supplied, urlretrieve can not check the size of the data it has downloaded, and just returns it. In this case you just have to assume that the download was successful.

    urllib.request.urlcleanup()

    Cleans up temporary files that may have been left behind by previous calls to urlretrieve().

    class urllib.request.URLopener(proxies=None, **x509)

    Deprecated since version 3.3.

    Base class for opening and reading URLs. Unless you need to support opening objects using schemes other than http:, ftp:, or file:, you probably want to use FancyURLopener.

    By default, the URLopener class sends a User-Agent header of urllib/VVV, where VVV is the urllib version number. Applications can define their own User-Agent header by subclassing URLopener or FancyURLopener and setting the class attribute version to an appropriate string value in the subclass definition.

    The optional proxies parameter should be a dictionary mapping scheme names to proxy URLs, where an empty dictionary turns proxies off completely. Its default value is None, in which case environmental proxy settings will be used if present, as discussed in the definition of urlopen(), above.

    Additional keyword parameters, collected in x509, may be used for authentication of the client when using the https: scheme. The keywords key_file and cert_file are supported to provide an SSL key and certificate; both are needed to support client authentication.

    URLopener objects will raise an OSError exception if the server returns an error code.

    open(fullurl, data=None)

    Open fullurl using the appropriate protocol. This method sets up cache and proxy information, then calls the appropriate open method with its input arguments. If the scheme is not recognized, open_unknown() is called. The data argument has the same meaning as the data argument of urlopen().

    open_unknown(fullurl, data=None)

    Overridable interface to open unknown URL types.

    retrieve(url, filename=None, reporthook=None, data=None)

    Retrieves the contents of url and places it in filename. The return value is a tuple consisting of a local filename and either a email.message.Message object containing the response headers (for remote URLs) or None (for local URLs). The caller must then open and read the contents of filename. If filename is not given and the URL refers to a local file, the input filename is returned. If the URL is non-local and filename is not given, the filename is the output of tempfile.mktemp() with a suffix that matches the suffix of the last path component of the input URL. If reporthook is given, it must be a function accepting three numeric parameters: A chunk number, the maximum size chunks are read in and the total size of the download (-1 if unknown). It will be called once at the start and after each chunk of data is read from the network. reporthook is ignored for local URLs.

    If the url uses the http: scheme identifier, the optional data argument may be given to specify a POST request (normally the request type is GET). The data argument must be in standard application/x-www-form-urlencoded format; see the urllib.parse.urlencode() function.

    version

    Variable that specifies the user agent of the opener object. To get urllib to tell servers that it is a particular user agent, set this in a subclass as a class variable or in the constructor before calling the base constructor.

    class urllib.request.FancyURLopener(...)

    Deprecated since version 3.3.

    FancyURLopener subclasses URLopener providing default handling for the following HTTP response codes: 301, 302, 303, 307 and 401. For the 30x response codes listed above, the Location header is used to fetch the actual URL. For 401 response codes (authentication required), basic HTTP authentication is performed. For the 30x response codes, recursion is bounded by the value of the maxtries attribute, which defaults to 10.

    For all other response codes, the method http_error_default() is called which you can override in subclasses to handle the error appropriately.

    Note

    According to the letter of RFC 2616, 301 and 302 responses to POST requests must not be automatically redirected without confirmation by the user. In reality, browsers do allow automatic redirection of these responses, changing the POST to a GET, and urllib reproduces this behaviour.

    The parameters to the constructor are the same as those for URLopener.

    Note

    When performing basic authentication, a FancyURLopener instance calls its prompt_user_passwd() method. The default implementation asks the users for the required information on the controlling terminal. A subclass may override this method to support more appropriate behavior if needed.

    The FancyURLopener class offers one additional method that should be overloaded to provide the appropriate behavior:

    prompt_user_passwd(host, realm)

    Return information needed to authenticate the user at the given host in the specified security realm. The return value should be a tuple, (user, password), which can be used for basic authentication.

    The implementation prompts for this information on the terminal; an application should override this method to use an appropriate interaction model in the local environment.
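
    A minimal sketch of such an override (the credentials are placeholders):

    import urllib.request

    class MyOpener(urllib.request.FancyURLopener):
        def prompt_user_passwd(self, host, realm):
            # Supply credentials programmatically instead of asking on the terminal.
            return ('klem', 'kadidd!ehopper')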

    21.6.24. urllib.request Restrictions

    • Currently, only the following protocols are supported: HTTP (versions 0.9 and 1.0), FTP, local files, and data URLs.

      Changed in version 3.4: Added support for data URLs.

    • The caching feature of urlretrieve() has been disabled until someone finds the time to hack proper processing of Expiration time headers.

    • There should be a function to query whether a particular URL is in the cache.

    • For backward compatibility, if a URL appears to point to a local file but the file can’t be opened, the URL is re-interpreted using the FTP protocol. This can sometimes cause confusing error messages.

    • The urlopen() and urlretrieve() functions can cause arbitrarily long delays while waiting for a network connection to be set up. This means that it is difficult to build an interactive Web client using these functions without using threads.

    • The data returned by urlopen() or urlretrieve() is the raw data returned by the server. This may be binary data (such as an image), plain text or (for example) HTML. The HTTP protocol provides type information in the reply header, which can be inspected by looking at the Content-Type header. If the returned data is HTML, you can use the module html.parser to parse it.

    • The code handling the FTP protocol cannot differentiate between a file and a directory. This can lead to unexpected behavior when attempting to read a URL that points to a file that is not accessible. If the URL ends in a /, it is assumed to refer to a directory and will be handled accordingly. But if an attempt to read a file leads to a 550 error (meaning the URL cannot be found or is not accessible, often for permission reasons), then the path is treated as a directory in order to handle the case when a directory is specified by a URL but the trailing / has been left off. This can cause misleading results when you try to fetch a file whose read permissions make it inaccessible; the FTP code will try to read it, fail with a 550 error, and then perform a directory listing for the unreadable file. If fine-grained control is needed, consider using the ftplib module, subclassing FancyURLopener, or changing _urlopener to meet your needs.

    21.7. urllib.response — Response classes used by urllib

    The urllib.response module defines functions and classes which define a minimal file-like interface, including read() and readline(). The typical response object is an addinfourl instance, which defines an info() method that returns headers and a geturl() method that returns the url. Functions defined by this module are used internally by the urllib.request module.

     

     
