  • Python module: urllib

    1. Web page operations

    urllib.urlopen(url[,data[,proxies]])

    Opens a url and returns a file-like object that can then be used much like an ordinary file object.

    url: the address of the remote data, i.e. the web address

    data: the data sent to the url with a GET or POST request
    proxies: proxy settings (a short sketch follows below)
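    A minimal sketch of passing the proxies argument to urlopen (Python 2; the proxy address 127.0.0.1:8087 and the example.com host are placeholders, not taken from the original article):

    >>> import urllib
    >>> proxies = {'http': 'http://127.0.0.1:8087'}  # hypothetical local HTTP proxy
    >>> resp = urllib.urlopen('http://www.example.com/', proxies=proxies)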

    Methods provided by the object returned by urlopen:

    read(), readline(), readlines(), fileno(), close(): these are used in exactly the same way as on file objects

    info(): returns an httplib.HTTPMessage object representing the headers returned by the remote server

    getcode(): returns the HTTP status code. For an HTTP request, 200 means the request completed successfully; 404 means the URL was not found

    geturl(): returns the URL that was actually requested

    >>> import urllib
    >>> resp = urllib.urlopen("http://www.google.com")
    >>> read = resp.read()   ###### read the entire document
    >>> print read
    ............ the full page source is printed here; omitted ............
    >>> readline = resp.readline() ###### read a single line
    >>> print readline
    <!doctype html><html itemscope="" itemtype="http://schema.org/WebPage" lang="zh-HK"><head><meta content="/images/branding/googleg/1x/googleg_standard_color_128dp.png" itemprop="image"><title>Google</title><script>(function(){window.google={kEI:'KhMBVtO1KuGQmgWe2ILYDQ',kEXPI:'3700263,4014829,4024207,4027916,4029815,4031109,4032235,4032500,4032678,4033307,4033344,4034882,4036527,4037333,4037569,4037934,4038012,4041302,4041323,4041440,4041507,4041837,4042160,4042180,4043255,4043411,4043457,4043459,4043491,4043564,4044246,4044336,4044339,4044343,4044606,4044852,4044864,4045003,4045414,4045711,4045717,4045764,4045841,4045871,4046059,4046121,4046304,4046340,4046606,4046717,4047133,4047530,4047599,4047668,4047751,4048048,4048125,8300200,8300203,8501987,8501992,8502156,10200083',authuser:0,kscs:'c9c918f0_10'};google.kHL='zh-HK';})();(function(){google.lc=[];google.li=0;google.getEI=function(a){for(var b;a&&(!a.getAttribute||!(b=a.getAttribute("eid")));)a=a.parentNode;return b||google.kEI};google.getLEI=function(a){for(var b=null;a&&(!a.getAttribute||!(b=a.getAttribute("leid")));)a=a.parentNode;return b};google.https=function(){return"https:"==window.location.protocol};google.ml=function(){return null};google.time=function(){return(new Date).getTime()};google.log=function(a,b,d,e,g){a=google.logUrl(a,b,d,e,g);if(""!=a){b=new Image;var c=google.lc,f=google.li;c[f]=b;b.onerror=b.onload=b.onabort=function(){delete c[f]};window.google&&window.google.vel&&window.google.vel.lu&&window.google.vel.lu(a);b.src=a;google.li=f+1}};google.logUrl=function(a,b,d,e,g){var c="",f=google.ls||"";if(!d&&-1==b.search("&ei=")){var h=google.getEI(e),c="&ei="+h;-1==b.search("&lei=")&&((e=google.getLEI(e))?c+="&lei="+e:h!=google.kEI&&(c+="&lei="+google.kEI))}a=d||"/"+(g||"gen_204")+"?atyp=i&ct="+a+"&cad="+b+c+f+"&zx="+google.time();/^http:/i.test(a)&&google.https()&&(google.ml(Error("a"),!1,{src:a,glmm:1}),a="");return a};google.y={};google.x=function(a,b){google.y[a.id]=[a,b];return!1};google.load=function(a,b,d){google.x({id:a+k++},function(){google.load(a,b,d)})};var k=0;})();var _gjwl=location;function _gjuc(){var a=_gjwl.href.indexOf("#");if(0<=a&&(a=_gjwl.href.substring(a),0<a.indexOf("&q=")||0<=a.indexOf("#q="))&&(a=a.substring(1),-1==a.indexOf("#"))){for(var d=0;d<a.length;){var b=d;"&"==a.charAt(b)&&++b;var c=a.indexOf("&",b);-1==c&&(c=a.length);b=a.substring(b,c);if(0==b.indexOf("fp="))a=a.substring(0,d)+a.substring(c,a.length),c=d;else if("cad=h"==b)return 0;d=c}_gjwl.href="/search?"+a+"&cad=h";return 1}return 0}

    >>> readlines = resp.readlines() ###### read the remaining lines into a list
    >>> print readlines
    ............................. omitted .................................

    >>> fileno = resp.fileno()  ###### returns the integer file descriptor used by the underlying implementation for OS-level I/O
    >>> print fileno
    3
    >>> info = resp.info()
    >>> print info
    Date: Tue, 22 Sep 2015 08:36:58 GMT
    Expires: -1
    Cache-Control: private, max-age=0
    Content-Type: text/html; charset=Big5
    P3P: CP="This is not a P3P policy! See http://www.google.com/support/accounts/bin/answer.py?hl=en&answer=151657 for more info."
    Server: gws
    X-XSS-Protection: 1; mode=block
    X-Frame-Options: SAMEORIGIN
    Set-Cookie: PREF=ID=1111111111111111:FF=0:NW=1:TM=1442911018:LM=1442911018:V=1:S=mEKtFlZwAwUmso5J; expires=Thu, 31-Dec-2015 16:02:17 GMT; path=/; domain=.google.com.hk
    Set-Cookie: NID=71=CqIYFwoA4vkAAu_Zu8X_IEElnFUbjPTO-BHG1zq9MjE6GQZbQX7ZArRNmDPL0p0cmSBu3GEX_H4S_DqGfFSYUzSWzlQhKGp16nNK2kP25iJwWrCxmxdJ_2xbFvwhhkSxQrdC5g; expires=Wed, 23-Mar-2016 08:36:58 GMT; path=/; domain=.google.com.hk; HttpOnly
    Accept-Ranges: none
    Vary: Accept-Encoding

    >>> statuscode = resp.getcode()
    >>> print statuscode
    200
    >>> url = resp.geturl()
    >>> print url
    http://www.google.com.hk/?gws_rd=cr

    2. Downloading

    urllib.urlretrieve(url[,filename[,reporthook[,data]]])

    The urlretrieve method downloads the HTML file that url points to onto your local disk. If filename is not specified, the data is saved to a temporary file.

    urlretrieve() returns a 2-tuple (filename, mime_hdrs): filename is the local path the data was saved to, and mime_hdrs is an httplib.HTTPMessage instance representing the server's response headers.

    filename: the local path to save to (if this argument is omitted, urllib generates a temporary file to hold the data)
    reporthook: a callback function, invoked when the connection to the server is established and again after each data block has been transferred; it can be used to display the current download progress (see the example further below)
    data: the data to POST (or GET) to the server

    >>> filename = urllib.urlretrieve('http://www.google.com.hk/')
    >>> type(filename)
    <type 'tuple'>
    >>> filename[0]
    '/tmp/tmp8eVLjq'
    >>> filename[1]
    <httplib.HTTPMessage instance at 0xb6a363ec>
    Saved to a temporary file
    >>> filename = urllib.urlretrieve('http://www.google.com.hk/',filename='/home/dzhwen/python文件/Homework/urllib/google.html')
    >>> type(filename)
    <type 'tuple'>
    >>> filename[0]
    '/home/dzhwen/python\xe6\x96\x87\xe4\xbb\xb6/Homework/urllib/google.html'
    >>> filename[1]
    <httplib.HTTPMessage instance at 0xb6e2c38c>
    Saved to a specified local file
    import urllib

    def cbk(a, b, c):
        '''reporthook callback
        @a: number of data blocks downloaded so far
        @b: size of each data block
        @c: total size of the remote file
        '''
        per = 100.0 * a * b / c
        if per > 100:
            per = 100
        print '%.2f%%' % per

    url = 'http://www.sina.com.cn'
    local = '/py/sina.html'
    urllib.urlretrieve(url, local, cbk)
    Download progress example
    urllib.urlcleanup()

    Clears the cache produced by urllib.urlretrieve().
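    A minimal sketch of where urlcleanup() fits in (Python 2; example.com is a placeholder URL, not taken from the original article):

    import urllib

    # No filename is given, so urlretrieve stores the data in a temporary file
    urllib.urlretrieve('http://www.example.com/')

    # Remove the temporary files and cached data left behind by urlretrieve
    urllib.urlcleanup()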

    3. Encoding and decoding

    urllib.quote(url) and urllib.quote_plus(url)

    Encode the given data so that it is usable inside a URL string, making it printable and acceptable to web servers.

    >>> urllib.quote('http://www.baidu.com')
    'http%3A//www.baidu.com'
    >>> urllib.quote_plus('http://www.baidu.com')
    'http%3A%2F%2Fwww.baidu.com'
    urllib.unquote(url) and urllib.unquote_plus(url)

    The inverse of urllib.quote(url) and urllib.quote_plus(url).
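    For example, the strings quoted above can be decoded back (Python 2 interactive session):

    >>> urllib.unquote('http%3A//www.baidu.com')
    'http://www.baidu.com'
    >>> urllib.unquote_plus('http%3A%2F%2Fwww.baidu.com')
    'http://www.baidu.com'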

    urllib.urlencode(query)

    Converts key/value pairs into a URL query string whose pairs are joined with &.

    This can be combined with urlopen to issue GET and POST requests:

    >>> import urllib
    >>> params = urllib.urlencode({'spam': 1, 'eggs': 2, 'bacon': 0})
    >>> params
    'eggs=2&bacon=0&spam=1'
    >>> f = urllib.urlopen("http://python.org/query?%s" % params)
    >>> print f.read()
    GET method
    >>> import urllib
    >>> params = urllib.urlencode({'spam': 1, 'eggs': 2, 'bacon': 0})
    >>> f = urllib.urlopen("http://python.org/query", params)
    >>> f.read()
    POST method

    Additional notes:

    quote(s, safe='/')

    Encodes the string; the safe parameter specifies characters that should not be encoded, and defaults to leaving '/' unencoded.
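    A brief illustration of the safe parameter (Python 2; the path string is just an invented example):

    >>> urllib.quote('/data/file name.txt')           # '/' is left alone by default
    '/data/file%20name.txt'
    >>> urllib.quote('/data/file name.txt', safe='')  # encode '/' as well
    '%2Fdata%2Ffile%20name.txt'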

    pathname2url(pathname)

    Converts a local path into a URL path.

    url2pathname(pathname)

    Converts a URL path into a local path.
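    A small sketch of the round trip between these two helpers (Python 2 on a POSIX system; the conversion rules differ on Windows, and the path is an invented example):

    >>> urllib.pathname2url('/home/user/my file.txt')
    '/home/user/my%20file.txt'
    >>> urllib.url2pathname('/home/user/my%20file.txt')
    '/home/user/my file.txt'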

  • Original article: https://www.cnblogs.com/andr01la/p/5146310.html