  • Python-crawler-citeulike

    I installed beautifulsoup earlier; this time lxml is also needed. Install it with easy_install: go to the python/scripts directory and run easy_install lxml, and it installs automatically.

    ----------- divider --------------

    At first I called urlopen(url) directly, but access was refused: 403 Forbidden.

    The fix is to mimic a real browser by adding a cookie and request headers (adapted from http://www.yihaomen.com/article/python/210.htm):

    import re
    import random
    import socket
    import urllib2
    import cookielib
    from bs4 import BeautifulSoup
    import lxml
    
    ERROR = {
            '0':'Can not open the url, check your net',
            '1':'Create download dir error',
            '2':'The image links are empty',
            '3':'Download failed',
            '4':'Build soup error, the html is empty',
            '5':'Can not save the image to your disk',
        }
    
    class BrowserBase(object): 
    
        def __init__(self):
            socket.setdefaulttimeout(20)
            
        def speak(self,name,content):
            print '[%s]%s' %(name,content)
    
        def openurl(self,url):
            """
            打开网页
            """
            cookie_support= urllib2.HTTPCookieProcessor(cookielib.CookieJar())
            self.opener = urllib2.build_opener(cookie_support,urllib2.HTTPHandler)
            urllib2.install_opener(self.opener)
            user_agents = [
                        'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11',
                        'Opera/9.25 (Windows NT 5.1; U; en)',
                        'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)',
                        'Mozilla/5.0 (compatible; Konqueror/3.5; Linux) KHTML/3.5.5 (like Gecko) (Kubuntu)',
                        'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.12) Gecko/20070731 Ubuntu/dapper-security Firefox/1.5.0.12',
                        'Lynx/2.8.5rel.1 libwww-FM/2.14 SSL-MM/1.4.1 GNUTLS/1.2.9',
                        "Mozilla/5.0 (X11; Linux i686) AppleWebKit/535.7 (KHTML, like Gecko) Ubuntu/11.04 Chromium/16.0.912.77 Chrome/16.0.912.77 Safari/535.7",
                        "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:10.0) Gecko/20100101 Firefox/10.0 ",
    
                        ] 
           
            agent = random.choice(user_agents)
            self.opener.addheaders = [("User-agent",agent),("Accept","*/*"),('Referer','http://www.google.com')]
            try:
                res = self.opener.open(url)
             #   print res.read()
            except Exception,e:
                self.speak('openurl', str(e)+url)
                raise
            else:
                return res
    
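    For reference, the same idea can be sketched on Python 3, where urllib2 and cookielib were merged into urllib.request and http.cookiejar; the User-Agent string is taken from the list above, and the commented-out URL is illustrative:

    ```python
    import urllib.request
    import http.cookiejar

    # Attach a CookieJar so the server's cookies are kept across
    # requests, like the urllib2/cookielib opener above.
    jar = http.cookiejar.CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

    # Pretend to be a real browser via headers.
    opener.addheaders = [
        ("User-agent", "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:10.0) Gecko/20100101 Firefox/10.0"),
        ("Accept", "*/*"),
        ("Referer", "http://www.google.com"),
    ]

    # res = opener.open(url)  # network call; uncomment with a real url
    ```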

    ---------------- divider -------------------

    Parse the HTML file with BeautifulSoup (tutorial: http://beautifulsoup.readthedocs.org/zh_CN/latest/#).

    soup = BeautifulSoup(res, "lxml") creates a BeautifulSoup object: a tree whose nodes are the tags in the HTML.

    soup = BeautifulSoup(res, "lxml")
    tag = soup.find(id="showtexform")  # the element with id="showtexform", e.g. body.form
    return tag.contents[1].contents[1]['value']
    

    BeautifulSoup's search methods, find() and find_all(), accept several kinds of filters:

       1. A string: matches tag names exactly; soup.find_all('b') finds the <b> tags.

       2. A regular expression: tag names are matched via the pattern's match(); soup.find_all(re.compile('^b')) finds tags whose names start with b.

       3. A list: matches any tag name in the list.

       ......

    A tag's attributes are accessed like a dictionary: tag['value']

    A tag's .contents attribute returns its child nodes as a list.
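    The three filter kinds and the attribute/.contents access can be demonstrated in one self-contained snippet (the HTML fragment here is made up for illustration):

    ```python
    import re
    from bs4 import BeautifulSoup

    # A small made-up HTML fragment.
    html = '<body><b class="x">bold</b><blockquote>quote</blockquote><a href="#">link</a></body>'
    soup = BeautifulSoup(html, "html.parser")

    # 1. String filter: exact tag-name match.
    print([t.name for t in soup.find_all('b')])               # ['b']
    # 2. Regex filter: tag names matched via match(); here, names starting with 'b'.
    print([t.name for t in soup.find_all(re.compile('^b'))])  # ['body', 'b', 'blockquote']
    # 3. List filter: any tag name in the list matches.
    print([t.name for t in soup.find_all(['b', 'a'])])        # ['b', 'a']

    # Attribute access works like a dict; .contents lists the child nodes.
    print(soup.a['href'])              # '#'
    print(soup.body.contents[0].name)  # 'b'
    ```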

  • Original post: https://www.cnblogs.com/yuchenkit/p/5369763.html