  • Fixing crawler access denied errors (HTTP 403)

    For complete beginners

    Connect to a mobile hotspot, or switch to a different Wi-Fi network; this gives you a new outbound IP address.

    Adding headers

    Add a User-Agent header so the request looks like it comes from a real browser. Below is a pool of commonly seen User-Agent strings to rotate through (a usage sketch follows the list).

    # Commonly used browser User-Agent strings, collected from real browsers
    my_headers = [
        "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36",
        "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:30.0) Gecko/20100101 Firefox/30.0",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.75.14 (KHTML, like Gecko) Version/7.0.3 Safari/537.75.14",
        "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Win64; x64; Trident/6.0)",
        'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11',
        'Opera/9.25 (Windows NT 5.1; U; en)',
        'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)',
        'Mozilla/5.0 (compatible; Konqueror/3.5; Linux) KHTML/3.5.5 (like Gecko) (Kubuntu)',
        'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.12) Gecko/20070731 Ubuntu/dapper-security Firefox/1.5.0.12',
        'Lynx/2.8.5rel.1 libwww-FM/2.14 SSL-MM/1.4.1 GNUTLS/1.2.9',
        "Mozilla/5.0 (X11; Linux i686) AppleWebKit/535.7 (KHTML, like Gecko) Ubuntu/11.04 Chromium/16.0.912.77 Chrome/16.0.912.77 Safari/535.7",
        "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:10.0) Gecko/20100101 Firefox/10.0 "
    ]
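
    For example, a minimal usage sketch (assuming Python 3 and the my_headers list above; the URL is a hypothetical placeholder): pick a random User-Agent and attach it to each request.

        import random
        import urllib.request

        url = 'http://example.com'  # hypothetical target URL

        # Rotate the User-Agent so consecutive requests look like different browsers
        req = urllib.request.Request(url)
        req.add_header('User-Agent', random.choice(my_headers))
        with urllib.request.urlopen(req) as resp:
            html = resp.read()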
    
    

    Proxy IPs

    Access the server through different IP addresses. Free proxy IPs go stale very quickly, so replace them often and look up fresh ones whenever you need them. A urllib snippet follows; an alternative using the requests library appears after it.

    
    proxy_list = [
        '183.95.80.102:8080',
        '123.160.31.71:8080',
        '115.231.128.79:8080',
        '166.111.77.32:80',
        '43.240.138.31:8080',
        '218.201.98.196:3128'
    ]
    
    
        import random
        import urllib.request as ulb

        # Randomly pick one IP from the proxy list
        proxy = random.choice(proxy_list)
        # Build and install an opener that routes HTTP traffic through it
        urlhandle = ulb.ProxyHandler({'http': proxy})
        opener = ulb.build_opener(urlhandle)
        ulb.install_opener(opener)

        # Request the remote image (url is the target address defined earlier)
        request = ulb.Request(url)

        # Open a file-like handle on the image data
        fp = ulb.urlopen(request)
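
    The same rotation with the third-party requests library (an alternative sketch, not part of the original post; assumes pip install requests and the proxy_list above) takes one call per request:

        import random
        import requests

        url = 'http://example.com'  # hypothetical target URL

        proxy = random.choice(proxy_list)
        # Route only this request through the chosen proxy; no global opener state
        resp = requests.get(url, proxies={'http': 'http://' + proxy}, timeout=10)

    Passing proxies per call avoids install_opener's process-wide side effect, which matters once several crawls share one interpreter.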
    
    

    Domestic (mainland China) IP proxy sources

    Combining proxies and headers

    Combine a random proxy with a random header, and add a delay between requests:

        import random
        import time
        import urllib.request

        # Pause 0.1 seconds so requests are not fired back to back
        t = 0.1
        time.sleep(t)

        # Randomly pick a proxy IP and a User-Agent from the lists above
        proxy = random.choice(proxy_list)
        head = random.choice(my_headers)

        print(proxy, head)

        # Build and install an opener that routes traffic through the proxy
        urlhandle = urllib.request.ProxyHandler({'http': proxy})
        opener = urllib.request.build_opener(urlhandle)
        urllib.request.install_opener(opener)

        # Add the header so the request looks like it comes from a browser
        request = urllib.request.Request(url)
        request.add_header('User-Agent', head)

        response = urllib.request.urlopen(request)
    
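    To make the combination robust, wrap it in a retry loop that rotates to a fresh proxy/header pair whenever the server answers 403. This is a sketch under the same assumptions (proxy_list and my_headers defined above; the retry count and timeout are arbitrary choices):

        import random
        import time
        import urllib.error
        import urllib.request

        def fetch(url, retries=5):
            # Try up to `retries` random proxy/header combinations
            for _ in range(retries):
                proxy = random.choice(proxy_list)
                head = random.choice(my_headers)
                opener = urllib.request.build_opener(
                    urllib.request.ProxyHandler({'http': proxy}))
                request = urllib.request.Request(url)
                request.add_header('User-Agent', head)
                try:
                    return opener.open(request, timeout=10).read()
                except urllib.error.HTTPError as e:
                    if e.code != 403:
                        raise  # rotate only on 403; anything else is a real error
                except urllib.error.URLError:
                    pass  # dead proxy: fall through and try the next one
                time.sleep(0.1)  # pause between attempts, as above
            raise RuntimeError('all attempts failed or returned 403')

        data = fetch('http://example.com')  # hypothetical target URL

    Calling opener.open directly (instead of install_opener) keeps the proxy choice local to each attempt rather than mutating global state.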
    

    For more details, see:
    Solutions when a Python crawler is blocked with 403
    Disguising a Douban crawler: how to work around Douban blocking crawlers
    Using an IP pool

  • Original post: https://www.cnblogs.com/thrseven/p/15169814.html