  • UA Pool and Proxy Pool

    The purpose of a UA pool and a proxy pool is to get around the anti-crawler strategies of the sites being scraped. So how are they applied in the Scrapy framework?

    Let's first look at Scrapy's downloader middleware.

    Scrapy architecture diagram: (image omitted in this copy)

    The downloader middleware (Downloader Middlewares) is a layer of components that sits between the Scrapy engine and the downloader.

    - Role:

    (1) As the engine passes a request to the downloader, the downloader middleware can process the request in various ways, such as setting the request's User-Agent or attaching a proxy.

    (2) As the downloader passes the response back to the engine, the downloader middleware can process the response, for example decompressing gzip content.

    We mainly use downloader middleware to process requests, typically assigning each request a random User-Agent and a random proxy; the middleware itself is switched on in settings.py, as sketched below.
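
    A downloader middleware only takes effect after it is registered in settings.py. Here is a minimal sketch of that switch; the module path middlePro.middlewares is an assumption based on the class name used in this post, so adjust it to your own project layout:

    # settings.py -- enable the custom downloader middleware (path is assumed)
    DOWNLOADER_MIDDLEWARES = {
        # 543 is the template's default priority; lower values run closer to the engine
        'middlePro.middlewares.MiddleproDownloaderMiddleware': 543,
    }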

    UA pool: a pool of User-Agent strings.

    - Role: disguise the requests of the Scrapy project as coming from as many different browser identities as possible.

    Proxy pool

    - Role: route the requests of the Scrapy project through as many different IP addresses as possible.

    Code example:

    from scrapy import signals
    import random
    class MiddleproDownloaderMiddleware(object):
        # Not all methods need to be defined. If a method is not defined,
        # scrapy acts as if the downloader middleware does not modify the
        # passed objects.
        user_agent_list = [
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 "
            "(KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
            "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 "
            "(KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 "
            "(KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 "
            "(KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
            "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 "
            "(KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 "
            "(KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
            "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 "
            "(KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
            "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 "
            "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 "
            "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
            "(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
            "(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
            "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
            "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 "
            "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
            "(KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 "
            "(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
            "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 "
            "(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
        ]
        PROXY_http = [
            '153.180.102.104:80',
            '195.208.131.189:56055',
        ]
        PROXY_https = [
            '120.83.49.90:9000',
            '95.189.112.214:35508',
        ]
        @classmethod
        def from_crawler(cls, crawler):
            # This method is used by Scrapy to create the middleware instance.
            s = cls()
            crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
            return s

        def spider_opened(self, spider):
            # Signal handler referenced above; without it from_crawler would fail.
            spider.logger.info('Spider opened: %s' % spider.name)
    
        # Intercepts every non-exception request
        # The spider argument is the spider instance that issued the request
        def process_request(self, request, spider):
            print('this is process_request()')
            # UA spoofing: pick a random User-Agent from the pool
            request.headers['User-Agent'] = random.choice(self.user_agent_list)

            # Proxy setup: choose a proxy that matches the request's scheme
            if request.url.split(':')[0] == 'https':
                request.meta['proxy'] = 'https://' + random.choice(self.PROXY_https)
            else:
                request.meta['proxy'] = 'http://' + random.choice(self.PROXY_http)
            return None
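
    Free proxies go stale quickly. Downloader middleware also provides a process_exception hook that fires when a download raises an error; the sketch below (an addition, not from the original post) could be added to the same class to swap in a fresh proxy and reschedule the failed request:

        # Called when the downloader or process_request() raises an exception.
        # Returning a Request hands it back to the scheduler for a retry.
        def process_exception(self, request, exception, spider):
            if request.url.split(':')[0] == 'https':
                request.meta['proxy'] = 'https://' + random.choice(self.PROXY_https)
            else:
                request.meta['proxy'] = 'http://' + random.choice(self.PROXY_http)
            return request  # retry the request through the new proxy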
  • Original post: https://www.cnblogs.com/yaraning/p/10834648.html