  • Implementing an IP proxy pool with Scrapy

    First, crawl usable IPs from a free-proxy site and save them to the database.

    import requests
    from scrapy.selector import Selector
    import pymysql
    
    conn = pymysql.connect(host='127.0.0.1', user='root', passwd='root', db='mysql18_text', charset='utf8')
    cursor = conn.cursor()
    
    def crawl_ips():
        # crawl the free IP proxies listed on xici
        agent = 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:62.0) Gecko/20100101 Firefox/62.0'
        headers = {
            'User-Agent': agent
        }
    
        for i in range(1, 3458):
            # each page of the high-anonymity list lives at /nn/<page number>
            res = requests.get('http://www.xicidaili.com/nn/{0}'.format(i), headers=headers)
            selector = Selector(text=res.text)
            all_trs = selector.xpath('//table[@id="ip_list"]/tr')
            ip_list = []
            for tr in all_trs[1:]:
                # extract the speed; the title attribute looks like '0.56秒'
                speed_str = tr.xpath('./td/div[@class="bar"]/@title').extract_first('')
                if speed_str:
                    speed = float(speed_str.split('秒')[0])
                    all_text = tr.xpath('./td/text()').extract()
                    ip = all_text[0]
                    port = all_text[1]
                    proxy_type = all_text[5]
                    ip_list.append((ip, port, speed, proxy_type))
            for ip_info in ip_list:
                cursor.execute(
                    """INSERT INTO project_ip(ip, port, speed, proxy_type) VALUES('{0}', '{1}', '{2}', '{3}')""".format(
                        ip_info[0], ip_info[1], ip_info[2], ip_info[3]
                    )
                )
                conn.commit()
            print(ip_list)
    
    
    crawl_ips()
    
    cursor.close()
    conn.close()
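    The speed cell on the xici list page is a tooltip string such as '0.56秒' ('秒' means seconds), so the parsing step boils down to splitting off that suffix. A minimal sketch (the sample values are made up):

```python
def parse_speed(title):
    # '0.56秒' -> 0.56: the title attribute carries the speed with a '秒' (seconds) suffix
    return float(title.split('秒')[0])

print(parse_speed('0.56秒'))
print(parse_speed('1.2秒'))
```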

    Code for fetching a random IP from the database

    class GetIP(object):
        
        def delete_ip(self, ip):
            # remove an invalid ip from the database
            delete_sql = """
                DELETE FROM project_ip WHERE ip='{0}'
            """.format(ip)
            cursor.execute(delete_sql)
            conn.commit()
            return True
        
        
        def judge_ip(self, ip, port):
            # check whether an ip is still usable
            http_url = 'http://www.baidu.com'
            proxy_url = 'http://{0}:{1}'.format(ip, port)
            
            try:
                proxy_dict = {
                    'http': proxy_url,
                }
                response = requests.get(http_url, proxies=proxy_dict)
            except Exception as e:
                print('ip raised an exception')
                # once an ip raises an exception, delete it
                self.delete_ip(ip)
                return False
            else:
                code = response.status_code
                if 200 <= code < 300:
                    print('effective ip')
                    return True
                else:
                    print('invalid ip')
                    self.delete_ip(ip)
                    return False
    
    
        def get_random_ip(self):
            # fetch one random usable ip from the database
            random_sql = """
                SELECT ip, port FROM project_ip
                ORDER BY RAND()
                LIMIT 1
            """
            cursor.execute(random_sql)
            
            for ip_info in cursor.fetchall():
                ip = ip_info[0]
                port = ip_info[1]
                judge_re = self.judge_ip(ip, port)
                if judge_re:  # the ip passed the check
                    return 'http://{0}:{1}'.format(ip, port)
                else:
                    return self.get_random_ip()
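    judge_ip keeps an IP only when the response status code falls in the 2xx range. That check can be isolated into a tiny helper (the function name here is my own, for illustration):

```python
def is_effective(status_code):
    # a 2xx response means the proxy relayed the request successfully
    return 200 <= status_code < 300

print(is_effective(200), is_effective(302), is_effective(503))
```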

    Setting the IP proxy dynamically in a downloader middleware

    class RandomProxyMiddleware(object):
        def process_request(self, request, spider):
            get_ip = GetIP()  # GetIP needs to be imported here
            request.meta['proxy'] = get_ip.get_random_ip()

    The above is taken from my class notes; if it resembles other material, please contact me.
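    For the middleware to take effect, Scrapy also needs it registered in settings.py. A sketch, assuming the project module is named ArticleSpider and the class lives in middlewares.py (adjust the dotted path and priority to your project):

```python
# settings.py -- enable the custom proxy middleware
DOWNLOADER_MIDDLEWARES = {
    'ArticleSpider.middlewares.RandomProxyMiddleware': 543,
}
```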
  • Original source: https://www.cnblogs.com/ArtisticMonk/p/9738921.html