  • Your IP keeps getting banned for scraping too much data? Here's my approach

    Continuing with the usual routine: over the past two days I scraped some data from Zhubajie at http://task.zbj.com/t-ppsj/p1s5.html. Probably because I pulled a bit too much data, my IP got banned and I had to complete a manual verification to get it unbanned, which obviously stopped me from scraping anything more.


     

    Below is the scraping code I wrote for Zhubajie that got my IP banned:

    # coding=utf-8
    import requests
    from lxml import etree


    def getUrl():
        for i in range(33):
            url = 'http://task.zbj.com/t-ppsj/p{}s5.html'.format(i + 1)
            spiderPage(url)


    def spiderPage(url):
        if url is None:
            return None
        htmlText = requests.get(url).text
        selector = etree.HTML(htmlText)
        tds = selector.xpath('//*[@class="tab-switch tab-progress"]/table/tr')
        try:
            for td in tds:
                price = td.xpath('./td/p/em/text()')
                href = td.xpath('./td/p/a/@href')
                title = td.xpath('./td/p/a/text()')
                subTitle = td.xpath('./td/p/text()')
                deadline = td.xpath('./td/span/text()')
                price = price[0] if len(price) > 0 else ''  # Python conditional expression: value-if-true if condition else value-if-false
                title = title[0] if len(title) > 0 else ''
                href = href[0] if len(href) > 0 else ''
                subTitle = subTitle[0] if len(subTitle) > 0 else ''
                deadline = deadline[0] if len(deadline) > 0 else ''
                print price, title, href, subTitle, deadline
                print '---------------------------------------------------------------------------------------'
                spiderDetail(href)
        except:
            print 'Error'


    def spiderDetail(url):
        if url is None:
            return None
        try:
            htmlText = requests.get(url).text
            selector = etree.HTML(htmlText)
            aboutHref = selector.xpath('//*[@id="utopia_widget_10"]/div[1]/div/div/div/p[1]/a/@href')
            price = selector.xpath('//*[@id="utopia_widget_10"]/div[1]/div/div/div/p[1]/text()')
            title = selector.xpath('//*[@id="utopia_widget_10"]/div[1]/div/div/h2/text()')
            contentDetail = selector.xpath('//*[@id="utopia_widget_10"]/div[2]/div/div[1]/div[1]/text()')
            publishDate = selector.xpath('//*[@id="utopia_widget_10"]/div[2]/div/div[1]/p/text()')
            aboutHref = aboutHref[0] if len(aboutHref) > 0 else ''
            price = price[0] if len(price) > 0 else ''
            title = title[0] if len(title) > 0 else ''
            contentDetail = contentDetail[0] if len(contentDetail) > 0 else ''
            publishDate = publishDate[0] if len(publishDate) > 0 else ''
            print aboutHref, price, title, contentDetail, publishDate
        except:
            print 'Error'


    if __name__ == '__main__':
        getUrl()

    After the run finished I found the last few pages had not been scraped, and I could no longer access the Zhubajie site at all; only after some time passed could I visit it again. Pretty awkward. I needed a way to keep my IP from getting banned.

    So how do you keep a site from banning your IP while you scrape it? I looked up a few common tricks:

    1. Modify the request headers

    The earlier crawler sent no headers at all. Here I add a User-Agent header so the requests look like they come from a browser:

    user_agent = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.104 Safari/537.36 Core/1.53.4295.400'
    headers = {'User-Agent': user_agent}
    htmlText = requests.get(url, headers=headers).text
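    Going a small step further, you can rotate through several User-Agent strings instead of always sending the same one, so the requests look like they come from different browsers. A minimal sketch of that idea; the particular User-Agent strings in the list are just examples I picked, not anything from the original code:

    import random
    import requests

    # A small pool of User-Agent strings (example values; swap in any browsers you like)
    USER_AGENTS = [
        'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.104 Safari/537.36',
        'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.116 Safari/537.36',
    ]

    def get_headers():
        # Pick a different browser identity for each request
        return {'User-Agent': random.choice(USER_AGENTS)}

    url = 'http://task.zbj.com/t-ppsj/p1s5.html'
    htmlText = requests.get(url, headers=get_headers()).text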

    2. Use proxy IPs

    Once your own IP has been banned by a site, the only way to keep scraping is through proxy IPs, so try to go through a proxy whenever you can; if a proxy gets banned, there are always more proxies.

    Here I borrowed a piece of code from this blog post to generate proxy IP addresses: http://blog.csdn.net/lammonpeter/article/details/52917264

    It builds a list of proxy IPs; feel free to take this code and use it as-is:

    # coding=utf-8
    # Proxy IPs are taken from the domestic high-anonymity proxy site: http://www.xicidaili.com/nn/
    # Scraping just the first page of IPs is enough for ordinary use
    from bs4 import BeautifulSoup
    import requests
    import random


    def get_ip_list(url, headers):
        web_data = requests.get(url, headers=headers)
        soup = BeautifulSoup(web_data.text, 'lxml')
        ips = soup.find_all('tr')
        ip_list = []
        for i in range(1, len(ips)):
            ip_info = ips[i]
            tds = ip_info.find_all('td')
            ip_list.append(tds[1].text + ':' + tds[2].text)
        return ip_list


    def get_random_ip(ip_list):
        proxy_list = []
        for ip in ip_list:
            proxy_list.append('http://' + ip)
        proxy_ip = random.choice(proxy_list)
        proxies = {'http': proxy_ip}
        return proxies


    if __name__ == '__main__':
        url = 'http://www.xicidaili.com/nn/'
        headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36'
        }
        ip_list = get_ip_list(url, headers=headers)
        proxies = get_random_ip(ip_list)
        print(proxies)

    With the code above I generated a batch of proxy IPs (some of them may well be dead, but as long as my own IP doesn't get banned, I'm happy). Then I can attach a proxy to my requests.
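    Since some of those free proxies will be dead, it can save a lot of failed requests to check each one before using it. Here is a rough sketch of how such a filter might look; the test URL (httpbin.org/ip) and the 3-second timeout are my own choices, not something from the original post:

    import requests

    def filter_alive_proxies(ip_list, test_url='http://httpbin.org/ip', timeout=3):
        # Keep only the proxies that can actually fetch a page within the timeout
        alive = []
        for ip in ip_list:
            proxies = {'http': 'http://' + ip}
            try:
                requests.get(test_url, proxies=proxies, timeout=timeout)
                alive.append(ip)
            except requests.exceptions.RequestException:
                pass  # dead or too slow, skip it
        return alive

    ip_list here would be the list returned by get_ip_list() above.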

    Add a proxy IP to our requests

    # requests picks the proxy by URL scheme, so the dict should have one entry per
    # scheme ('http' / 'https'); two 'http' keys would just overwrite each other
    proxies = {
        'http': 'http://124.72.109.183:8118',
        'https': 'http://49.85.1.79:31666',
    }
    user_agent = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.104 Safari/537.36 Core/1.53.4295.400'
    headers = {'User-Agent': user_agent}
    htmlText = requests.get(url, headers=headers, timeout=3, proxies=proxies).text
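    One thing to keep in mind: a single free proxy can die at any moment, so in practice I would wrap the request in a small retry loop that switches to another proxy when one fails. A rough sketch, reusing get_random_ip() from the proxy script above; the retry count and timeout are my own choices, not part of the original code:

    import requests

    def get_with_proxy(url, headers, ip_list, retries=3):
        # Try the request through up to `retries` different random proxies
        for _ in range(retries):
            proxies = get_random_ip(ip_list)  # pick a fresh random proxy each attempt
            try:
                return requests.get(url, headers=headers, proxies=proxies, timeout=3).text
            except requests.exceptions.RequestException:
                continue  # this proxy failed, try another one
        return None  # every attempt failed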

    That's about all I know so far.

    The final, complete code is below:

    # coding=utf-8
    import requests
    import time
    from lxml import etree


    def getUrl():
        for i in range(33):
            url = 'http://task.zbj.com/t-ppsj/p{}s5.html'.format(i + 1)
            spiderPage(url)


    def spiderPage(url):
        if url is None:
            return None
        try:
            proxies = {
                'http': 'http://221.202.248.52:80',
            }
            user_agent = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.104 Safari/537.36 Core/1.53.4295.400'
            headers = {'User-Agent': user_agent}
            htmlText = requests.get(url, headers=headers, proxies=proxies).text
            selector = etree.HTML(htmlText)
            tds = selector.xpath('//*[@class="tab-switch tab-progress"]/table/tr')
            for td in tds:
                price = td.xpath('./td/p/em/text()')
                href = td.xpath('./td/p/a/@href')
                title = td.xpath('./td/p/a/text()')
                subTitle = td.xpath('./td/p/text()')
                deadline = td.xpath('./td/span/text()')
                price = price[0] if len(price) > 0 else ''  # Python conditional expression: value-if-true if condition else value-if-false
                title = title[0] if len(title) > 0 else ''
                href = href[0] if len(href) > 0 else ''
                subTitle = subTitle[0] if len(subTitle) > 0 else ''
                deadline = deadline[0] if len(deadline) > 0 else ''
                print price, title, href, subTitle, deadline
                print '---------------------------------------------------------------------------------------'
                spiderDetail(href)
        except Exception, e:
            print 'Error', e.message


    def spiderDetail(url):
        if url is None:
            return None
        try:
            htmlText = requests.get(url).text
            selector = etree.HTML(htmlText)
            aboutHref = selector.xpath('//*[@id="utopia_widget_10"]/div[1]/div/div/div/p[1]/a/@href')
            price = selector.xpath('//*[@id="utopia_widget_10"]/div[1]/div/div/div/p[1]/text()')
            title = selector.xpath('//*[@id="utopia_widget_10"]/div[1]/div/div/h2/text()')
            contentDetail = selector.xpath('//*[@id="utopia_widget_10"]/div[2]/div/div[1]/div[1]/text()')
            publishDate = selector.xpath('//*[@id="utopia_widget_10"]/div[2]/div/div[1]/p/text()')
            aboutHref = aboutHref[0] if len(aboutHref) > 0 else ''
            price = price[0] if len(price) > 0 else ''
            title = title[0] if len(title) > 0 else ''
            contentDetail = contentDetail[0] if len(contentDetail) > 0 else ''
            publishDate = publishDate[0] if len(publishDate) > 0 else ''
            print aboutHref, price, title, contentDetail, publishDate
        except:
            print 'Error'


    if __name__ == '__main__':
        getUrl()


     

    This time all of the data was scraped, and my IP was not banned. Of course these are not the only ways to avoid an IP ban; there is more to explore!
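    One more trick worth mentioning, which the final code above doesn't actually use even though it imports time: simply slowing down. Sleeping a random second or two between page requests makes the traffic look much less bot-like. A minimal sketch of that idea (the delay range is just my own choice):

    import random
    import time

    def polite_sleep(min_seconds=1, max_seconds=3):
        # Sleep a random amount of time so requests don't arrive at a
        # perfectly regular, bot-like rate
        time.sleep(random.uniform(min_seconds, max_seconds))

    Calling polite_sleep() at the end of each loop iteration in getUrl() would space the page requests out.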

     