  • Scraping 4755 rental listings from a rental apartment site (Shenzhen) with requests + pyquery + BeautifulSoup, and a summary

    To analyze all long-term and short-term rental apartments in Shenzhen, I scraped every in-rent listing in the Shenzhen area of a rental apartment website. The site had 258 pages of listings with 20 listings per page (page 258 had 13). Below is a record of the scraping process and the problems I ran into along the way:

    Scraping workflow:

    (Workflow diagram omitted: fetch each listing page → extract the room URLs → fetch each room's detail page → parse the fields → save to MongoDB.)

    Scraping code:

     1 import requests
     2 from requests.exceptions import RequestException
     3 from pyquery import PyQuery as pq
     4 from bs4 import BeautifulSoup
     5 import pymongo
     6 from config import *
     7 from multiprocessing import Pool
     8 
     9 client = pymongo.MongoClient(MONGO_URL)    # create the MongoDB client
    10 db = client[MONGO_DB]    # select the database
    11 
    12 def get_one_page_html(url):    # fetch the HTML of one page of the site
    13     headers = {
    14         "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) "
    15                       "Chrome/85.0.4183.121 Safari/537.36"
    16     }
    17     try:
    18         response = requests.get(url, headers=headers)
    19         if response.status_code == 200:
    20             return response.text
    21         else:
    22             return None
    23     except RequestException:
    24         return None
    25 
    26 
    27 def get_room_url(html):    # collect the room_info URLs on the current page
    28     doc = pq(html)
    29     room_urls = doc('.r_lbx .r_lbx_cen .r_lbx_cena a').items()
    30     return room_urls
    31 
    32 
    33 def parser_room_page(room_html):
    34     soup = BeautifulSoup(room_html, 'lxml')
    35     title = soup.h1.text
    36     price = soup.find('div', {'class': 'room-price-sale'}).text[:-3]
    37     x = soup.find_all('div', {'class': 'room-list'})
    38     area = x[0].text[7:-11]    # floor area
    39     bianhao = x[1].text[4:]    # listing number
    40     house_type = x[2].text.strip()[3:7]    # layout
    41     floor = x[5].text[4:-2]    # floor
    42     location1 = x[6].find_all('a')[0].text    # district
    43     location2 = x[6].find_all('a')[1].text
    44     location3 = x[6].find_all('a')[2].text
    45     subway = x[7].text[4:]    # nearby subway info
    46     addition = soup.find_all('div', {'class': 'room-title'})[0].text
    47     yield {
    48         'title': title,
    49         'price': price,
    50         'area': area,
    51         'bianhao': bianhao,
    52         'house_type': house_type,
    53         'floor': floor,
    54         'location1': location1,
    55         'location2': location2,
    56         'location3': location3,
    57         'subway': subway,
    58         'addition': addition
    59     }
    60 
    61 
    62 def save_to_mongo(result):
    63     if db[MONGO_TABLE].insert_one(result):
    64         print('saved to MongoDB:', result)
    65         return True
    66     return False
    67 
    68 
    69 def main(page):
    70     url = 'http://www.xxxxx.com/room/sz?page=' + str(page)    # not pasting the real URL, hehe
    71     html = get_one_page_html(url)
    72     room_urls = get_room_url(html)
    73     for room_url in room_urls:
    74         room_url_href = room_url.attr('href')
    75         room_html = get_one_page_html(room_url_href)
    76         if room_html is None:    # very important: otherwise it errors when room_html is None
    77             pass
    78         else:
    79             results = parser_room_page(room_html)
    80             for result in results:
    81                 save_to_mongo(result)
    82 
    83 if __name__ == '__main__':
    84     pool = Pool()  # use a process pool to speed up scraping
    85     pool.map(main, [i for i in range(1, 259)])    # pages 1-258; range's end is exclusive
    86     pool.close()    # no more tasks to submit
    87     pool.join()     # wait for all workers to finish

    Two problems came up while writing the scraper:

    (1) In get_room_url(html) I originally wanted to return each listing's room_url directly, but return is not like print: the moment a function hits return it exits, so that approach could only ever return the first room_url on each page. The solution: return the generator containing all the room_urls on the page, iterate over it with a for loop in main(), take the href from each room_url, and pass it to get_one_page_html(room_url_href) for parsing.
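    A minimal sketch of the difference, using toy markup rather than the site's real HTML:

        from pyquery import PyQuery as pq

        html = '<div><a href="/room/1"></a><a href="/room/2"></a></div>'

        def first_only(html):
            for a in pq(html)('a').items():
                return a.attr('href')    # return exits on the very first <a>

        def all_urls(html):
            return pq(html)('a').items()    # hand the whole generator to the caller

        print(first_only(html))                          # /room/1
        print([a.attr('href') for a in all_urls(html)])  # ['/room/1', '/room/2']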

    (2) I originally did not have the if statement at line 76: I assumed the room_html returned by get_one_page_html(room_url_href) was never empty, which produced a multiprocessing.pool.RemoteTraceback error:

     The traceback (screenshot not reproduced here) showed the error was raised when markup was None; clicking the blue link F:\ProgramFiles\anaconda3\lib\site-packages\bs4\__init__.py revealed that markup was room_html, i.e. some of the room_html values were None. To fix this, the code has to skip the case where room_html is None, and adding the if statement solved the problem.
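    The failure is easy to reproduce in isolation; a small sketch of the guard, same logic as line 76:

        from bs4 import BeautifulSoup

        for room_html in ['<h1>some room</h1>', None]:    # one good response, one failed fetch
            if room_html is None:    # skip failed fetches, as at line 76
                continue
            print(BeautifulSoup(room_html, 'lxml').h1.text)

        # Without the guard, BeautifulSoup(None, 'lxml') raises a TypeError inside
        # bs4/__init__.py; under multiprocessing.Pool it surfaces as a
        # multiprocessing.pool.RemoteTraceback.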

    In the end the scraper successfully collected all 258 pages of the site's Shenzhen listings, 4755 records in total, ready for the next step: data analysis.
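    As a sketch of that next step (assuming the same MONGO_URL, MONGO_DB, and MONGO_TABLE names from config.py, and that pandas is installed), the records can be pulled back out of MongoDB into a DataFrame:

        import pandas as pd
        import pymongo
        from config import MONGO_URL, MONGO_DB, MONGO_TABLE

        client = pymongo.MongoClient(MONGO_URL)
        cursor = client[MONGO_DB][MONGO_TABLE].find({}, {'_id': 0})    # drop the ObjectId field
        df = pd.DataFrame(list(cursor))

        df['price'] = pd.to_numeric(df['price'], errors='coerce')    # price was stored as text
        print(df.shape)    # expect (4755, 11)
        print(df['price'].describe())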

     A single stored record (the original screenshot is not reproduced here):
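    Based on the dict yielded by parser_room_page, each stored document has this shape (the values here are placeholders, not scraped data):

        {
            'title': '...',        # listing headline
            'price': '...',        # monthly rent, scraped as text
            'area': '...',         # floor area
            'bianhao': '...',      # listing number
            'house_type': '...',   # layout
            'floor': '...',
            'location1': '...',    # district
            'location2': '...',
            'location3': '...',
            'subway': '...',       # nearby subway info
            'addition': '...',     # extra description
        }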

  • Original post: https://www.cnblogs.com/chang2021/p/14021768.html