  • BeautifulSoup / bs4 crawler example

    Requirement: use bs4 to crawl the content of every chapter of the novel Romance of the Three Kingdoms from the shicimingju site and save it to local disk

     http://www.shicimingju.com/book/sanguoyanyi.html
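The catalog page lists the chapters inside a `div.book-mulu` block. As a minimal offline sketch (the inline HTML below is a hand-made stand-in imitating that structure, not fetched from the site), the CSS selector used in the full script can be tried on its own:

```python
from bs4 import BeautifulSoup

# Hand-made snippet imitating the catalog structure of the real page
html = '''
<div class="book-mulu">
  <ul>
    <li><a href="/book/sanguoyanyi/1.html">Chapter 1</a></li>
    <li><a href="/book/sanguoyanyi/2.html">Chapter 2</a></li>
  </ul>
</div>
'''

soup = BeautifulSoup(html, 'html.parser')
for li in soup.select('.book-mulu > ul > li'):
    # li.a.string is the chapter title, li.a['href'] the relative link
    print(li.a.string, li.a['href'])
```

The same two accesses, `li.a.string` and `li.a['href']`, appear in the loop of the full script below.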

    from bs4 import BeautifulSoup
    import requests

    url = 'http://www.shicimingju.com/book/sanguoyanyi.html'
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36'
    }

    page_text = requests.get(url=url, headers=headers).text

    # Parse the catalog page: chapter titles and detail-page URLs
    soup = BeautifulSoup(page_text, 'lxml')
    li_list = soup.select('.book-mulu > ul > li')
    fp = open('./xiaoshuo.txt', 'w', encoding='utf-8')
    for li in li_list:
        title = li.a.string
        detail_url = 'http://www.shicimingju.com' + li.a['href']

        # Request the chapter detail page
        detail_page_text = requests.get(url=detail_url, headers=headers).text
        detail_soup = BeautifulSoup(detail_page_text, 'lxml')
        # .text returns the chapter body as a single string
        text = detail_soup.find('div', class_='chapter_content').text

        fp.write(title + '\n' + text)
    fp.close()
    print('over!!!')
    Crawler code
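    The script above builds each chapter URL by plain string concatenation. A slightly more robust alternative (a sketch, not part of the original code) is `urllib.parse.urljoin`, which handles both relative and already-absolute hrefs correctly:

```python
from urllib.parse import urljoin

base = 'http://www.shicimingju.com/book/sanguoyanyi.html'

# A relative href, as found in the catalog's <a> tags
print(urljoin(base, '/book/sanguoyanyi/1.html'))
# prints http://www.shicimingju.com/book/sanguoyanyi/1.html

# An already-absolute href passes through unchanged
print(urljoin(base, 'http://example.com/x.html'))
# prints http://example.com/x.html
```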
  • Original article: https://www.cnblogs.com/duanhaoxin/p/10110884.html