  • A simple crawler example: scrape the rank, title, rating, and review count of Douban's Top 250 movies

    Crawling approach:

    Fetch the page source from the url.
    bytes decode ---> utf-8: the decoded page content is the string to match against.
    ret = re.findall(pattern, string_to_match); ret is the list of everything the pattern matched.
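
    A minimal sketch of this flow (the Request with a User-Agent header is my own assumption, not part of the original code, since Douban may reject bare requests):

    import re
    from urllib.request import Request, urlopen

    # fetch one page and decode the bytes into a str
    req = Request('https://movie.douban.com/top250?start=0&filter=',
                  headers={'User-Agent': 'Mozilla/5.0'})  # assumed header, not in the original
    html = urlopen(req).read().decode('utf-8')

    # each match of the pattern becomes one element of the returned list
    titles = re.findall('<span class="title">(.*?)</span>', html, re.S)
    print(titles[:5])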

    import re
    import json
    from urllib.request import urlopen
    
    # (1) re.compile: scrape the results into a file
    
    def getPage(url):
        response = urlopen(url)
        return response.read().decode('utf-8')
    
    
    def parsePage(s):
        com = re.compile(
            r'<div class="item">.*?<div class="pic">.*?<em .*?>(?P<id>\d+).*?<span class="title">(?P<title>.*?)</span>'
            r'.*?<span class="rating_num" .*?>(?P<rating_num>.*?)</span>.*?<span>(?P<comment_num>.*?)评价</span>', re.S
        )
        ret = com.finditer(s)
        for i in ret:
            yield {
                "id":i.group("id"),
                "title":i.group("title"),
                "rating_num":i.group("rating_num"),
                "comment_num":i.group("comment_num"),
            }
    
    def main(num):
        url = 'https://movie.douban.com/top250?start=%s&filter=' % num
        response_html = getPage(url)
        ret = parsePage(response_html)
        print(ret)
        f = open("movie_info","a",encoding="utf-8")
    
        for obj in ret:
            print(obj)
            data = str(obj)
            f.write(data + "\n")
        f.close()
    
    count = 0
    for i in range(10):  # 10 pages
        main(count)
        count += 25
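
    Since json is imported here but never used, a small alternative (my own suggestion, not from the original post) is to write each record as one JSON line, which is easier to parse back later than str(obj):

    import json

    def save_records(records, path="movie_info.jsonl"):  # hypothetical helper, name is mine
        with open(path, "a", encoding="utf-8") as f:
            for obj in records:
                # ensure_ascii=False keeps the Chinese titles readable in the file
                f.write(json.dumps(obj, ensure_ascii=False) + "\n")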
    
    
    # (2) re.findall: print the results

    import re
    import json
    from urllib.request import urlopen


    def getPage(url):
        response = urlopen(url)
        return response.read().decode('utf-8')


    def parsePage(s):
        ret = re.findall(
            r'<div class="item">.*?<div class="pic">.*?<em .*?>(?P<id>\d+).*?<span class="title">(?P<title>.*?)</span>'
            r'.*?<span class="rating_num" .*?>(?P<rating_num>.*?)</span>.*?<span>(?P<comment_num>.*?)评价</span>', s, re.S)
        return ret


    def main(num):
        url = 'https://movie.douban.com/top250?start=%s&filter=' % num
        response_html = getPage(url)
        ret = parsePage(response_html)
        print(ret)


    count = 0
    for i in range(10):  # 10 pages
        main(count)
        count += 25
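
    One difference worth noting (my own observation): with more than one capture group, re.findall returns plain tuples, so the group names in the pattern are only documentation here; finditer in version (1) is what actually exposes i.group("title") and friends. A quick way to unpack the tuples:

    for rank, title, rating, comments in parsePage(getPage('https://movie.douban.com/top250?start=0&filter=')):
        print(rank, title, rating, comments)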

    Regex pattern explained:
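
    Piece by piece, the same pattern used above reads as follows (annotated here for readability):

    pattern = (
        r'<div class="item">'                                            # start of one movie entry
        r'.*?<div class="pic">'                                          # skip ahead to the poster block
        r'.*?<em .*?>(?P<id>\d+)'                                        # the rank number inside <em>
        r'.*?<span class="title">(?P<title>.*?)</span>'                  # the movie title
        r'.*?<span class="rating_num" .*?>(?P<rating_num>.*?)</span>'    # the score
        r'.*?<span>(?P<comment_num>.*?)评价</span>'                      # the review count, just before the literal "评价"
    )
    # re.S lets "." also match newlines, so the lazy ".*?" pieces can span multiple lines of HTML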

    
    
  • Original post: https://www.cnblogs.com/xc-718/p/9751113.html