  • A simple crawler example: scraping the rank, title, rating, and comment count of Douban's Top 250 movies

    Crawling approach:

    Fetch the page source from the URL.
    bytes decode ---> utf-8: the decoded page content is the string to be matched.
    ret = re.findall(pattern, string_to_match); ret is a list of all the matched content.
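
    Note that Douban may reject requests that arrive with urllib's default User-Agent, so the fetch step may need a browser-style header. A minimal sketch (the header value is an assumption, not part of the original example):

    from urllib.request import Request, urlopen

    def getPage(url):
        # Assumed User-Agent value; Douban may block urllib's default one.
        req = Request(url, headers={"User-Agent": "Mozilla/5.0"})
        return urlopen(req).read().decode("utf-8")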

    import re
    import json
    from urllib.request import urlopen
    
    # (1) re.compile version: write the crawled results to a file
    
    def getPage(url):
        response = urlopen(url)
        return response.read().decode('utf-8')
    
    
    def parsePage(s):
        com = re.compile(
            '<div class="item">.*?<div class="pic">.*?<em .*?>(?P<id>d+).*?<span class="title">(?P<title>.*?)</span>'
            '.*?<span class="rating_num" .*?>(?P<rating_num>.*?)</span>.*?<span>(?P<comment_num>.*?)评价</span>',re.S
        )
        ret = com.finditer(s)
        for i in ret:
            yield {
                "id":i.group("id"),
                "title":i.group("title"),
                "rating_num":i.group("rating_num"),
                "comment_num":i.group("comment_num"),
            }
    
    def main(num):
        url = 'https://movie.douban.com/top250?start=%s&filter=' % num
        response_html = getPage(url)
        ret = parsePage(response_html)
        print(ret)
        f = open("movie_info","a",encoding="utf-8")
    
        for obj in ret:
            print(obj)
            data = str(obj)
            f.write(data + "
    ")
        f.close()
    
    count = 0
    for i in range(10):  # 10 pages
        main(count)
        count += 25
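
    Since json is imported but unused, the file-writing part of main() could serialize each record as one JSON line instead of calling str(). A minimal sketch, assuming ret is the generator returned by parsePage:

    import json

    # Sketch: write each parsed dict as a JSON line instead of str(obj)
    with open("movie_info", "a", encoding="utf-8") as f:
        for obj in ret:
            # ensure_ascii=False keeps Chinese titles readable in the output file
            f.write(json.dumps(obj, ensure_ascii=False) + "\n")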
    
    
    # (2) re.findall version: print the results

    import re
    import json
    from urllib.request import urlopen


    def getPage(url):
        response = urlopen(url)
        return response.read().decode('utf-8')


    def parsePage(s):
        ret = re.findall(
            r'<div class="item">.*?<div class="pic">.*?<em .*?>(?P<id>\d+).*?<span class="title">(?P<title>.*?)</span>'
            r'.*?<span class="rating_num" .*?>(?P<rating_num>.*?)</span>.*?<span>(?P<comment_num>.*?)评价</span>', s, re.S)
        return ret


    def main(num):
        url = 'https://movie.douban.com/top250?start=%s&filter=' % num
        response_html = getPage(url)
        ret = parsePage(response_html)
        print(ret)


    count = 0
    for i in range(10):  # 10 pages
        main(count)
        count += 25
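
    The two versions return matches differently: com.finditer() yields match objects whose named groups are read with group(), while re.findall() with multiple groups returns plain tuples in group order. A small sketch of the difference (the short pattern and text below are illustrative, not taken from the Douban page):

    import re

    pattern = r'(?P<id>\d+)\s+(?P<title>\S+)'
    text = "1 肖申克的救赎"

    m = next(re.finditer(pattern, text))
    print(m.group("id"), m.group("title"))  # named access: 1 肖申克的救赎

    print(re.findall(pattern, text))        # tuples in group order: [('1', '肖申克的救赎')]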

     Regular expression explained:
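
    The same pattern can be rewritten in re.VERBOSE form so each piece carries a comment; this is a sketch with identical matching behavior (spaces inside the HTML are escaped because verbose mode ignores unescaped whitespace):

    import re

    com = re.compile(r'''
        <div\ class="item">                                             # one movie entry
        .*?<div\ class="pic">
        .*?<em\ .*?>(?P<id>\d+)                                         # rank number inside <em>
        .*?<span\ class="title">(?P<title>.*?)</span>                   # movie title
        .*?<span\ class="rating_num"\ .*?>(?P<rating_num>.*?)</span>    # score
        .*?<span>(?P<comment_num>.*?)评价</span>                        # number of people who rated
    ''', re.S | re.X)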

    
    
  • Original article: https://www.cnblogs.com/xc-718/p/9751113.html