  • Scraping campus news

    1. Fetch a single news item's title, link, time, source, content and hit count, and wrap this in a function.
    2. Fetch those details for every news item on one list page, and wrap this in a function.
    3. Collect the URLs of all news list pages and call the functions above.
    4. Complete the crawl of all campus news.
    5. Complete the crawl of the corresponding data for another topic of your choice.
    import requests
    import re
    from bs4 import BeautifulSoup

    url = "http://news.gzcc.cn/html/xiaoyuanxinwen/index.html"
    res0 = requests.get(url)
    res0.encoding = "utf-8"
    soup = BeautifulSoup(res0.text, "html.parser")

    for news in soup.select("li"):
        # Only <li> elements that contain a title are real news entries
        if len(news.select(".news-list-title")) > 0:
            net = news.select("a")[0]["href"]
            title = news.select(".news-list-title")[0].text
            time = news.select(".news-list-info")[0].contents[0].text
            main = news.select(".news-list-description")[0].text
            come = news.select(".news-list-info")[0].contents[1].text
            print("URL: {}".format(net))
            print("Title: {}".format(title))
            print("Time: {}".format(time))
            print("Summary: {}".format(main))
            print("Source: {}".format(come))
            # The article id sits between the date segment and ".html" in the URL
            ids = re.search("_(.*).html", net).group(1).split("/")[1]
            count = requests.get("http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80".format(ids))
            # The hit-count API returns a JSONP snippet; strip the jQuery wrapper
            print("Hits: {}".format(int(count.text.split(".")[-1].lstrip("html('").rstrip("');"))))
            print()
            # Fetch the detail page and print the full article body
            mainer = requests.get(net)
            mainer.encoding = "utf-8"
            mainers = BeautifulSoup(mainer.text, "html.parser")
            print(mainers.select(".show-content")[0].text)
    
    
    def one_new():
        # Process every news item on the list page currently held in the
        # module-level `soup`, printing its details and full body
        for news in soup.select("li"):
            if len(news.select(".news-list-title")) > 0:
                net = news.select("a")[0]["href"]
                title = news.select(".news-list-title")[0].text
                time = news.select(".news-list-info")[0].contents[0].text
                main = news.select(".news-list-description")[0].text
                come = news.select(".news-list-info")[0].contents[1].text
                print("URL: {}".format(net))
                print("Title: {}".format(title))
                print("Time: {}".format(time))
                print("Summary: {}".format(main))
                print("Source: {}".format(come))
                ids = re.search("_(.*).html", net).group(1).split("/")[1]
                count = requests.get("http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80".format(ids))
                print("Hits: {}".format(int(count.text.split(".")[-1].lstrip("html('").rstrip("');"))))
                print()
                mainer = requests.get(net)
                mainer.encoding = "utf-8"
                mainers = BeautifulSoup(mainer.text, "html.parser")
                print(mainers.select(".show-content")[0].text)
    
    
    
    # The ".a1" element shows the total article count, e.g. "1234条"
    line = int(soup.select(".a1")[0].text.rstrip("条"))
    page = line // 10 + (1 if line % 10 else 0)   # 10 items per list page
    for i in range(1, page + 1):
        url = "http://news.gzcc.cn/html/xiaoyuanxinwen/" + str(i) + ".html"
        res0 = requests.get(url)
        res0.encoding = "utf-8"
        soup = BeautifulSoup(res0.text, "html.parser")
        one_new()
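The two string-manipulation steps in the crawler (extracting the article id from a detail-page URL, and unwrapping the JSONP reply from the hit-count API) can be checked offline. The sample URL and JSONP string below are hypothetical values in the shape the code above expects:

```python
import re

def news_id(detail_url):
    # Grab the text between the first "_" and ".html", then keep the part
    # after the "/" — the numeric article id.
    return re.search("_(.*).html", detail_url).group(1).split("/")[1]

def parse_count(jsonp_text):
    # The count API answers with a jQuery snippet such as
    # "$('#hits').html('2098');" — keep the segment after the last "."
    # and peel off the html('...'); wrapper.
    return int(jsonp_text.split(".")[-1].lstrip("html('").rstrip("');"))

sample_url = "http://news.gzcc.cn/html/2018/xiaoyuanxinwen_0404/9183.html"
print(news_id(sample_url))                      # -> 9183
print(parse_count("$('#hits').html('2098');"))  # -> 2098
```

Testing these helpers on fixed strings avoids hammering the live site while developing the crawler.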

  • Original post: https://www.cnblogs.com/murasame/p/7651683.html