  • Crawl all of the campus news

    Assignment link: https://edu.cnblogs.com/campus/gzcc/GZCC-16SE1/homework/3002

    0. Get the click count from a news url, and wrap it up as a function

    • newsUrl
    • newsId(re.search())
    • clickUrl(str.format())
    • requests.get(clickUrl)
    • re.search()/.split()
    • str.lstrip(),str.rstrip()
    • int
    • wrap the above into a function
    • also wrap fetching the news publish time (and its type conversion) into a function
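    The bullets above can be sketched as one small helper: pull the news id out of the url with re.search, build the click-count api url with str.format, then strip and convert the count. The sample url and response body below are illustrative (the real click api returns JavaScript of the form $('#hits').html('203');):

    ```python
    import re

    def parse_click_count(news_url, click_body):
        # step 0 sketch: newsId via re.search(), clickUrl via str.format()
        news_id = re.search(r'(\d+)\.html', news_url).group(1)
        click_url = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(news_id)
        # the click api body looks like: $('#hits').html('203');
        hits = re.search(r"\.html\('(\d+)'\)", click_body).group(1)
        return click_url, int(hits.strip())   # strip whitespace, then int()

    url, clicks = parse_click_count(
        'http://news.gzcc.cn/html/2019/xiaoyuanxinwen_0404/11077.html',
        "$('#hits').html('203');")
    ```

    Doing the request itself is then just requests.get(click_url) applied to the returned url, as in the full code further down.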

    1. Get the news details from a news url: a dictionary, anews

    2. Get the news urls from a list-page url: list append(dict), alist

    3. Generate the urls of all the list pages and fetch all the news: list extend(list), allnews

    *Each student crawls the 10 list pages starting from the last digit of their student id
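    One way to read that requirement (the exact digit-to-page mapping is an assumption; the code below uses pages 20–29, i.e. a trailing digit of 2):

    ```python
    def list_page_urls(last_digit, base='http://news.gzcc.cn/html/xiaoyuanxinwen/'):
        # hypothetical mapping: trailing digit d -> list pages d*10 .. d*10+9
        start = last_digit * 10
        return ['{}{}.html'.format(base, i) for i in range(start, start + 10)]

    urls = list_page_urls(2)   # pages 20.html .. 29.html
    ```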

    4. Set a reasonable crawl interval

    import time
    import random

    time.sleep(random.random()*3)

    5. Do some simple data processing with pandas and save it

    Save to a csv or excel file

    newsdf.to_csv(r'F:\duym\爬虫\gzccnews.csv')
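    A runnable sketch of that step, writing to an in-memory buffer instead of a fixed path (the demo row is made up; for a real file, `encoding='utf_8_sig'` makes Excel open Chinese text correctly):

    ```python
    import io
    import pandas as pd

    newsdf = pd.DataFrame([{'newsTitle': 'demo', 'newsClick': 203}])
    buf = io.StringIO()
    newsdf.to_csv(buf, index=False)     # index=False drops the row-index column
    csv_text = buf.getvalue()
    # to write a file instead: newsdf.to_csv('gzccnews.csv', encoding='utf_8_sig')
    ```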

    from bs4 import BeautifulSoup
    import requests
    import re
    import pandas as pd
    from datetime import datetime
    
    def htmlsurl():
        url = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
        htmlurls = []
        for i in range(20,30):
            htmlurls.append(url+str(i)+'.html')
        return htmlurls
    def getclicktime(url):
        # newsId via re, then build the click-count api url with str.format
        newsid = re.findall(r'(\d+)\.html', url)[0]
        clickurl = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(newsid)
        clickhtml = requests.get(clickurl)
        clickhtml.encoding = 'utf-8'
        # the response looks like $('#hits').html('203'); grab the number
        return int(re.search(r"hits'\)\.html\('(\d+)'\)", clickhtml.text).group(1))
    
    def getDt(FbSj):
        FbSj = ' '.join(FbSj)[5:]
        FbSj = datetime.strptime(FbSj,'%Y-%m-%d %H:%M:%S')
        return FbSj
    
    import time
    import random

    def alist():
        newsList=[]
        htmlurls = htmlsurl()
        for url in htmlurls:
            time.sleep(random.random()*3)   # reasonable crawl interval (step 4)
            html = requests.get(url)
            html.encoding = 'utf-8'
            soup = BeautifulSoup(html.text,'html.parser')
            for news in soup.select('li'):
                if len(news.select('.news-list-title'))>0:
                    newsurl = news.select('a')[0]['href']
                    newsDict = anews(newsurl)
                    newsDict['newsUrl'] = newsurl
                    newsDict['description'] = news.select('.news-list-description')[0].text
                    newsList.append(newsDict)
        return newsList
    def anews(url):
        newsDetail ={}
        res = requests.get(url)
        res.encoding = 'utf-8'
        soup= BeautifulSoup(res.text,'html.parser')
        newsDetail['newsTitle'] = soup.select('.show-title')[0].text
        newsDetail['newsClick'] = getclicktime(url)
        FbSj = soup.select('.show-info')[0].text.split()[0:2]
        newsDetail['newsDate'] = getDt(FbSj)
        return newsDetail
    
    
    text = pd.DataFrame(alist())
    text.to_csv(r'E:\gzccnews.csv')
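
    The date handling in getDt can be checked offline: '发布时间:' is five characters, so slicing with [5:] leaves just the timestamp for strptime (the sample strings below mimic what .show-info text.split()[0:2] yields):

    ```python
    from datetime import datetime

    parts = ['发布时间:2019-04-01', '11:25:19']   # what .text.split()[0:2] yields
    raw = ' '.join(parts)[5:]                     # drop the 5-char '发布时间:' prefix
    dt = datetime.strptime(raw, '%Y-%m-%d %H:%M:%S')
    ```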

  • Original post: https://www.cnblogs.com/Mram/p/10697540.html