Assignment source: https://edu.cnblogs.com/campus/gzcc/GZCC-16SE1/homework/3002
0. Get the click count from a news URL, and wrap it in a function
- newsUrl
- newsId (re.search())
- clickUrl (str.format())
- requests.get(clickUrl)
- re.search() / .split()
- str.lstrip(), str.rstrip()
- int
- wrap the above into a function
- also wrap fetching the news publish time and its type conversion into a function
```python
import re

url = 'http://news.gzcc.cn/html/2019/xiaoyuanxinwen_0320/11029.html'
clickurl = 'http://oa.gzcc.cn/api.php?op=count&id=11029&modelid=80'

re.match('http://news.gzcc.cn/html/2019/xiaoyuanxinwen_0320/(.*).html', url)
re.match('http://news.gzcc.cn/html/2019/xiaoyuanxinwen_0320/(.*).html', url).groups(0)
re.search('/(\d*).html', url).group(1)
re.findall('(\d+)', url)
```
The results are as follows:
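For the fixed `url` above, the four expressions evaluate to:

```python
# <re.Match object ...>        (the whole url matches)
# ('11029',)
# '11029'
# ['2019', '0320', '11029']
```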
1. Get the news details from a news URL: a dictionary, anews
```python
import requests
from bs4 import BeautifulSoup
from datetime import datetime
import re

def click(url):
    # extract the news id (the last run of digits in the url)
    id = re.findall('(\d{1,5})', url)[-1]
    clickUrl = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(id)
    resClick = requests.get(clickUrl)
    newsClick = int(resClick.text.split('.html')[-1].lstrip("('").rstrip("');"))
    return newsClick

def newsdt(showinfo):
    # parse the publish date and time out of the show-info line
    newsDate = showinfo.split()[0].split(':')[1]
    newsTime = showinfo.split()[1]
    newsDT = newsDate + ' ' + newsTime
    dt = datetime.strptime(newsDT, '%Y-%m-%d %H:%M:%S')
    return dt

def anews(url):
    newsDetail = {}
    res = requests.get(url)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    newsDetail['newsTitle'] = soup.select('.show-title')[0].text
    showinfo = soup.select('.show-info')[0].text
    newsDetail['newsDT'] = newsdt(showinfo)
    newsDetail['newsClick'] = click(url)
    return newsDetail

newsUrl = 'http://news.gzcc.cn/html/2005/xiaoyuanxinwen_0710/4.html'
anews(newsUrl)
```
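The parsing inside `click()` depends on the shape of the count API's response, which appears to be a short JavaScript snippet with the click count inside a `.html('...')` call. A minimal sketch of just that step, using a hypothetical sample string rather than a captured response:

```python
# Hypothetical sample of the count API's response text (not a real capture).
sample = "$('#hits').html('2337');"
count = int(sample.split('.html')[-1].lstrip("('").rstrip("');"))
print(count)  # 2337
```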
Result:
2. Get news URLs from the list-page URL: list.append(dict), alist
- Fetch the list-page data
```python
listurl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
res = requests.get(listurl)
res.encoding = 'utf-8'
soupn = BeautifulSoup(res.text, 'html.parser')
# a = soupn.select('a')
soupn
```
2. Filter the data, keeping only the news entries in the list
```python
for news in soupn.select('li'):
    if news.select('.news-list-title'):
        print(news)
        newsUrl = news.a['href']
        print(news.a['href'])
```
3. Get the information for the whole page
```python
def alist(listUrl):
    res = requests.get(listUrl)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    newsList = []
    for news in soup.select('li'):
        if len(news.select('.news-list-title')) > 0:
            newsUrl = news.select('a')[0]['href']
            newsDesc = news.select('.news-list-description')[0].text
            newsDict = anews(newsUrl)
            newsDict['newsUrl'] = newsUrl
            newsDict['description'] = newsDesc
            newsList.append(newsDict)
    return newsList

listUrl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
alist(listUrl)
```
3. Generate the URLs of all the list pages and fetch all the news: list.extend(list), allnews
*Each student crawls the 10 list pages starting from the last digit of their student ID
- Fetch multiple pages
- Take the 10 list pages starting from the last digit of the student ID
```python
listUrl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
allnews = alist(listUrl)
for i in range(7, 17):  # student ID ends in 7, so take the 10 pages starting at page 7
    listUrl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/{}.html'.format(i)
    allnews.extend(alist(listUrl))
len(allnews)
```
4. Set a reasonable crawl interval
```python
import time
import random

time.sleep(random.random() * 3)
```
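Note that `random.random() * 3` sleeps a random 0–3 seconds rather than a fixed 3 seconds; randomizing the interval keeps the requests from arriving in a perfectly regular pattern.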
```python
import time
import random

listUrl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
allnews = alist(listUrl)
for i in range(1, 170):  # crawl list pages 1-169
    listUrl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/{}.html'.format(i)
    allnews.extend(alist(listUrl))
    time.sleep(random.random() * 3)  # pause a random 0-3 seconds between pages
```
```python
print(alist(listUrl))
len(allnews)
```
5. Do some simple data processing with pandas and save the result
Save to a CSV or Excel file:
```python
newsdf.to_csv(r'F:\duym\爬虫\gzccnews.csv')
```
- Organize the scraped data with pandas functions
- Print the data in list form
- Show the news whose "newsClick" view count is greater than 2337
- Generate a CSV file (a combined sketch follows the code line below)
```python
newsdf.to_csv(r'E:\gzcc.csv')
```
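A minimal sketch tying these bullets together, assuming `allnews` is the list of dictionaries built in step 3 (the 2337 threshold and the output path are just the values used above):

```python
import pandas as pd

newsdf = pd.DataFrame(allnews)              # organize the scraped data
print(newsdf)                               # print the data
print(newsdf[newsdf['newsClick'] > 2337])   # news viewed more than 2337 times
newsdf.to_csv(r'E:\gzcc.csv')               # generate the CSV file
```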