Work has been pretty busy lately, so blog updates have slowed down again. I got back from a trip a few days ago: colleagues from my department and I visited the Fuchun River, Sanqingshan, and the Yaolin Wonderland. It was a good time, with a boat ride, some mountain climbing, and plenty of beautiful scenery. Anyway, enough small talk. Today I want to share a small Python application for batch-querying the PageRank (PR) of websites. A while back I needed to screen backlink resources in bulk, and a backlink site's PR is an important metric, especially for anyone doing SEO for Google: naturally we want to filter out lots of effective, high-PR backlink resources. Since there were quite a few sites to check, doing it with a program was the only realistic option. The code is posted below; run it yourself if you're curious. The sites to query are read from a file here, but you could also keep them in a database and read them from there. Likewise, the results are written to a file, and you can modify the code to store them in a database instead (a rough database variant is sketched after the code). First, the input and output formats.
info.txt contains one site per line:
xxx.com
xxxx.com
And the output looks like this:
xxx.com,1
xxxx.com,3
The first field is the URL and the second is its corresponding PR; if a site's lookup fails, its PR is recorded as -1.
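Since the whole point is to pick out high-PR resources, here is a minimal sketch of how you might filter pr.txt once the script below has produced it. The threshold of 4 and the file name good.txt are illustrative choices of mine, not part of the original script:

# Keep only sites whose PR is 4 or higher; the threshold and the
# 'good.txt' name are illustrative assumptions, not from the script.
out = open('good.txt', 'w')
for line in open('pr.txt', 'r'):
    site, pr = line.strip().rsplit(',', 1)
    if int(pr) >= 4:  # failed lookups are -1, so they are skipped too
        out.write(site + '\n')
out.close()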
I originally wanted to use the API that Google provides, but that endpoint no longer seems to be reachable, so I fall back on chinaz's PR lookup page and scrape the relevant information from it with my own Python code.
The main ingredients here are httplib, urllib, and Python's regular expressions; interested readers can consult their documentation. Disclaimer: this program is for learning purposes only; any commercial use of it has nothing to do with me.
# -*- coding: utf-8 -*-
import re, urllib, httplib, time

def get_url(url):
    '''Normalize a URL, returning just the host part. Expects the URL
    to start with http:// or https://.'''
    host_re = re.compile(r'^https?://(.*?)($|/)', re.IGNORECASE)
    # group(1) is the captured host, without the scheme or trailing slash
    return host_re.search(url).group(1)

def get_pr(url):
    '''Fetch the PR of a site by scraping pr.chinaz.com.'''
    params = urllib.urlencode({'PRAddress': url})
    headers = {"Content-type": "application/x-www-form-urlencoded",
               "Accept": "text/plain",
               "User-agent": "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)",
               "Referer": "http://pr.chinaz.com/?PRAddress=www.baidu.com"
               }
    # Step 1: request the query page; it embeds a one-time 'enkey' token
    # that the ajax endpoint below expects.
    conn = httplib.HTTPConnection("pr.chinaz.com")
    conn.request("GET", "/?" + params, headers=headers)
    response = conn.getresponse()
    data = response.read()
    datautf8 = data.decode('utf-8')
    # The 32-character key sits right after the string 'enkey' in the page
    # source; this fixed-offset slice will break if chinaz changes its markup.
    posin = datautf8.find('enkey')
    keyinfo = datautf8[posin + 6:posin + 38]
    # Step 2: call the ajax endpoint with the token to get the actual PR.
    opener = urllib.FancyURLopener()
    opener.addheaders = [
        ('User-agent', 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)')
    ]
    hosturl = "http://pr.chinaz.com/ajaxsync.aspx?at=pr&enkey=%s&url=%s" % (keyinfo, url)
    info = opener.open(hosturl).read()
    # The response is short; the first digit in it is the PR value.
    num_re = re.compile(r'[0-9]')
    pr_num = num_re.search(info.decode('utf-8')).group(0)
    print pr_num
    return pr_num

f = open('pr.txt', 'w')
for m in open('info.txt', 'r'):
    murl = m.strip()
    # murl = get_url(murl)  # uncomment to reduce full URLs to bare hosts
    try:
        prnum = get_pr(murl)
    except Exception:
        prnum = -1  # -1 marks a failed lookup
    f.write("%s,%s\n" % (murl, prnum))
    time.sleep(5)  # be polite to chinaz so we don't get blocked
f.close()
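As mentioned at the top, the file-based input and output are easy to swap for a database. Below is a minimal sketch using the sqlite3 module from the standard library, reusing the get_pr() function defined above; the pr.db file and the sites/results table names are assumptions of mine, so adapt them to your own schema:

import sqlite3, time
# Hedged sketch: same polling loop, but the site list comes from a
# 'sites' table and results go into a 'results' table. The database
# file and table names are assumed, not from the original post.
conn = sqlite3.connect('pr.db')
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS results (site TEXT, pr INTEGER)")
# fetchall() first, so the INSERTs below do not disturb the iteration
for (site,) in cur.execute("SELECT site FROM sites").fetchall():
    try:
        prnum = int(get_pr(site))  # get_pr() is defined above
    except Exception:
        prnum = -1  # same failure marker as the file version
    cur.execute("INSERT INTO results VALUES (?, ?)", (site, prnum))
    conn.commit()
    time.sleep(5)  # keep the same throttling as the file version
conn.close()

Committing after every row means you keep whatever partial results were collected if the script dies halfway through a long list.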
I've also previously written a post on scraping Google search results with Python; you might find it worth a look.