  • Scraping Zhihu user information with Scrapy

    Today we use the Scrapy framework to crawl the information of (almost) all Zhihu users. The idea is simple: start from a well-known Zhihu user (a "big V", i.e. someone with many followers and followees), fetch the information of his followers and of the people he follows, then do the same for each of those people, layer by layer. This way, almost every account that has at least one follower or follows at least one person will eventually be crawled. Let's get started.

    1. First, as described in the previous post, create the project and then generate a spider file: scrapy genspider zhihu www.zhihu.com.

    Open settings.py and set ROBOTSTXT_OBEY = False, which means pages that Zhihu's robots.txt forbids crawling can still be fetched.
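    In settings.py that is a single line:

    # settings.py
    ROBOTSTXT_OBEY = False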

    Also add a User-Agent, because Zhihu identifies clients by browser and would otherwise block the crawler:

    DEFAULT_REQUEST_HEADERS = {
      'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
      'Accept-Language': 'en',
      'User-agent':'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'
    }

    2. Open the Zhihu site and search for a big V. Here I use vczh, an account with an especially large number of followees and followers.

      Rewrite zhihu.py as follows:

    from scrapy import Request,Spider
    
    class ZhihuSpider(Spider):
        name = 'zhihu'
        allowed_domains = ['www.zhihu.com']
        start_urls = ['http://www.zhihu.com/']
    
        def start_requests(self):        # zhihu.py calls the start_requests function first
            url = 'https://www.zhihu.com/api/v4/members/ji-he-61-7?include=allow_message%2Cis_followed%2Cis_following%2Cis_org%2Cis_blocking%2Cemployments%2Canswer_count%2Cfollower_count%2Carticles_count%2Cgender%2Cbadge%5B%3F(type%3Dbest_answerer)%5D.topics'
            yield Request(url,callback=self.parse)
    
        def parse(self, response):
            print(response.text)

    Pick any user that vczh follows, take that user's Request URL, and request it. The result is printed normally, which shows that a user's detail information (including vczh's own) can be fetched this way.

    Next, swap the URL for the Request URL of the followees list (the people vczh follows); it also returns the correct result, so the information of the people vczh follows can be fetched normally as well.

    Careful observation shows that every user's detail URL differs only in the url_token, so we construct user_url and follows_url accordingly:

    import json

    from scrapy import Request, Spider

    from zhihu_3.items import Zhihu3Item  # Zhihu3Item is defined in the project's items.py


    class ZhihuSpider(Spider):
        name = 'zhihu'
        allowed_domains = ['www.zhihu.com']
        start_urls = ['http://www.zhihu.com/']
    
        start_user = 'excited-vczh'
    
        user_url = 'https://www.zhihu.com/api/v4/members/{user}?include={include}'
        user_query = 'allow_message,is_followed,is_following,is_org,is_blocking,employments,answer_count,follower_count,articles_count,gender,badge[?(type=best_answerer)].topics'
    
        follows_url = 'https://www.zhihu.com/api/v4/members/{user}/followees?include={include}&offset={offset}&limit={limit}'
        follows_query = 'data[*].answer_count,articles_count,gender,follower_count,is_followed,is_following,badge[?(type=best_answerer)].topics'
    
        def start_requests(self):
            yield Request(self.user_url.format(user=self.start_user, include=self.user_query),callback=self.parse_user)
            yield Request(self.follows_url.format(user=self.start_user, include=self.follows_query, offset=0, limit=20),callback=self.parse_follows)
    
        def parse_user(self, response):
            result = json.loads(response.text)
            item = Zhihu3Item()
            for field in item.fields:
                if field in result.keys():
                    item[field] = result.get(field)
            yield item
    
        def parse_follows(self, response):
            results = json.loads(response.text)
            if 'data' in results.keys():
                for result in results.get('data'):
                    yield Request(self.user_url.format(user=result.get('url_token'), include=self.user_query),
                                  self.parse_user)
    
            if 'paging' in results.keys() and results.get('paging').get('is_end') == False:  # if the current page is not the last page
                next_page_str = results.get('paging').get('next')
                next_page = next_page_str.replace('https://www.zhihu.com/', 'https://www.zhihu.com/api/v4/')
                # next_page = next_page_str[0:22] + 'api/v4/' + next_page_str[22:len(next_page_str)]  # this way also works
                yield Request(next_page, self.parse_follows)

    This correctly prints the detailed information of vczh and of all the users he follows.

    3. Next, output the information of the users followed by the users vczh follows:

    Simply add the following line at the end of the parse_user function (see the sketch right after it):

    yield Request(self.follows_url.format(user=result.get('url_token'), include=self.follows_query, offset=0, limit=20), self.parse_follows)
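    For reference, a minimal sketch of parse_user after that line is added; it is identical to the version in the final code of step 4 below:

    def parse_user(self, response):
        result = json.loads(response.text)
        item = Zhihu3Item()
        for field in item.fields:
            if field in result.keys():
                item[field] = result.get(field)
        yield item

        # newly added: also request the followee list of every user we parse
        yield Request(self.follows_url.format(user=result.get('url_token'), include=self.follows_query, offset=0, limit=20),
                      self.parse_follows)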

    4. Besides the people a user follows, we also want the user's followers. Observation shows the followers URL has the same structure as the followees URL; the final rewritten version looks like this:

    import json

    from scrapy import Request, Spider

    from zhihu_3.items import Zhihu3Item  # Zhihu3Item is defined in the project's items.py


    class ZhihuSpider(Spider):
        name = 'zhihu'
        allowed_domains = ['www.zhihu.com']
        start_urls = ['http://www.zhihu.com/']
    
        start_user = 'excited-vczh'
    
        user_url = 'https://www.zhihu.com/api/v4/members/{user}?include={include}'
        user_query = 'allow_message,is_followed,is_following,is_org,is_blocking,employments,answer_count,follower_count,articles_count,gender,badge[?(type=best_answerer)].topics'
    
        follows_url = 'https://www.zhihu.com/api/v4/members/{user}/followees?include={include}&offset={offset}&limit={limit}'
        follows_query = 'data[*].answer_count,articles_count,gender,follower_count,is_followed,is_following,badge[?(type=best_answerer)].topics'
    
        followers_url = 'https://www.zhihu.com/api/v4/members/{user}/followers?include={include}&offset={offset}&limit={limit}'
        followers_query = 'data[*].answer_count,articles_count,gender,follower_count,is_followed,is_following,badge[?(type=best_answerer)].topics'
    
        def start_requests(self):
            yield Request(self.user_url.format(user=self.start_user, include=self.user_query),callback=self.parse_user)
            yield Request(self.follows_url.format(user=self.start_user, include=self.follows_query, offset=0, limit=20),callback=self.parse_follows)
            yield Request(self.followers_url.format(user=self.start_user, include=self.followers_query, offset=0, limit=20),callback=self.parse_followers)
    
        def parse_user(self, response):
            result = json.loads(response.text)
            item = Zhihu3Item()
            for field in item.fields:
                if field in result.keys():
                    item[field] = result.get(field)
            yield item
    
            yield Request(self.follows_url.format(user=result.get('url_token'), include=self.follows_query, offset=0, limit=20),self.parse_follows)
            yield Request(self.followers_url.format(user=result.get('url_token'), include=self.followers_query, offset=0, limit=20),self.parse_followers)
    
        def parse_follows(self, response):
            results = json.loads(response.text)
            if 'data' in results.keys():
                for result in results.get('data'):
                    yield Request(self.user_url.format(user=result.get('url_token'), include=self.user_query),
                                  self.parse_user)
    
            if 'paging' in results.keys() and results.get('paging').get('is_end') == False:  # if the current page is not the last page
                next_page_str = results.get('paging').get('next')
                next_page = next_page_str.replace('https://www.zhihu.com/', 'https://www.zhihu.com/api/v4/')
                # next_page = next_page_str[0:22] + 'api/v4/' + next_page_str[22:len(next_page_str)]  # this way also works
                yield Request(next_page, self.parse_follows)
    
        def parse_followers(self, response):
            results = json.loads(response.text)
            if 'data' in results.keys():
                for result in results.get('data'):
                    yield Request(self.user_url.format(user=result.get('url_token'), include=self.user_query),
                                  self.parse_user)
    
            if 'paging' in results.keys() and results.get('paging').get('is_end') == False:  # if the current page is not the last page
                next_page_str = results.get('paging').get('next')
                next_page = next_page_str.replace('https://www.zhihu.com/', 'https://www.zhihu.com/api/v4/')
                # next_page = next_page_str[0:22] + 'api/v4/' + next_page_str[22:len(next_page_str)]  # this way also works
                yield Request(next_page, self.parse_followers)
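
    Note that parse_user fills a Zhihu3Item, which is not shown above. Here is a minimal sketch of the project's items.py, assuming the fields simply mirror url_token plus the keys requested in user_query (the exact field list is an assumption; adjust it to whatever you actually want to store):

    import scrapy


    class Zhihu3Item(scrapy.Item):
        # assumed field list: url_token plus the keys requested in user_query
        url_token = scrapy.Field()
        name = scrapy.Field()
        gender = scrapy.Field()
        answer_count = scrapy.Field()
        articles_count = scrapy.Field()
        follower_count = scrapy.Field()
        employments = scrapy.Field()
        badge = scrapy.Field()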

    5. Save the output to MongoDB.

         As in the previous post, write the following into pipelines.py:

    import pymongo
    
    
    class MongoPipeline(object):
        def __init__(self, mongo_uri, mongo_db):
            self.mongo_uri = mongo_uri
            self.mongo_db = mongo_db
    
        @classmethod
        def from_crawler(cls, crawler):
            return cls(
                mongo_uri=crawler.settings.get('MONGO_URI'),
                mongo_db=crawler.settings.get('MONGO_DB')
            )
    
        def open_spider(self, spider):
            self.client = pymongo.MongoClient(self.mongo_uri)
            self.db = self.client[self.mongo_db]
    
        def process_item(self, item, spider):
            # dedupe on 'url_token': upsert=True updates the document if it already exists, inserts it otherwise
            self.db['information'].update_one({'url_token': item['url_token']}, {'$set': dict(item)}, upsert=True)
            return item
    
        def close_spider(self, spider):
            self.client.close()

    Write the following into settings.py:

    ITEM_PIPELINES = {
       'zhihu_3.pipelines.MongoPipeline': 300
    }
    MONGO_URI='localhost'
    MONGO_DB = 'zhihu_3'

    Run the spider, and the data is saved to MongoDB successfully.
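
    To confirm the data landed, you can query the collection directly with pymongo (a quick check; the connection string, database name and collection name match the pipeline and settings above):

    import pymongo

    client = pymongo.MongoClient('localhost')
    db = client['zhihu_3']
    # how many user documents have been stored, plus a sample record
    print(db['information'].count_documents({}))
    print(db['information'].find_one({}, {'_id': 0, 'url_token': 1, 'name': 1, 'follower_count': 1}))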
