  • Consistent Hashing: Algorithm and Application Scenarios (repost)

    Original article; when reposting, please credit the source: LANCEYAN.COM

    Permalink: Consistent hashing and its application in a Solr distributed search engine over tens of millions of documents

    Most internet startups are bootstrapped: there are no powerful servers and no budget for an expensive database built for massive data. Yet under these harsh conditions, wave after wave of founders have succeeded, and that owes a great deal to today's open-source software and big-data architectures. With open-source components such as MySQL and Nginx, a sound architecture, and cheap servers, you can build a system that serves tens of millions of users. Large internet companies such as Sina Weibo, Taobao, and Tencent built their platforms on many free, open-source systems. So what you use does not matter, as long as you apply a reasonable solution to the situation at hand.

    So how do you build a good system architecture? That topic is far too broad, so here we only discuss one aspect: how to split data across machines. Suppose our database server can store only 200 records, and an upcoming campaign is expected to bring in 600.
    There are two options: scale out (horizontally) or scale up (vertically).
    Scaling up means upgrading the server's hardware, but the better the machine's specification, the higher its price, and that cost is more than most small companies can bear.
    Scaling out means using several cheap machines to provide the service. If one machine can handle 200 records, three machines can handle 600, and if the business keeps growing, more machines can be provisioned quickly. In most cases scaling out is the choice. As shown below:
    Figure 1

    Figure 2

    Now there is a question: how are these 600 records routed to the right machines? The distribution needs to be balanced. Suppose the 600 records carry sequential auto-increment ids from 1 to 600; splitting them into 3 groups can then be done with id mod 3. In a real environment, however, the key is often a string rather than a numeric id, so the string is first converted to a hash code and the modulus is taken on that.
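    As an illustration (this snippet is not from the original article; the key strings and class name are made up), a minimal Java sketch of such hashCode-mod-N routing could look like this:

    // Illustrative only: route a string key to one of N machines using hashCode mod N.
    // Math.floorMod is used because String.hashCode() may be negative.
    public class ModRouter {
        public static int route(String key, int machineCount) {
            return Math.floorMod(key.hashCode(), machineCount);
        }

        public static void main(String[] args) {
            for (String key : new String[] {"user:1", "user:2", "user:3"}) {
                System.out.println(key + " -> machine " + route(key, 3));
            }
        }
    }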

    For the moment it looks like the problem is solved: the data is spread out nicely and the system is not overloaded. But once the data has to be stored and read back, it is no longer so easy. What happens when the business grows? Following the horizontal scaling above, everyone knows to add another server, yet it is exactly this extra server that causes trouble. Consider this example: nine numbers must be placed on 2 machines (1 and 2). Machine 1 stores 1, 3, 5, 7, 9 and machine 2 stores 2, 4, 6, 8. What happens if we add a machine 3? The data has to undergo a large migration: machine 1 stores 1, 4, 7, machine 2 stores 2, 5, 8, and machine 3 stores 3, 6, 9. As shown:

    Figure 3
    As the figure shows, 3, 5 and 9 were migrated off machine 1 and 4 and 6 were migrated off machine 2; everything was redistributed under the new layout. With little data, redistributing everything is cheap, but with hundreds of millions of records or terabytes of data the cost of this operation is very high, anywhere from several hours to days, and the original database machines run under heavy load during the migration. So you may well ask: is this horizontally scaled architecture simply unreasonable?
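    To make the migration cost concrete, here is a small illustrative Java snippet (not from the original article) that reproduces the example: it assigns ids 1 to 9 to machines with a 1-based modulo rule, first over 2 machines and then over 3, and reports which keys have to move:

    // Reproduce the example above: place ids 1..9 on 2 machines, then on 3,
    // using ((id - 1) mod N) + 1 as the 1-based machine number, and count the moves.
    public class ModMigration {
        static int machineFor(int id, int machineCount) {
            return ((id - 1) % machineCount) + 1;
        }

        public static void main(String[] args) {
            int moved = 0;
            for (int id = 1; id <= 9; id++) {
                int before = machineFor(id, 2);
                int after = machineFor(id, 3);
                if (before != after) {
                    moved++;
                    System.out.println("key " + id + ": machine " + before + " -> machine " + after);
                }
            }
            System.out.println(moved + " of 9 keys had to move"); // prints 5 of 9
        }
    }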

    -----------------------------------------------------------------------

    Consistent hashing was proposed against exactly this background, and it is now widely used in distributed caches such as memcached. Below is a short introduction to how it works. The original paper is at http://dl.acm.org/citation.cfm?id=258660, and there are many good write-ups online, for example http://blog.csdn.net/x15594/article/details/6270242.

    Here is a simple example to illustrate consistent hashing.

    Setup: three machines, numbered 1, 2 and 3,
    and nine values to place: 1, 2, 3, 4, 5, 6, 7, 8, 9.
    Figure: consistent hash algorithm architecture

    Steps
    1. Construct 2^32 virtual slots. Computers live in a world of zeros and ones, so partitioning with a power of two makes it easy to distribute evenly. 2^32 is roughly 4.3 billion; even a gigantic fleet will never reach 4.3 billion servers, so both scalability and balance are taken care of.
    Figure: consistent hashing
    2. Take each of the three machines' IP addresses (the hostname would also do, as long as it uniquely identifies the machine), compute a hash code for each, and map it onto the 2^32 ring. Say machine 1's hash code mod 2^32 works out to 123 (a made-up number), machine 2's to 2300420, and machine 3's to 90203920. The three machines are now mapped onto nodes of this virtual ring of about 4.3 billion positions.
    Figure 5
    3. Hash the data items (1-9) in the same way and place them on the ring by taking the result mod 2^32. Suppose the computed values are 1:10, 2:23564, 3:57, 4:6984, 5:5689632, 6:86546845, 7:122, 8:3300689, 9:135468. Then 1, 3 and 7 fall below 123; 2, 4 and 9 fall above 123 but below 2300420; and 5, 6 and 8 fall above 2300420 but below 90203920. Starting from where each data item lands, walk clockwise and store the item on the first cache node found; if no node is found before passing 2^32, wrap around and store it on the first node. So 1, 3, 7 go to machine 1, 2, 4, 9 go to machine 2, and 5, 6, 8 go to machine 3.
    Figure 6
    At this point you may ask: so far consistent hashing has shown no benefit, and it is even more complex than plain modulo. Now comes the crucial part: suppose we add a machine. Under the old scheme all the data would have to be redistributed across four machines. What does consistent hashing do? Machine 4 joins, and its hash value mod 2^32 works out to 12302012. Now 5 and 8 are above 2300420 but below 12302012, while 6 is above 12302012 and below 90203920. So the only adjustment is that 5 and 8 are removed from machine 3 and added to machine 4; item 6 stays on machine 3.
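    As an illustrative check (not part of the original article), the example can be replayed with a sorted map acting as the ring; the machine names and the long literals below simply mirror the made-up hash values above:

    import java.util.Map;
    import java.util.TreeMap;

    // Each data hash maps to the first machine at or after it on the ring,
    // wrapping around to the first machine if none is found.
    public class RingExample {
        static String lookup(TreeMap<Long, String> ring, long hash) {
            Map.Entry<Long, String> entry = ring.ceilingEntry(hash);
            return entry != null ? entry.getValue() : ring.firstEntry().getValue();
        }

        public static void main(String[] args) {
            TreeMap<Long, String> ring = new TreeMap<Long, String>();
            ring.put(123L, "machine-1");
            ring.put(2300420L, "machine-2");
            ring.put(90203920L, "machine-3");

            long[] dataHashes = {10, 23564, 57, 6984, 5689632, 86546845, 122, 3300689, 135468};
            for (int i = 0; i < dataHashes.length; i++) {
                System.out.println("data " + (i + 1) + " -> " + lookup(ring, dataHashes[i]));
            }

            // Add machine 4 at 12302012: only data items 5 and 8 change owner.
            ring.put(12302012L, "machine-4");
            System.out.println("after adding machine-4:");
            for (int i = 0; i < dataHashes.length; i++) {
                System.out.println("data " + (i + 1) + " -> " + lookup(ring, dataHashes[i]));
            }
        }
    }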
    Figure 7
    Removing a machine works the same way. Suppose machine 2 dies: the only data affected is the data on machine 2, which is migrated to the next node clockwise from it, machine 4 in the figure above.
    Figure 8
    By now the basic idea of consistent hashing should be clear. The algorithm still has a weakness, though: when there are few machine nodes and a lot of data, the distribution can be quite uneven, leaving one server with far more data than the others. To solve this, the mechanism of virtual server nodes is introduced. Say we have only three machines, 1, 2 and 3, and adding more physical machines is not an option. Each machine is virtualized into 3 nodes, i.e. 1a 1b 1c, 2a 2b 2c, 3a 3b 3c, which makes 9 nodes. 1a, 1b and 1c still correspond to physical machine 1, but on the ring they appear as 9 separate nodes, so the data is spread more evenly. As shown:
    Figure 9

    After all this about consistent hashing, what does it have to do with distributed search? We built distributed search on Solr 4, and in our tests a SolrCloud-based cluster took tens of seconds just to commit 20 documents, so we dropped SolrCloud. Instead we hack on the Solr platform ourselves: no ZooKeeper as the distributed coordination layer; we manage the data distribution on our own. Since we manage the distribution ourselves, we have to take care of index creation and index updates, and that is exactly where consistent hashing comes in. The overall architecture is shown below:

    Figure 10
    Index creation and updates have to track the machines' positions on the ring so that, given a data item's key, the right shard can be located for distribution and updates. The concern here is how to build and update the index efficiently and reliably.
    Backup servers protect against the failure of an indexing server; the index can be restored quickly from a backup.
    Read servers provide read/write separation, so that index writes do not slow down queries.
    The cluster management server tracks the state of every server in the cluster and raises alerts.
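    As a sketch of how such key-based routing of index updates might look (this code is not from the described system; the shard names, the hash choice and the printout are all hypothetical), a document key can be mapped to an index shard with the same clockwise lookup:

    import java.util.Map;
    import java.util.TreeMap;

    // Hypothetical shard router: pick the index shard for a document key on the ring.
    public class ShardRouter {
        private final TreeMap<Integer, String> ring = new TreeMap<Integer, String>();

        public ShardRouter(String... shards) {
            for (String shard : shards) {
                ring.put(shard.hashCode(), shard); // a stronger hash (e.g. MD5) would spread better
            }
        }

        public String shardFor(String docKey) {
            Map.Entry<Integer, String> e = ring.ceilingEntry(docKey.hashCode());
            return e != null ? e.getValue() : ring.firstEntry().getValue();
        }

        public static void main(String[] args) {
            ShardRouter router = new ShardRouter("solr-shard-1", "solr-shard-2", "solr-shard-3");
            System.out.println("doc user:42 -> " + router.shardFor("user:42"));
        }
    }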

    As the business grows, the cluster can also be partitioned by data type, for example users, microblog posts, and so on. Build each type on the architecture above and it will satisfy the performance needs of a typical distributed search workload. More on Solr and distributed search in a later post.

    Further reading:
    Java's HashMap also has to resize and rehash as the number of entries grows; when necessary, initialize it with a large enough capacity to avoid rehashing existing entries once it fills up.

    Vaccine: the infinite loop in Java's HashMap http://coolshell.cn/articles/9606.html
    Optimizing the consistent hashing algorithm: how to keep the hit rate unaffected when a new node is added to the ring (by scott, a former colleague at Paipai) http://scottina.iteye.com/blog/650380

    Language implementations:
    http://weblogs.java.net/blog/2007/11/27/consistent-hashing (a Java example)
    http://blog.csdn.net/mayongzhan/archive/2009/06/25/4298834.aspx (a PHP example)
    http://www.codeproject.com/KB/recipes/lib-conhash.aspx (a C example)

     
    --------------------------------------------------------------------- Java section: the original English article -----------------------------------------------------------

    I've bumped into consistent hashing a couple of times lately. The paper that introduced the idea (Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot Spots on the World Wide Web by David Karger et al) appeared ten years ago, although recently it seems the idea has quietly been finding its way into more and more services, from Amazon's Dynamo to memcached (courtesy of Last.fm). So what is consistent hashing and why should you care?

    The need for consistent hashing arose from limitations experienced while running collections of caching machines - web caches, for example. If you have a collection of n cache machines then a common way of load balancing across them is to put object o in cache machine number hash(o) mod n. This works well until you add or remove cache machines (for whatever reason), for then n changes and every object is hashed to a new location. This can be catastrophic since the originating content servers are swamped with requests from the cache machines. It's as if the cache suddenly disappeared. Which it has, in a sense. (This is why you should care - consistent hashing is needed to avoid swamping your servers!)

    It would be nice if, when a cache machine was added, it took its fair share of objects from all the other cache machines. Equally, when a cache machine was removed, it would be nice if its objects were shared between the remaining machines. This is exactly what consistent hashing does - consistently maps objects to the same cache machine, as far as is possible, at least.

    The basic idea behind the consistent hashing algorithm is to hash both objects and caches using the same hash function. The reason to do this is to map the cache to an interval, which will contain a number of object hashes. If the cache is removed then its interval is taken over by a cache with an adjacent interval. All the other caches remain unchanged.

    Demonstration

    Let's look at this in more detail. The hash function actually maps objects and caches to a number range. This should be familiar to every Java programmer - the hashCode method on Object returns an int, which lies in the range -2^31 to 2^31-1. Imagine mapping this range into a circle so the values wrap around. Here's a picture of the circle with a number of objects (1, 2, 3, 4) and caches (A, B, C) marked at the points that they hash to (based on a diagram from Web Caching with Consistent Hashing by David Karger et al):

    consistent_hashing_1.png

    To find which cache an object goes in, we move clockwise round the circle until we find a cache point. So in the diagram above, we see object 1 and 4 belong in cache A, object 2 belongs in cache B and object 3 belongs in cache C. Consider what happens if cache C is removed: object 3 now belongs in cache A, and all the other object mappings are unchanged. If then another cache D is added in the position marked it will take objects 3 and 4, leaving only object 1 belonging to A.

    consistent_hashing_2.png

    This works well, except the size of the intervals assigned to each cache is pretty hit and miss. Since it is essentially random it is possible to have a very non-uniform distribution of objects between caches. The solution to this problem is to introduce the idea of "virtual nodes", which are replicas of cache points in the circle. So whenever we add a cache we create a number of points in the circle for it.

    You can see the effect of this in the following plot which I produced by simulating storing 10,000 objects in 10 caches using the code described below. On the x-axis is the number of replicas of cache points (with a logarithmic scale). When it is small, we see that the distribution of objects across caches is unbalanced, since the standard deviation as a percentage of the mean number of objects per cache (on the y-axis, also logarithmic) is high. As the number of replicas increases the distribution of objects becomes more balanced. This experiment shows that a figure of one or two hundred replicas achieves an acceptable balance (a standard deviation that is roughly between 5% and 10% of the mean).

    ch-graph.png

    Implementation

    For completeness here is a simple implementation in Java. In order for consistent hashing to be effective it is important to have a hash function that mixes well. Most implementations of Object's hashCode do not mix well - for example, they typically produce a restricted number of small integer values - so we have a HashFunction interface to allow a custom hash function to be used. MD5 hashes are recommended here.

    import java.util.Collection;
    import java.util.SortedMap;
    import java.util.TreeMap;

    public class ConsistentHash<T> {

        private final HashFunction hashFunction;
        private final int numberOfReplicas;
        private final SortedMap<Integer, T> circle = new TreeMap<Integer, T>();

        public ConsistentHash(HashFunction hashFunction, int numberOfReplicas,
                              Collection<T> nodes) {
            this.hashFunction = hashFunction;
            this.numberOfReplicas = numberOfReplicas;

            for (T node : nodes) {
                add(node);
            }
        }

        public void add(T node) {
            // Place numberOfReplicas points on the circle for this node.
            for (int i = 0; i < numberOfReplicas; i++) {
                circle.put(hashFunction.hash(node.toString() + i), node);
            }
        }

        public void remove(T node) {
            // Remove all of this node's replica points from the circle.
            for (int i = 0; i < numberOfReplicas; i++) {
                circle.remove(hashFunction.hash(node.toString() + i));
            }
        }

        public T get(Object key) {
            if (circle.isEmpty()) {
                return null;
            }
            int hash = hashFunction.hash(key);
            if (!circle.containsKey(hash)) {
                // Walk clockwise: take the first node at or after the hash,
                // wrapping around to the start of the circle if necessary.
                SortedMap<Integer, T> tailMap = circle.tailMap(hash);
                hash = tailMap.isEmpty() ? circle.firstKey() : tailMap.firstKey();
            }
            return circle.get(hash);
        }

    }
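    The HashFunction interface used above is not included in the original post. Here is a minimal sketch of what it might look like, with an MD5-based implementation as the text recommends (both type names are assumptions, not part of the original code):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    // Minimal hash abstraction assumed by the ConsistentHash class above.
    public interface HashFunction {
        int hash(Object key);
    }

    // One possible implementation: build an int from the first four bytes of an MD5 digest.
    class Md5HashFunction implements HashFunction {
        @Override
        public int hash(Object key) {
            try {
                MessageDigest md5 = MessageDigest.getInstance("MD5");
                byte[] digest = md5.digest(key.toString().getBytes(StandardCharsets.UTF_8));
                return ((digest[0] & 0xFF) << 24)
                     | ((digest[1] & 0xFF) << 16)
                     | ((digest[2] & 0xFF) << 8)
                     |  (digest[3] & 0xFF);
            } catch (NoSuchAlgorithmException e) {
                throw new IllegalStateException("MD5 not available", e);
            }
        }
    }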

    The circle is represented as a sorted map of integers, which represent the hash values, to caches (of type T here).
    When a ConsistentHash object is created each node is added to the circle map a number of times (controlled by numberOfReplicas). The location of each replica is chosen by hashing the node's name along with a numerical suffix, and the node is stored at each of these points in the map.

    To find a node for an object (the get method), the hash value of the object is used to look in the map. Most of the time there will not be a node stored at this hash value (since the hash value space is typically much larger than the number of nodes, even with replicas), so the next node is found by looking for the first key in the tail map. If the tail map is empty then we wrap around the circle by getting the first key in the circle.
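    As a usage illustration (not from the original article; the node names are made up and Md5HashFunction refers to the sketch above), the demonstration class could be exercised like this:

    import java.util.Arrays;

    public class ConsistentHashDemo {
        public static void main(String[] args) {
            // A couple of hundred replicas per node gives an acceptably even spread,
            // as discussed in the plot above; 100 is used here for brevity.
            ConsistentHash<String> ring = new ConsistentHash<String>(
                    new Md5HashFunction(), 100, Arrays.asList("cacheA", "cacheB", "cacheC"));

            System.out.println("object-42 -> " + ring.get("object-42"));

            // Removing a node only remaps the keys that were stored on it.
            ring.remove("cacheB");
            System.out.println("object-42 -> " + ring.get("object-42"));
        }
    }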

    Usage

    So how can you use consistent hashing? You are most likely to meet it in a library, rather than having to code it yourself. For example, as mentioned above, memcached, a distributed memory object caching system, now has clients that support consistent hashing. Last.fm's ketama by Richard Jones was the first, and there is now a Java implementation by Dustin Sallings (which inspired my simplified demonstration implementation above). It is interesting to note that it is only the client that needs to implement the consistent hashing algorithm - the memcached server is unchanged. Other systems that employ consistent hashing include Chord, which is a distributed hash table implementation, and Amazon's Dynamo, which is a key-value store (not available outside Amazon).

     
     
  • Source: https://www.cnblogs.com/killbug/p/3232126.html