  • Java Collections (21): HashMap Source Code Analysis (JDK8)

    I. Member Variables and Methods of HashMap (JDK8)

      1. Member variables

        

    (1) The class's serialization version ID
    private static final long serialVersionUID = 362498820763181265L;

    (2) Default initial capacity: 16 (must be a power of two)
    /**
     * The default initial capacity - MUST be a power of two.
     */
    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

    (3) Maximum capacity: 1 << 30 == 2^30
    /**
     * The maximum capacity, used if a higher value is implicitly specified
     * by either of the constructors with arguments.
     * MUST be a power of two <= 1<<30.
     */
    static final int MAXIMUM_CAPACITY = 1 << 30;

    (4) Default load factor: 0.75 (used when none is specified in the constructor)
    /**
     * The load factor used when none specified in constructor.
     */
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    (5) Treeify threshold
    /**
     * The bin count threshold for using a tree rather than list for a
     * bin.  Bins are converted to trees when adding an element to a
     * bin with at least this many nodes. The value must be greater
     * than 2 and should be at least 8 to mesh with assumptions in
     * tree removal about conversion back to plain bins upon
     * shrinkage.
     */
    static final int TREEIFY_THRESHOLD = 8;

    (6) Untreeify threshold
    /**
     * The bin count threshold for untreeifying a (split) bin during a
     * resize operation. Should be less than TREEIFY_THRESHOLD, and at
     * most 6 to mesh with shrinkage detection under removal.
     */
    static final int UNTREEIFY_THRESHOLD = 6;

    (7) Minimum table capacity for treeification
    /**
     * The smallest table capacity for which bins may be treeified.
     * (Otherwise the table is resized if too many nodes in a bin.)
     * Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts
     * between resizing and treeification thresholds.
     */
    static final int MIN_TREEIFY_CAPACITY = 64;

    (8) The array that stores the entries
    /**
     * The table, initialized on first use, and resized as
     * necessary. When allocated, length is always a power of two.
     * (We also tolerate length zero in some operations to allow
     * bootstrapping mechanics that are currently not needed.)
     */
    transient Node<K,V>[] table;

    (9) Cached entry-set view of the whole HashMap
    /**
     * Holds cached entrySet(). Note that AbstractMap fields are used
     * for keySet() and values().
     */
    transient Set<Map.Entry<K,V>> entrySet;

    (10) Number of stored mappings
    /**
     * The number of key-value mappings contained in this map.
     */
    transient int size;

    (11) Number of structural modifications made to this HashMap, used for fail-fast iteration
    /**
     * The number of times this HashMap has been structurally modified
     * Structural modifications are those that change the number of mappings in
     * the HashMap or otherwise modify its internal structure (e.g.,
     * rehash).  This field is used to make iterators on Collection-views of
     * the HashMap fail-fast.  (See ConcurrentModificationException).
     */
    transient int modCount;

    (12) Resize threshold: once the number of elements exceeds this value, the map considers expanding (threshold = load factor * capacity)
    /**
     * The next size value at which to resize (capacity * load factor).
     *
     * @serial
     */
    // (The javadoc description is true upon serialization.
    // Additionally, if the table array has not been allocated, this
    // field holds the initial array capacity, or zero signifying
    // DEFAULT_INITIAL_CAPACITY.)
    int threshold;

    (13) Load factor of the hash table
    /**
     * The load factor for the hash table.
     *
     * @serial
     */
    final float loadFactor;
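        To tie these constants together: as the comments above say, a bin is turned into a tree only when an element is added to a bin that already holds at least TREEIFY_THRESHOLD nodes and the table is at least MIN_TREEIFY_CAPACITY long; otherwise the table is resized instead. A tiny sketch of that rule (my own illustration, not the actual JDK source):

    // Illustration only - the decision implied by the constants above, not the real treeifyBin()
    static boolean wouldTreeify(int nodesAlreadyInBin, int tableLength) {
        // putVal() asks for treeification when it appends to a bin that already has
        // TREEIFY_THRESHOLD (8) nodes; treeifyBin() only complies if the table is at
        // least MIN_TREEIFY_CAPACITY (64) long, and resizes the table otherwise.
        return nodesAlreadyInBin >= 8 && tableLength >= 64;
    }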

      2. Method list

        

    II. HashMap Constructors

      HashMap provides four constructors, which fall into two groups. Let's walk through them:

      1. No-arg constructor, or specifying the capacity and load factor

    (1) No-arg constructor: default capacity and load factor
    /**
     * Constructs an empty <tt>HashMap</tt> with the default initial capacity
     * (16) and the default load factor (0.75).
     */
    public HashMap() {
        this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
    }

    (2) Specifying the capacity
    /**
     * Constructs an empty <tt>HashMap</tt> with the specified initial
     * capacity and the default load factor (0.75).
     *
     * @param  initialCapacity the initial capacity.
     * @throws IllegalArgumentException if the initial capacity is negative.
     */
    public HashMap(int initialCapacity) {
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }

    (3) Specifying both the capacity and the load factor
    /**
     * Constructs an empty <tt>HashMap</tt> with the specified initial
     * capacity and load factor.
     *
     * @param  initialCapacity the initial capacity
     * @param  loadFactor      the load factor
     * @throws IllegalArgumentException if the initial capacity is negative
     *         or the load factor is nonpositive
     */
    public HashMap(int initialCapacity, float loadFactor) {
        if (initialCapacity < 0)
            throw new IllegalArgumentException("Illegal initial capacity: " +
                                               initialCapacity);
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new IllegalArgumentException("Illegal load factor: " +
                                               loadFactor);
        this.loadFactor = loadFactor;
        this.threshold = tableSizeFor(initialCapacity);
    }

        The third constructor takes two parameters. The first, initialCapacity, sets the size of the map's backing array; the second, loadFactor, is the load factor, which controls how full the container may get before it is expanded. If you do not pass these parameters, loadFactor defaults to 0.75.

        A careful reader will notice, however, that constructing the map does not create the array: table is only allocated the first time it is used, and at that point threshold becomes capacity * loadFactor (where capacity is the power of two derived from initialCapacity). That product is the container's real carrying capacity.
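        A quick way to confirm this lazy allocation is to peek at the private table and threshold fields with reflection. This is a throwaway sketch of mine, not part of the source; on JDK 9 and later it may additionally need --add-opens java.base/java.util=ALL-UNNAMED:

    import java.lang.reflect.Field;
    import java.util.HashMap;

    public class LazyTableDemo {
        public static void main(String[] args) throws Exception {
            HashMap<String, Integer> map = new HashMap<>(10);

            Field table = HashMap.class.getDeclaredField("table");
            Field threshold = HashMap.class.getDeclaredField("threshold");
            table.setAccessible(true);
            threshold.setAccessible(true);

            // Right after construction: no array yet; threshold temporarily
            // holds tableSizeFor(10) = 16, the future capacity.
            System.out.println(table.get(map));      // null
            System.out.println(threshold.get(map));  // 16

            map.put("a", 1);  // the first put triggers resize(), which allocates the array

            Object[] tab = (Object[]) table.get(map);
            System.out.println(tab.length);          // 16
            System.out.println(threshold.get(map));  // 12, i.e. 16 * 0.75
        }
    }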

        The purpose of tableSizeFor is to find the smallest power of two that is greater than or equal to initialCapacity. The algorithm is elegantly written and well worth savoring.

        The array size of a HashMap is deliberate: it must be a power of two. A clever bit-manipulation routine finds that smallest power of two greater than or equal to initialCapacity:

    /**
     * Returns a power of two size for the given target capacity.
     */
    static final int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }
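        As a concrete illustration (my own walkthrough, not part of the source), here is what the shifts do for cap = 10:

    int cap = 10;
    int n = cap - 1;     // n = 9  = 0b1001
    n |= n >>> 1;        // 0b1001 | 0b0100 = 0b1101
    n |= n >>> 2;        // 0b1101 | 0b0011 = 0b1111 - every bit below the highest set bit is now 1
    n |= n >>> 4;        // still 0b1111; the remaining shifts change nothing for small values
    n |= n >>> 8;
    n |= n >>> 16;
    int result = n + 1;  // 16, the smallest power of two >= 10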

        We will not go into more detail here; a later article explains it thoroughly.

      2. Constructor that takes a Map

    /**
     * Constructs a new <tt>HashMap</tt> with the same mappings as the
     * specified <tt>Map</tt>.  The <tt>HashMap</tt> is created with
     * default load factor (0.75) and an initial capacity sufficient to
     * hold the mappings in the specified <tt>Map</tt>.
     *
     * @param   m the map whose mappings are to be placed in this map
     * @throws  NullPointerException if the specified map is null
     */
    public HashMap(Map<? extends K, ? extends V> m) {
        this.loadFactor = DEFAULT_LOAD_FACTOR;
        putMapEntries(m, false);
    }

        As you can see, the real work happens inside putMapEntries(); it is examined in detail when we cover the insertion process.
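        For completeness, a minimal usage sketch of this constructor (the example data is my own):

    Map<String, Integer> source = new HashMap<>();
    source.put("a", 1);
    source.put("b", 2);

    // Copies every mapping from source; the capacity is sized to hold them,
    // and the load factor is the default 0.75.
    HashMap<String, Integer> copy = new HashMap<>(source);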

    III. Nodes in HashMap

      In JDK7, HashMap was built on an array plus linked lists, so it only had a list node. In JDK8 it is built on an array plus linked lists/red-black trees, so besides the list node there is also a corresponding tree node.

      1. Linked-list node

    /**
     * Basic hash bin node, used for most entries.  (See below for
     * TreeNode subclass, and in LinkedHashMap for its Entry subclass.)
     */
    static class Node<K,V> implements Map.Entry<K,V> {
        final int hash;
        final K key;
        V value;
        Node<K,V> next;

        Node(int hash, K key, V value, Node<K,V> next) {
            this.hash = hash;
            this.key = key;
            this.value = value;
            this.next = next;
        }

        public final K getKey()        { return key; }
        public final V getValue()      { return value; }
        public final String toString() { return key + "=" + value; }

        public final int hashCode() {
            return Objects.hashCode(key) ^ Objects.hashCode(value);
        }

        public final V setValue(V newValue) {
            V oldValue = value;
            value = newValue;
            return oldValue;
        }

        public final boolean equals(Object o) {
            if (o == this)
                return true;
            if (o instanceof Map.Entry) {
                Map.Entry<?,?> e = (Map.Entry<?,?>)o;
                if (Objects.equals(key, e.getKey()) &&
                    Objects.equals(value, e.getValue()))
                    return true;
            }
            return false;
        }
    }

        In JDK8, the JDK7 Entry node became the Node node, but the internal structure of the class did not change; only the name did.

        As the comment above explains, this is the basic hash bin node used for most entries; the tree node (TreeNode) and the entry node in LinkedHashMap are both subclasses of it.
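        Because Node implements Map.Entry, the entries you receive when iterating entrySet() are these very nodes; a small usage sketch (the example data is my own):

    HashMap<String, Integer> map = new HashMap<>();
    map.put("a", 1);
    map.put("b", 2);
    for (Map.Entry<String, Integer> e : map.entrySet()) {
        // each entry here is a HashMap.Node (or TreeNode) stored in the table
        System.out.println(e.getKey() + " = " + e.getValue());
    }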

      2. Tree node

        TreeNode is the red-black tree node maintained by HashMap. It contains many methods (rotations, recoloring, and so on, which are not analyzed here); we only look at the fields and the constructor. Note that the tree node extends the entry node of LinkedHashMap, which maintains before and after references; on top of that, TreeNode adds a parent reference, left and right children, and a prev reference.

    The entry node in LinkedHashMap:
    /**
     * HashMap.Node subclass for normal LinkedHashMap entries.
     */
    static class Entry<K,V> extends HashMap.Node<K,V> {
        Entry<K,V> before, after;
        Entry(int hash, K key, V value, Node<K,V> next) {
            super(hash, key, value, next);
        }
    }

    The TreeNode:
    /**
     * Entry for Tree bins. Extends LinkedHashMap.Entry (which in turn
     * extends Node) so can be used as extension of either regular or
     * linked node.
     */
    static final class TreeNode<K,V> extends LinkedHashMap.Entry<K,V> {
        TreeNode<K,V> parent;  // red-black tree links
        TreeNode<K,V> left;
        TreeNode<K,V> right;
        TreeNode<K,V> prev;    // needed to unlink next upon deletion
        boolean red;
        TreeNode(int hash, K key, V val, Node<K,V> next) {
            super(hash, key, val, next);
        }
    }

        TreeNode method list:

        


    IV. Allocating the table Array in HashMap

      The constructors do not allocate the table array; at that point only its capacity has been determined. Space is allocated, according to that capacity, the first time an element is added to the HashMap.

      In the constructor, tableSizeFor(int cap) is called to make sure the array capacity is the smallest power of two greater than or equal to initialCapacity:

    /**
     * Returns a power of two size for the given target capacity.
     */
    static final int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

      

      Now let's look at an abridged version of putVal(), the method behind put() (the else branch is elided):

final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
               boolean evict) {
    Node<K,V>[] tab; Node<K,V> p; int n, i;
    if ((tab = table) == null || (n = tab.length) == 0)
        n = (tab = resize()).length;
    if ((p = tab[i = (n - 1) & hash]) == null)
        tab[i] = newNode(hash, key, value, null);
    else {...}
    ++modCount;
    if (++size > threshold)
        resize();
    afterNodeInsertion(evict);
    return null;
}

      As you can see, the first time put() runs, table is checked for null; when it is null, resize() is called to allocate it.

      Likewise, the check if (++size > threshold) shows that once size exceeds the threshold, the expansion method resize() is called. So both the initial allocation of table and later expansion happen inside resize().
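      With the default settings the arithmetic works out as follows (my own illustration):

    int capacity  = 16;                         // DEFAULT_INITIAL_CAPACITY
    float factor  = 0.75f;                      // DEFAULT_LOAD_FACTOR
    int threshold = (int) (capacity * factor);  // 12
    // putVal() increments size before comparing (++size > threshold), so inserting
    // the 13th mapping makes size == 13 > 12 and triggers resize(), which doubles
    // the table to 32 and raises the threshold to 24.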

    V. The hash() Function

      JDK8 also optimized HashMap's hash function:

    /**
     * Computes key.hashCode() and spreads (XORs) higher bits of hash
     * to lower.  Because the table uses power-of-two masking, sets of
     * hashes that vary only in bits above the current mask will
     * always collide. (Among known examples are sets of Float keys
     * holding consecutive whole numbers in small tables.)  So we
     * apply a transform that spreads the impact of higher bits
     * downward. There is a tradeoff between speed, utility, and
     * quality of bit-spreading. Because many common sets of hashes
     * are already reasonably distributed (so don't benefit from
     * spreading), and because we use trees to handle large sets of
     * collisions in bins, we just XOR some shifted bits in the
     * cheapest possible way to reduce systematic lossage, as well as
     * to incorporate impact of the highest bits that would otherwise
     * never be used in index calculations because of table bounds.
     */
    static final int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

      Roughly, the comment says:

    It computes key.hashCode() and spreads (XORs) the higher bits of the hash down into the lower bits. Because the table uses power-of-two masking, sets of hashes that vary only in the bits above the current mask would always collide (a known example is sets of Float keys holding consecutive whole numbers in small tables). So a transform is applied that spreads the influence of the higher bits downward. There is a trade-off between speed, utility, and quality of bit-spreading: because many common sets of hashes are already reasonably distributed (and so do not benefit from spreading), and because trees are used to handle large numbers of collisions in bins, the cheapest possible thing is done, XORing some shifted bits, to reduce systematic lossage and to incorporate the influence of the highest bits, which would otherwise never be used in index calculations because of the table bounds.

      Compared with the multiple rounds of shifts and XORs in JDK7's hash function, JDK8 needs only a single shift and XOR, which is noticeably cheaper; this is one of the bigger changes relative to JDK7.
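      A small illustration of what that single shift-and-XOR buys (the values are my own example):

    int h1 = 0x00010001;               // two hashCodes that differ only in the high 16 bits
    int h2 = 0x7FFF0001;
    int mask = 16 - 1;                 // index mask for a table of length 16

    System.out.println(mask & h1);                  // 1
    System.out.println(mask & h2);                  // 1  -> the raw hashCodes always collide
    System.out.println(mask & (h1 ^ (h1 >>> 16)));  // 0
    System.out.println(mask & (h2 ^ (h2 >>> 16)));  // 14 -> the spread hashes no longer collide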

    VI. Computing an Element's Index in the Array

      JDK7 had a dedicated method, indexFor(), that computed an element's index from the hash value and the table length. JDK8 has no such method, but it computes the index in exactly the same way:

final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
               boolean evict) {
    Node<K,V>[] tab; Node<K,V> p; int n, i;
    if ((tab = table) == null || (n = tab.length) == 0)
        n = (tab = resize()).length;
    if ((p = tab[i = (n - 1) & hash]) == null)
        tab[i] = newNode(hash, key, value, null);
    else {...}
    ++modCount;
    if (++size > threshold)
        resize();
    afterNodeInsertion(evict);
    return null;
}

      As the code shows, JDK8 computes the index as (table length - 1) & hash, which is no different from JDK7.
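      A short sketch of why the mask works (the example values are my own): when the length n is a power of two, n - 1 is an all-ones mask over the low bits, so the AND is equivalent to a non-negative remainder:

    int n = 16;                                  // table length, always a power of two
    int hash = -7;                               // spread hash values may be negative
    System.out.println((n - 1) & hash);          // 9
    System.out.println(Math.floorMod(hash, n));  // 9 - same result, but the mask is a single AND
    // A plain hash % n would return -7 here, an invalid index; the mask never goes negative.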

    VII. Adding Elements: the put() Family

    VIII. Looking Up Elements: the get() Family

    IX. Removing Elements: the remove() Family

    X. Dynamic Expansion with resize()

    XI. Cloning and Serialization

      1. The clone() method

    @Override
    public Object clone() {
        HashMap<K,V> result;
        try {
            result = (HashMap<K,V>)super.clone();
        } catch (CloneNotSupportedException e) {
            // this shouldn't happen, since we are Cloneable
            throw new InternalError(e);
        }
        result.reinitialize();
        result.putMapEntries(this, false);
        return result;
    }
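        Note that, as the call to putMapEntries() suggests, this is a shallow copy: the table and nodes are rebuilt, but keys and values are the same object references. A quick sketch (the example data is my own):

    HashMap<String, StringBuilder> original = new HashMap<>();
    original.put("k", new StringBuilder("v"));

    @SuppressWarnings("unchecked")
    HashMap<String, StringBuilder> copy = (HashMap<String, StringBuilder>) original.clone();

    copy.get("k").append("!");              // mutates the shared value object
    System.out.println(original.get("k"));  // v! - the value is shared, not copied
    System.out.println(copy == original);   // false - the map structure itself is new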

      2. Serialization and deserialization

    private void writeObject(java.io.ObjectOutputStream s)
        throws IOException {
        int buckets = capacity();
        // Write out the threshold, loadfactor, and any hidden stuff
        s.defaultWriteObject();
        s.writeInt(buckets);
        s.writeInt(size);
        internalWriteEntries(s);
    }
    // Called only from writeObject, to ensure compatible ordering.
    void internalWriteEntries(java.io.ObjectOutputStream s) throws IOException {
        Node<K,V>[] tab;
        if (size > 0 && (tab = table) != null) {
            for (int i = 0; i < tab.length; ++i) {
                for (Node<K,V> e = tab[i]; e != null; e = e.next) {
                    s.writeObject(e.key);
                    s.writeObject(e.value);
                }
            }
        }
    }

    private void readObject(java.io.ObjectInputStream s)
        throws IOException, ClassNotFoundException {
        // Read in the threshold (ignored), loadfactor, and any hidden stuff
        s.defaultReadObject();
        reinitialize();
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new InvalidObjectException("Illegal load factor: " +
                                             loadFactor);
        s.readInt();                // Read and ignore number of buckets
        int mappings = s.readInt(); // Read number of mappings (size)
        if (mappings < 0)
            throw new InvalidObjectException("Illegal mappings count: " +
                                             mappings);
        else if (mappings > 0) { // (if zero, use defaults)
            // Size the table using given load factor only if within
            // range of 0.25...4.0
            float lf = Math.min(Math.max(0.25f, loadFactor), 4.0f);
            float fc = (float)mappings / lf + 1.0f;
            int cap = ((fc < DEFAULT_INITIAL_CAPACITY) ?
                       DEFAULT_INITIAL_CAPACITY :
                       (fc >= MAXIMUM_CAPACITY) ?
                       MAXIMUM_CAPACITY :
                       tableSizeFor((int)fc));
            float ft = (float)cap * lf;
            threshold = ((cap < MAXIMUM_CAPACITY && ft < MAXIMUM_CAPACITY) ?
                         (int)ft : Integer.MAX_VALUE);

            // Check Map.Entry[].class since it's the nearest public type to
            // what we're actually creating.
            SharedSecrets.getJavaOISAccess().checkArray(s, Map.Entry[].class, cap);
            @SuppressWarnings({"rawtypes","unchecked"})
            Node<K,V>[] tab = (Node<K,V>[])new Node[cap];
            table = tab;

            // Read the keys and values, and put the mappings in the HashMap
            for (int i = 0; i < mappings; i++) {
                @SuppressWarnings("unchecked")
                    K key = (K) s.readObject();
                @SuppressWarnings("unchecked")
                    V value = (V) s.readObject();
                putVal(hash(key), key, value, false, false);
            }
        }
    }
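        A round-trip sketch showing these methods in action through the normal ObjectOutputStream/ObjectInputStream API (my own example):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.util.HashMap;

    public class HashMapSerializationDemo {
        public static void main(String[] args) throws Exception {
            HashMap<String, Integer> map = new HashMap<>();
            map.put("a", 1);
            map.put("b", 2);

            // Serialization: writeObject() records the bucket count, the size,
            // and then each key and value in table order.
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(map);
            }

            // Deserialization: readObject() recomputes a suitable table size from
            // the mapping count and load factor, then re-inserts every pair via
            // putVal(), so the hashes are recomputed as well.
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                @SuppressWarnings("unchecked")
                HashMap<String, Integer> restored = (HashMap<String, Integer>) in.readObject();
                System.out.println(restored);  // e.g. {a=1, b=2}
            }
        }
    }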

    XII. Iteration

     

    XIII. Other Methods

    XIV. Methods Added in JDK8

        @Override
        public V getOrDefault(Object key, V defaultValue) {
            Node<K,V> e;
            return (e = getNode(hash(key), key)) == null ? defaultValue : e.value;
        }
    
        @Override
        public V putIfAbsent(K key, V value) {
            return putVal(hash(key), key, value, true, true);
        }
    
        @Override
        public boolean remove(Object key, Object value) {
            return removeNode(hash(key), key, value, true, true) != null;
        }
    
        @Override
        public boolean replace(K key, V oldValue, V newValue) {
            Node<K,V> e; V v;
            if ((e = getNode(hash(key), key)) != null &&
                ((v = e.value) == oldValue || (v != null && v.equals(oldValue)))) {
                e.value = newValue;
                afterNodeAccess(e);
                return true;
            }
            return false;
        }
    
        @Override
        public V replace(K key, V value) {
            Node<K,V> e;
            if ((e = getNode(hash(key), key)) != null) {
                V oldValue = e.value;
                e.value = value;
                afterNodeAccess(e);
                return oldValue;
            }
            return null;
        }
    
        @Override
        public V computeIfAbsent(K key,
                                 Function<? super K, ? extends V> mappingFunction) {
            if (mappingFunction == null)
                throw new NullPointerException();
            int hash = hash(key);
            Node<K,V>[] tab; Node<K,V> first; int n, i;
            int binCount = 0;
            TreeNode<K,V> t = null;
            Node<K,V> old = null;
            if (size > threshold || (tab = table) == null ||
                (n = tab.length) == 0)
                n = (tab = resize()).length;
            if ((first = tab[i = (n - 1) & hash]) != null) {
                if (first instanceof TreeNode)
                    old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
                else {
                    Node<K,V> e = first; K k;
                    do {
                        if (e.hash == hash &&
                            ((k = e.key) == key || (key != null && key.equals(k)))) {
                            old = e;
                            break;
                        }
                        ++binCount;
                    } while ((e = e.next) != null);
                }
                V oldValue;
                if (old != null && (oldValue = old.value) != null) {
                    afterNodeAccess(old);
                    return oldValue;
                }
            }
            V v = mappingFunction.apply(key);
            if (v == null) {
                return null;
            } else if (old != null) {
                old.value = v;
                afterNodeAccess(old);
                return v;
            }
            else if (t != null)
                t.putTreeVal(this, tab, hash, key, v);
            else {
                tab[i] = newNode(hash, key, v, first);
                if (binCount >= TREEIFY_THRESHOLD - 1)
                    treeifyBin(tab, hash);
            }
            ++modCount;
            ++size;
            afterNodeInsertion(true);
            return v;
        }
    
        public V computeIfPresent(K key,
                                  BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
            if (remappingFunction == null)
                throw new NullPointerException();
            Node<K,V> e; V oldValue;
            int hash = hash(key);
            if ((e = getNode(hash, key)) != null &&
                (oldValue = e.value) != null) {
                V v = remappingFunction.apply(key, oldValue);
                if (v != null) {
                    e.value = v;
                    afterNodeAccess(e);
                    return v;
                }
                else
                    removeNode(hash, key, null, false, true);
            }
            return null;
        }
    
        @Override
        public V compute(K key,
                         BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
            if (remappingFunction == null)
                throw new NullPointerException();
            int hash = hash(key);
            Node<K,V>[] tab; Node<K,V> first; int n, i;
            int binCount = 0;
            TreeNode<K,V> t = null;
            Node<K,V> old = null;
            if (size > threshold || (tab = table) == null ||
                (n = tab.length) == 0)
                n = (tab = resize()).length;
            if ((first = tab[i = (n - 1) & hash]) != null) {
                if (first instanceof TreeNode)
                    old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
                else {
                    Node<K,V> e = first; K k;
                    do {
                        if (e.hash == hash &&
                            ((k = e.key) == key || (key != null && key.equals(k)))) {
                            old = e;
                            break;
                        }
                        ++binCount;
                    } while ((e = e.next) != null);
                }
            }
            V oldValue = (old == null) ? null : old.value;
            V v = remappingFunction.apply(key, oldValue);
            if (old != null) {
                if (v != null) {
                    old.value = v;
                    afterNodeAccess(old);
                }
                else
                    removeNode(hash, key, null, false, true);
            }
            else if (v != null) {
                if (t != null)
                    t.putTreeVal(this, tab, hash, key, v);
                else {
                    tab[i] = newNode(hash, key, v, first);
                    if (binCount >= TREEIFY_THRESHOLD - 1)
                        treeifyBin(tab, hash);
                }
                ++modCount;
                ++size;
                afterNodeInsertion(true);
            }
            return v;
        }
    
        @Override
        public V merge(K key, V value,
                       BiFunction<? super V, ? super V, ? extends V> remappingFunction) {
            if (value == null)
                throw new NullPointerException();
            if (remappingFunction == null)
                throw new NullPointerException();
            int hash = hash(key);
            Node<K,V>[] tab; Node<K,V> first; int n, i;
            int binCount = 0;
            TreeNode<K,V> t = null;
            Node<K,V> old = null;
            if (size > threshold || (tab = table) == null ||
                (n = tab.length) == 0)
                n = (tab = resize()).length;
            if ((first = tab[i = (n - 1) & hash]) != null) {
                if (first instanceof TreeNode)
                    old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
                else {
                    Node<K,V> e = first; K k;
                    do {
                        if (e.hash == hash &&
                            ((k = e.key) == key || (key != null && key.equals(k)))) {
                            old = e;
                            break;
                        }
                        ++binCount;
                    } while ((e = e.next) != null);
                }
            }
            if (old != null) {
                V v;
                if (old.value != null)
                    v = remappingFunction.apply(old.value, value);
                else
                    v = value;
                if (v != null) {
                    old.value = v;
                    afterNodeAccess(old);
                }
                else
                    removeNode(hash, key, null, false, true);
                return v;
            }
            if (value != null) {
                if (t != null)
                    t.putTreeVal(this, tab, hash, key, value);
                else {
                    tab[i] = newNode(hash, key, value, first);
                    if (binCount >= TREEIFY_THRESHOLD - 1)
                        treeifyBin(tab, hash);
                }
                ++modCount;
                ++size;
                afterNodeInsertion(true);
            }
            return value;
        }
    
        @Override
        public void forEach(BiConsumer<? super K, ? super V> action) {
            Node<K,V>[] tab;
            if (action == null)
                throw new NullPointerException();
            if (size > 0 && (tab = table) != null) {
                int mc = modCount;
                for (int i = 0; i < tab.length; ++i) {
                    for (Node<K,V> e = tab[i]; e != null; e = e.next)
                        action.accept(e.key, e.value);
                }
                if (modCount != mc)
                    throw new ConcurrentModificationException();
            }
        }
    
        @Override
        public void replaceAll(BiFunction<? super K, ? super V, ? extends V> function) {
            Node<K,V>[] tab;
            if (function == null)
                throw new NullPointerException();
            if (size > 0 && (tab = table) != null) {
                int mc = modCount;
                for (int i = 0; i < tab.length; ++i) {
                    for (Node<K,V> e = tab[i]; e != null; e = e.next) {
                        e.value = function.apply(e.key, e.value);
                    }
                }
                if (modCount != mc)
                    throw new ConcurrentModificationException();
            }
        }
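
      A quick usage sketch of a few of these methods (the example values are my own):

    HashMap<String, Integer> counts = new HashMap<>();

    counts.putIfAbsent("a", 1);                            // inserts only if "a" is absent
    counts.merge("a", 1, Integer::sum);                    // "a" -> 2
    counts.computeIfAbsent("b", k -> 0);                   // "b" -> 0
    counts.compute("b", (k, v) -> v == null ? 1 : v + 1);  // "b" -> 1
    System.out.println(counts.getOrDefault("c", -1));      // -1, "c" is not present

    counts.forEach((k, v) -> System.out.println(k + " = " + v));
    counts.replaceAll((k, v) -> v * 10);                   // "a" -> 20, "b" -> 10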


    This article's analysis is based on JDK8 (jdk1.8.0_291).

  • Original article: https://www.cnblogs.com/niujifei/p/14750620.html