    A First Look at Java Concurrency: ConcurrentHashMap

    Doug Lea has made an indelible contribution to Java concurrency, and ConcurrentHashMap showcases this master's exceptional skill.

    Thread safety of ConcurrentHashMap in Java 1.8

    1. A volatile Node<K,V>[] table field guarantees visibility of the table array across threads.
    2. get operations take no lock at all.
    3. put delegates to final V putVal(K key, V value, boolean onlyIfAbsent), which synchronizes inside the method; synchronized itself is said to have been significantly optimized in Java 8.
    4. The resize method is not synchronized as a whole; synchronized is only held while migrating the data of a single bin.
    5. When several threads call putVal(K key, V value, boolean onlyIfAbsent) and one of them sees a bin head whose hash is MOVED, it helps finish the resize via helpTransfer and then retries its insertion; the loop exits once binCount != 0, i.e. once the element has been placed (see the snippet below, followed by a conceptual sketch of this per-bin locking).

                else if ((fh = f.hash) == MOVED)
                    tab = helpTransfer(tab, f);
    
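    To make points 3 and 5 concrete, here is a minimal sketch of the per-bin locking idea. It is not the JDK source; the class PerBinLockSketch, its fixed 16-slot table, and its method bodies are illustrative only. An empty bin is claimed with a CAS, a populated bin is updated while holding the monitor of its head node, and reads stay lock-free thanks to volatile fields.

    import java.util.concurrent.atomic.AtomicReferenceArray;

    public class PerBinLockSketch<K, V> {
        static final class Node<K, V> {
            final K key;
            volatile V val;
            volatile Node<K, V> next;
            Node(K key, V val) { this.key = key; this.val = val; }
        }

        // per-slot volatile/CAS access plays the role of the
        // volatile Node<K,V>[] table in ConcurrentHashMap
        private final AtomicReferenceArray<Node<K, V>> table = new AtomicReferenceArray<>(16);

        public void put(K key, V value) {
            int i = (table.length() - 1) & key.hashCode();
            for (;;) {
                Node<K, V> head = table.get(i);
                if (head == null) {
                    // empty bin: install the new node with CAS, no lock taken
                    if (table.compareAndSet(i, null, new Node<>(key, value)))
                        return;
                } else {
                    // populated bin: lock only this bin's head node; other bins stay free
                    synchronized (head) {
                        if (table.get(i) != head)
                            continue;                      // head changed, retry the outer loop
                        for (Node<K, V> n = head; ; n = n.next) {
                            if (n.key.equals(key)) { n.val = value; return; }
                            if (n.next == null) { n.next = new Node<>(key, value); return; }
                        }
                    }
                }
            }
        }

        public V get(K key) {
            // lock-free read: volatile fields make the latest values visible
            int i = (table.length() - 1) & key.hashCode();
            for (Node<K, V> n = table.get(i); n != null; n = n.next)
                if (n.key.equals(key)) return n.val;
            return null;
        }
    }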

    6. Resizing under concurrency: because the resize is carried out cooperatively by several threads, each locking only the head node of the bin it is migrating, the table can be expanded quickly (a pre-sizing note follows the snippet below).

                else if ((f = tabAt(tab, i)) == null)   // empty bin: just CAS in the forwarding node
                    advance = casTabAt(tab, i, null, fwd);
                else if ((fh = f.hash) == MOVED)
                    advance = true; // already processed
                else {
                    // lock this bin's head node and migrate its entries
                    synchronized (f) {
    
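    Even with cooperative resizing, every bin still has to be migrated, so avoiding the resize altogether is cheaper. Below is a small usage sketch (the class name PresizeExample and the entry count are made up for illustration): if the expected number of entries is known, passing it as initialCapacity sizes the table up front, so the transfer shown above should rarely need to run.

    import java.util.concurrent.ConcurrentHashMap;

    public class PresizeExample {
        public static void main(String[] args) {
            int expectedEntries = 1_000_000;
            // initialCapacity is the estimated number of entries; the table is
            // sized accordingly, so these puts should not trigger a resize
            ConcurrentHashMap<Integer, String> map = new ConcurrentHashMap<>(expectedEntries);
            for (int i = 0; i < expectedEntries; i++) {
                map.put(i, Integer.toString(i));
            }
            System.out.println("size = " + map.size());
        }
    }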

    7. ConcurrentHashMap dropped the Segment-based lock striping of earlier versions and now relies on CAS plus synchronized to guarantee thread safety. Its data structure mirrors Java 8's HashMap: an array of bins holding linked lists or red-black trees. When a bin's list grows past the threshold of 8, the list (O(N) lookup) is converted into a red-black tree (O(log N) lookup).
    synchronized locks only the head node of the current list or tree, so operations that hash to different bins never contend, which raises throughput considerably.
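    The practical consequence is that per-key updates spread across different bins scale well. Below is a minimal usage sketch (the class name WordCountExample and the sample data are illustrative, not from the original post): two threads increment counters atomically with merge(), which relies on exactly this CAS-or-lock-the-head-node behaviour.

    import java.util.concurrent.ConcurrentHashMap;

    public class WordCountExample {
        public static void main(String[] args) throws InterruptedException {
            ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
            String[] words = {"cas", "sync", "tree", "cas", "sync", "cas"};

            Runnable task = () -> {
                for (String w : words) {
                    // merge() updates a single bin atomically; threads working on
                    // different bins do not contend, since only the head node is locked
                    counts.merge(w, 1, Integer::sum);
                }
            };

            Thread t1 = new Thread(task);
            Thread t2 = new Thread(task);
            t1.start();
            t2.start();
            t1.join();
            t2.join();

            // each count is exactly twice its single-thread value
            counts.forEach((w, c) -> System.out.println(w + " = " + c));
        }
    }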

    Example

    package com.java.javabase.thread.collection;
    
    import lombok.extern.slf4j.Slf4j;
    
    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    
    /**
     * @author
     */
    @Slf4j
    public class ConcurrentHashMapTest {
        public static int cap = 5;
        public static ConcurrentHashMap<Integer, String> map = new ConcurrentHashMap<>();
        //public static HashMap<Integer, String> map = new HashMap<>(); // with HashMap, concurrent iteration may throw ConcurrentModificationException
        public static void main(String[] args) {
            InnerThread t1= new InnerThread("t1");
            InnerThread t2= new InnerThread("t2");
            t1.start();
            t2.start();
            //printAll(map);
        }
        static class InnerThread extends Thread {
            public InnerThread(String name)
            {
                super(name);
            }
            @Override
            public void run() {
                for (int i = 0; i < cap; i++) {
                    try {
                        map.put(i, String.valueOf(i));
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
    
                    printAll(map);
                }
            }
        }
    
        static void printAll(Map<Integer, String> map) {
            Set<Map.Entry<Integer, String>> entrySet = map.entrySet();
            Iterator<Map.Entry<Integer, String>> it = entrySet.iterator();
            while (it.hasNext()) {
                Map.Entry<Integer, String> entry = it.next();
                log.info("thread {}: ,key {} value {}", Thread.currentThread().getName(),
                        entry.getKey(), entry.getValue());
            }
        }
    }
    
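    Note that printAll iterates the map while the other thread keeps inserting. This works because ConcurrentHashMap's iterators are weakly consistent: they never throw ConcurrentModificationException and may or may not reflect insertions made during the iteration. With the commented-out HashMap field, the same interleaving can fail with ConcurrentModificationException, because HashMap's iterators are fail-fast.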
    