  • Analysis of a memcached null-pointer memory error and dead-loop problem (memcached dead loop and crash bug! issue #260 and issue #370)

     (Since this was posted to the memcached mailing list, it is written in my clumsy English.)

    (You should read the discussion about issue #260 first: https://github.com/memcached/memcached/pull/67)

      Thanks to dormando for pulling me back to the do_item_alloc function; I also carefully re-read how to reproduce and fix issue #260. It is awful to find the extremely narrow path that reproduces the bug. But I am afraid there MAY be other problems, even in version 1.4.20, with an even higher probability.

    Reproduction, step by step, in v1.4.20:

    1) hash expanding switches the item_lock to the global_lock;
    2) in thread A, do_item_alloc tries to get a new item, taking an ITEM from the LRU tail, but it does not lock the ITEM, because it only takes item_locks[] (via item_trylock), not the global_lock;
    3) after do_item_alloc checks the refcount (after line items.c:139), another thread B takes hold of the ITEM, increases the refcount, and believes it holds a reference;
    4) in thread A, do_item_alloc evicts the ITEM and initializes it (resetting the refcount to 1), returning it to the caller as a new one;
    5) in thread B, item_remove is called to drop the reference, bringing the refcount to 0, so the ITEM is freed! (thread A is still holding the ITEM, and the ITEM is still in the hash table);
    6) so any terrible thing may happen, including a crash or a dead loop.
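
    To make the interleaving concrete, here is a minimal standalone sketch (not memcached code; "fake_item" is a hypothetical stand-in, and the two threads' actions are replayed sequentially in one thread) of the refcount arithmetic in steps 2)-5):

    /* Sketch of the refcount race in steps 2)-5). Compile as C11. */
    #include <stdatomic.h>
    #include <stdio.h>

    typedef struct { atomic_ushort refcount; } fake_item;

    int main(void) {
        fake_item it = { 1 };                        /* linked in the LRU: refcount == 1 */

        /* thread A (do_item_alloc): refcount_incr() returns 2, so A decides to evict */
        unsigned short r = atomic_fetch_add(&it.refcount, 1) + 1;
        printf("A: refcount after incr = %u\n", r);  /* 2 */

        /* thread B (a reader) slips in BEFORE A resets the item ... */
        atomic_fetch_add(&it.refcount, 1);           /* B believes it holds a reference (3) */

        /* thread A evicts and re-initializes the item as a new one */
        atomic_store(&it.refcount, 1);               /* B's reference is silently wiped out */

        /* thread B releases "its" reference: the refcount hits 0 and the item is
         * freed, while A still uses it and it may still be in the hash table */
        if (atomic_fetch_sub(&it.refcount, 1) - 1 == 0)
            printf("B: refcount == 0, item freed while still in use!\n");
        return 0;
    }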

    In versions lower than 1.4.19 it is even easier to trigger, even when there is no hash expanding, because of the following code in do_item_alloc:

    1) if hv == cur_hv (which is possible during a replace operation);
    or
    2) if in the first loop iteration hv != cur_hv and a lock is held, and in the second iteration hv == cur_hv (no lock is taken, but the lock pointer was not reset by the first iteration!), then the wrong lock is released (see the sketch after the code below).

        for (; tries > 0 && search != NULL; tries--, search=search->prev) {
            uint32_t hv = hash(ITEM_key(search), search->nkey, 0);
            /* Attempt to hash item lock the "search" item. If locked, no
             * other callers can incr the refcount
             */
            /* FIXME: I think we need to mask the hv here for comparison? */
        if (hv != cur_hv && (hold_lock = item_trylock(hv)) == NULL)   ------> when hv == cur_hv no lock is taken, but hold_lock keeps its value from the previous iteration
                continue;
            /* Now see if the item is refcount locked */
            if (refcount_incr(&search->refcount) != 2) {
                refcount_decr(&search->refcount);
                /* Old rare bug could cause a refcount leak. We haven't seen
                 * it in years, but we leave this code in to prevent failures
                 * just in case */
                if (search->time + TAIL_REPAIR_TIME < current_time) {
                    itemstats[id].tailrepairs++;
                    search->refcount = 1;
                    do_item_unlink_nolock(search, hv);
                }
                if (hold_lock)
                    item_trylock_unlock(hold_lock);
                continue;
            }
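
    As a concrete illustration of case 2) above, here is a minimal standalone sketch (hypothetical, not memcached code) of what happens when hold_lock is not reset between iterations: an iteration that takes no lock still "releases" the previous iteration's mutex.

    /* Sketch of the stale hold_lock bug: locks[] stands in for item_locks[]. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t locks[1] = { PTHREAD_MUTEX_INITIALIZER };

    int main(void) {
        pthread_mutex_t *hold_lock = NULL;

        /* iteration 1: hv != cur_hv, so the bucket lock is taken */
        if (pthread_mutex_trylock(&locks[0]) == 0)
            hold_lock = &locks[0];
        /* ... the refcount check fails, so we unlock and continue ... */
        if (hold_lock)
            pthread_mutex_unlock(hold_lock);
        /* BUG: hold_lock is not reset to NULL here */

        /* iteration 2: hv == cur_hv, so no lock is taken at all, but when the
         * refcount check fails again the STALE pointer is unlocked: */
        if (hold_lock)
            pthread_mutex_unlock(hold_lock);   /* unlocking a mutex we do not
                                                  hold: undefined behavior */
        puts("released a lock that was never taken in this iteration");
        return 0;
    }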

    I think this is clear enough, but I will also show more detail below, and try to reproduce the phenomenon in gdb:

      I don't know why the function "assoc_maintenance_thread" is written as follows, but other threads can grab the global_lock before "assoc_maintenance_thread" gets it again.

    static void *assoc_maintenance_thread(void *arg) {
    
        while (do_run_maintenance_thread) {
            int ii = 0;
    
            /* Lock the cache, and bulk move multiple buckets to the new
             * hash table. */
            item_lock_global();                   ----------------> step 5) try to get the global_lock; it has to wait for other threads to release the lock.
            mutex_lock(&cache_lock);
    
            for (ii = 0; ii < hash_bulk_move && expanding; ++ii) {
               .....
            }
    
            mutex_unlock(&cache_lock);
            item_unlock_global();
    
            if (!expanding) {
                /* finished expanding. tell all threads to use fine-grained locks */
                switch_item_lock_type(ITEM_LOCK_GRANULAR);
                slabs_rebalancer_resume();
                /* We are done expanding.. just wait for next invocation */
                mutex_lock(&cache_lock);
                started_expanding = false;
                pthread_cond_wait(&maintenance_cond, &cache_lock);    ------------> step 1) wait here for the expanding notify.
                /* Before doing anything, tell threads to use a global lock */
                mutex_unlock(&cache_lock);       
                slabs_rebalancer_pause();
                switch_item_lock_type(ITEM_LOCK_GLOBAL);   -----------> step 2) switch to the global_lock, WITHOUT holding the global_lock. Another thread may get the global_lock first; this is not thread-safe.
                mutex_lock(&cache_lock);
                assoc_expand();                            -----------> step 3) expand the hash table, but the items are not moved to the new buckets yet.
                mutex_unlock(&cache_lock);                 -----------> step 4) release the lock; this MAY not be thread-safe.
            }
        }
        return NULL;
    }

       In the function "do_item_alloc", the function "item_trylock" is used to take the item's lock. Look carefully at "item_trylock": it directly accesses "item_locks", hoping for a "no-op" as its comment says. So the item (search) is not locked at all.

         So, after the check "if (refcount_incr(&search->refcount) != 2)", another thread may take hold of the item and increase the refcount, which makes the refcount bogus once do_item_alloc resets it:

    item *do_item_alloc(char *key, const size_t nkey, const int flags,
                        const rel_time_t exptime, const int nbytes,
                        const uint32_t cur_hv) {
        uint8_t nsuffix;
        item *it = NULL;
        char suffix[40];
        size_t ntotal = item_make_header(nkey + 1, flags, nbytes, suffix, &nsuffix);
        if (settings.use_cas) {
            ntotal += sizeof(uint64_t);
        }
    
        unsigned int id = slabs_clsid(ntotal);
        if (id == 0)
            return 0;
    
        mutex_lock(&cache_lock);
        /* do a quick check if we have any expired items in the tail.. */
        int tries = 5;
        int tried_alloc = 0;
        item *search;
        void *hold_lock = NULL;
        rel_time_t oldest_live = settings.oldest_live;
    
        search = tails[id];
        /* We walk up *only* for locked items. Never searching for expired.
         * Waste of CPU for almost all deployments */
        for (; tries > 0 && search != NULL; tries--, search=search->prev) {
            if (search->nbytes == 0 && search->nkey == 0 && search->it_flags == 1) {
                /* We are a crawler, ignore it. */
                tries++;
                continue;
            }
            uint32_t hv = hash(ITEM_key(search), search->nkey);
            /* Attempt to hash item lock the "search" item. If locked, no
             * other callers can incr the refcount
             */
            /* Don't accidentally grab ourselves, or bail if we can't quicklock */
        if (hv == cur_hv || (hold_lock = item_trylock(hv)) == NULL)      ---------------> 1) item_trylock always takes its lock from item_locks[], not the global_lock
                continue;
            /* Now see if the item is refcount locked */
        if (refcount_incr(&search->refcount) != 2) {                     ---------------> 2) now search->refcount == 2, meaning only the LRU link referenced the item
                refcount_decr(&search->refcount);
                /* Old rare bug could cause a refcount leak. We haven't seen
                 * it in years, but we leave this code in to prevent failures
                 * just in case */
                if (settings.tail_repair_time &&
                        search->time + settings.tail_repair_time < current_time) {
                    itemstats[id].tailrepairs++;
                    search->refcount = 1;
                    do_item_unlink_nolock(search, hv);
                }
                if (hold_lock)
                    item_trylock_unlock(hold_lock);
                continue;
            }
    
            /* Expired or flushed */
        /* Expired or flushed */
        if ((search->exptime != 0 && search->exptime < current_time)   ------------------> 3) after this line, another thread may take hold of the item and increase the refcount. So if the item is allocated as a new one (refcount reset to 1) and the other thread (holding the item) then calls do_item_remove, it will free the "new" item (because the refcount drops to 0). This is very likely in the eviction case.
            || (search->time <= oldest_live && oldest_live <= current_time)) {
            itemstats[id].reclaimed++;
            if ((search->it_flags & ITEM_FETCHED) == 0) {
                itemstats[id].expired_unfetched++;
            }
            it = search;
            slabs_adjust_mem_requested(it->slabs_clsid, ITEM_ntotal(it), ntotal);
            do_item_unlink_nolock(it, hv);
            /* Initialize the item block: */
            it->slabs_clsid = 0;
        } else if ((it = slabs_alloc(ntotal, id)) == NULL) {
            tried_alloc = 1;
            if (settings.evict_to_free == 0) {
                itemstats[id].outofmemory++;
            } else {
                itemstats[id].evicted++;
                itemstats[id].evicted_time = current_time - search->time;
                if (search->exptime != 0)
                    itemstats[id].evicted_nonzero++;
                if ((search->it_flags & ITEM_FETCHED) == 0) {
                    itemstats[id].evicted_unfetched++;
                }
    /* Special case. When ITEM_LOCK_GLOBAL mode is enabled, this should become a
     * no-op, as it's only called from within the item lock if necessary.
     * However, we can't mix a no-op and threads which are still synchronizing to
     * GLOBAL. So instead we just always try to lock. When in GLOBAL mode this
     * turns into an effective no-op. Threads re-synchronize after the power level
     * switch so it should stay safe.
     */
    void *item_trylock(uint32_t hv) {
        pthread_mutex_t *lock = &item_locks[hv & hashmask(item_lock_hashpower)];
        if (pthread_mutex_trylock(lock) == 0) {
            return lock;
        }
        return NULL;
    } 
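
    To make the consequence concrete: while one thread synchronizes on the global lock, item_trylock still takes a per-bucket mutex, so the two threads lock DIFFERENT mutexes and exclude nothing. A minimal standalone sketch (illustrative names, not memcached code):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;  /* item_global_lock     */
    static pthread_mutex_t bucket_lock = PTHREAD_MUTEX_INITIALIZER;  /* item_locks[hv & mask] */

    int main(void) {
        /* "thread A" (do_item_alloc during expansion): the item_trylock() path */
        if (pthread_mutex_trylock(&bucket_lock) == 0)
            puts("A: holds the per-bucket lock");

        /* "thread B" (a reader in ITEM_LOCK_GLOBAL mode): the item_lock() path */
        if (pthread_mutex_lock(&global_lock) == 0)
            puts("B: holds the global lock at the same time");

        /* both critical sections are active at once: the item A is about to
         * evict can still be referenced by B */
        pthread_mutex_unlock(&global_lock);
        pthread_mutex_unlock(&bucket_lock);
        return 0;
    }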
    

     GDB reproduction (partial):

    b assoc.c:211
    b item_get
    b items.c:158
    (gdb) info thread
      Id   Target Id         Frame 
      6    Thread 0x7ffff4d71700 (LWP 12647) "memcached" (running)
      5    Thread 0x7ffff5572700 (LWP 12646) "memcached" (running)
      4    Thread 0x7ffff5d73700 (LWP 12645) "memcached" (running)
    * 3    Thread 0x7ffff6574700 (LWP 12644) "memcached" (running)
      2    Thread 0x7ffff6d75700 (LWP 12643) "memcached" (running)
      1    Thread 0x7ffff7fe0740 (LWP 12642) "memcached" 0x00007ffff76ad9a3 in epoll_wait () at ../sysdeps/unix/syscall-template.S:81

    //// step 1 //// add a new item; modify the value of hash_items to 1000000 to force a hash expansion.
    (gdb) thread 3
    [Switching to thread 3 (Thread 0x7ffff6574700 (LWP 12644))]
    #0  do_item_alloc (key=0x7ffff0026414 "e", nkey=1, flags=0, exptime=0, nbytes=3, cur_hv=0) at items.c:159
    159             tried_alloc = 1;
    (gdb) c
    Continuing.
    Breakpoint 4, assoc_insert (it=0x7ffff7f35f70, hv=1187334374) at assoc.c:170
    170         if (! expanding && hash_items > (hashsize(hashpower) * 3) / 2) {
    (gdb)
    508             it = do_item_get(key, nkey, hv);
    (gdb) p hash_items
    $4 = 1000000
    Breakpoint 1, assoc_maintenance_thread (arg=0x0) at assoc.c:211
    211             item_lock_global();
    (gdb) info thread
      Id   Target Id         Frame
      6    Thread 0x7ffff4d71700 (LWP 12647) "memcached" assoc_maintenance_thread (arg=0x0) at assoc.c:211
      5    Thread 0x7ffff5572700 (LWP 12646) "memcached" (running)
      4    Thread 0x7ffff5d73700 (LWP 12645) "memcached" (running)
    * 3    Thread 0x7ffff6574700 (LWP 12644) "memcached" (running)
      2    Thread 0x7ffff6d75700 (LWP 12643) "memcached" (running)
      1    Thread 0x7ffff7fe0740 (LWP 12642) "memcached" (running)
    //// delete all keys in the cache.
    //// step 2 //// add an item, key="71912_yhd.serial.product.get_1.0_0", to slot 2, as the first item now.
    //// step 3 //// get the key="71912_yhd.serial.product.get_1.0_0".

    //// step 4 //// and try to add a new key="Y_ORDER_225426358_02" to slot 2.
    Breakpoint 5, do_item_alloc (key=0x7ffff0026414 "71912_yhd.serial.product.get_1.0_0", nkey=34, flags=0, exptime=0, nbytes=3, cur_hv=0) at items.c:92
    92                        const uint32_t cur_hv) {
    (gdb) n
    
    ....
    Breakpoint 3, item_get (key=0x7fffe0026414 "71912_yhd.serial.product.get_1.0_0", nkey=34) at thread.c:506
    506         hv = hash(key, nkey);
    (gdb) info thread
      Id   Target Id         Frame
      6    Thread 0x7ffff4d71700 (LWP 12647) "memcached" assoc_maintenance_thread (arg=0x0) at assoc.c:211
      5    Thread 0x7ffff5572700 (LWP 12646) "memcached" (running)
      4    Thread 0x7ffff5d73700 (LWP 12645) "memcached" (running)
      3    Thread 0x7ffff6574700 (LWP 12644) "memcached" do_item_alloc (key=0x7ffff0026414 "Y_ORDER_225426358_02", nkey=20, flags=0, exptime=0, nbytes=22, cur_hv=0) at items.c:147
    * 2    Thread 0x7ffff6d75700 (LWP 12643) "memcached" item_get (key=0x7fffe0026414 "71912_yhd.serial.product.get_1.0_0", nkey=34) at thread.c:506
      1    Thread 0x7ffff7fe0740 (LWP 12642) "memcached" (running)
    (gdb) n
    507         item_lock(hv);
    (gdb)
    508         it = do_item_get(key, nkey, hv);
    (gdb)
    509         item_unlock(hv);


    (gdb) thread 3
    [Switching to thread 3 (Thread 0x7ffff6574700 (LWP 12644))]
    #0  do_item_alloc (key=0x7ffff0026414 "Y_ORDER_225426358_02", nkey=20, flags=0, exptime=0, nbytes=22, cur_hv=0) at items.c:147
    147         if ((search->exptime != 0 && search->exptime < current_time)
    (gdb) p search->refcount
    $21 = 3        -------> now the refcount is wrong (reproducing it in the eviction path may be even better).
    (gdb) bt
    #0  do_item_alloc (key=0x7ffff0026414 "Y_ORDER_225426358_02", nkey=20, flags=0, exptime=0, nbytes=22, cur_hv=0) at items.c:147
    #1  0x0000000000417c26 in item_alloc (key=0x7ffff0026414 "Y_ORDER_225426358_02", nkey=20, flags=0, exptime=0, nbytes=22) at thread.c:495
    #2  0x000000000040ace9 in process_update_command (c=0x7ffff0026200, tokens=0x7ffff6573be0, ntokens=6, comm=2, handle_cas=false) at memcached.c:3084
    #3  0x000000000040bde2 in process_command (c=0x7ffff0026200, command=0x7ffff0026410 "set") at memcached.c:3437
    #4  0x000000000040cc47 in try_read_command (c=0x7ffff0026200) at memcached.c:3763
    #5  0x000000000040d958 in drive_machine (c=0x7ffff0026200) at memcached.c:4108
    #6  0x000000000040e4f7 in event_handler (fd=37, which=2, arg=0x7ffff0026200) at memcached.c:4353
    #7  0x00007ffff7ba3f24 in event_base_loop () from /usr/lib/x86_64-linux-gnu/libevent-2.0.so.5
    #8  0x0000000000417926 in worker_libevent (arg=0x635ea0) at thread.c:386
    #9  0x00007ffff7980182 in start_thread (arg=0x7ffff6574700) at pthread_create.c:312
    #10 0x00007ffff76ad30d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
    (gdb)
    Connected to 127.0.0.1.
    Escape character is '^]'.
    set a 0 0 1
    1
    STORED
    ^[[A^[[B
    ERROR
    set b  0 0 1
    2
    STORED
    get 71912_yhd.serial.product.get_1.0_0
    
    
    
    
    
    jason@gy:~$ telnet 127.0.0.1 11211
    Trying 127.0.0.1...
    Connected to 127.0.0.1.
    Escape character is '^]'.
    set c 0 0 1  
    3
    STORED
    set 71912_yhd.serial.product.get_1.0_0 0 0 1
    3
    STORED
    delete 71912_yhd.serial.product.get_1.0_0
    DELETED
    set 71912_yhd.serial.product.get_1.0_0 0 0 1
    3
    STORED
    delete Y_ORDER_225426358_02
    delete a
    delete b
    delete c
    NOT_FOUND
    DELETED
    NOT_FOUND
    NOT_FOUND
    set Y_ORDER_225426358_02 0 0 20
    aaaaaaaaaaaaaaaaaaaa

      I will try to write a test script to reproduce it, and try to commit a patch to fix it.

    ------------------------------------------------

    I got a dead loop on memcached (v1.4.15, on CentOS 6.3 x86_64); it can't be reproduced at will. In our production environment there are hundreds of memcached instances running, and this bug has happened 3 times this year. When the bug occurred, thousands of TCP connections stayed in CLOSE_WAIT status until the maximum connection number was reached, and clients couldn't connect to the cache servers. The SA had to restart the memcached instance to recover our business, but recently I got the chance to capture a core file.

    FYI: before starting, you can get the memcached package (with a debug-info package) at http://mirrors.htbindustries.org/CentOS/6/x86_64/, and you can get the core file at http://pan.baidu.com/s/1kTFTRQf

    Backtrace in gdb:

    Thread 8 (Thread 0x7fa8a1ee1700 (LWP 26053)):
    #0  0x0000003c9e2e7c73 in epoll_wait () from /lib64/libc.so.6
    #1  0x000000323cc12e4b in ?? () from /usr/lib64/libevent-1.4.so.2
    #2  0x000000323cc068c3 in event_base_loop () from /usr/lib64/libevent-1.4.so.2
    #3  0x0000000000406447 in main (argc=<value optimized out>, 
        argv=<value optimized out>) at memcached.c:5228
    
    Thread 7 (Thread 0x7fa89e021700 (LWP 26061)):
    #0  0x0000003c9e60b43c in pthread_cond_wait@@GLIBC_2.3.2 ()
       from /lib64/libpthread.so.0
    #1  0x000000000040cb33 in slab_rebalance_thread (arg=<value optimized out>)
        at slabs.c:764
    #2  0x0000003c9e607851 in start_thread () from /lib64/libpthread.so.0
    #3  0x0000003c9e2e767d in clone () from /lib64/libc.so.6
    
    Thread 6 (Thread 0x7fa89ea22700 (LWP 26060)):
    #0  0x0000003c9e2ab91d in nanosleep () from /lib64/libc.so.6
    #1  0x0000003c9e2ab790 in sleep () from /lib64/libc.so.6
    #2  0x000000000040d0ad in slab_maintenance_thread (arg=<value optimized out>)
        at slabs.c:728
    #3  0x0000003c9e607851 in start_thread () from /lib64/libpthread.so.0
    #4  0x0000003c9e2e767d in clone () from /lib64/libc.so.6
    
    Thread 5 (Thread 0x7fa89f423700 (LWP 26059)):
    #0  0x0000003c9e60b43c in pthread_cond_wait@@GLIBC_2.3.2 ()
       from /lib64/libpthread.so.0
    #1  0x000000000040fa8d in assoc_maintenance_thread (arg=<value optimized out>)
        at assoc.c:251
    #2  0x0000003c9e607851 in start_thread () from /lib64/libpthread.so.0
    #3  0x0000003c9e2e767d in clone () from /lib64/libc.so.6
    
    Thread 4 (Thread 0x7fa89fe24700 (LWP 26058)):
    #0  0x000000000040fd34 in assoc_find (key=<value optimized out>, 
        nkey=<value optimized out>, hv=<value optimized out>) at assoc.c:92
    #1  0x000000000040ef2e in do_item_get (key=0x7fa88008bd34 "B920818_0", 
        nkey=<value optimized out>, hv=3789230535) at items.c:523
    #2  0x0000000000411076 in item_get (key=0x7fa88008bd34 "B920818_0", nkey=9)
        at thread.c:499
    #3  0x000000000040731e in process_get_command (c=0x7fa88008bb30, 
        tokens=0x7fa89fe23bf0, ntokens=<value optimized out>, return_cas=false)
        at memcached.c:2725
    #4  0x0000000000409b63 in process_command (c=0x7fa88008bb30, 
        command=<value optimized out>) at memcached.c:3249
    #5  0x000000000040a7e2 in try_read_command (c=0x7fa88008bb30)
        at memcached.c:3504
    #6  0x000000000040b478 in drive_machine (fd=<value optimized out>, 
        which=<value optimized out>, arg=0x7fa88008bb30) at memcached.c:3824
    #7  event_handler (fd=<value optimized out>, which=<value optimized out>, 
        arg=0x7fa88008bb30) at memcached.c:4065
    #8  0x000000323cc06b44 in event_base_loop () from /usr/lib64/libevent-1.4.so.2
    #9  0x000000000041070d in worker_libevent (arg=0x1ad5d88) at thread.c:384
    #10 0x0000003c9e607851 in start_thread () from /lib64/libpthread.so.0
    #11 0x0000003c9e2e767d in clone () from /lib64/libc.so.6
    
    Thread 3 (Thread 0x7fa8a0825700 (LWP 26057)):
    #0  0x0000003c9e6094b3 in pthread_mutex_trylock () from /lib64/libpthread.so.0
    #1  0x0000000000410d10 in mutex_lock (hv=<value optimized out>)
        at memcached.h:493
    #2  item_lock (hv=<value optimized out>) at thread.c:127
    #3  0x0000000000411069 in item_get (
        key=0x7fa8985d8944 "1368_yhd.orders.get_1.0_visitDateList_0", nkey=39)
        at thread.c:498
    #4  0x000000000040731e in process_get_command (c=0x7fa89829b7e0, 
        tokens=0x7fa8a0824bf0, ntokens=<value optimized out>, return_cas=false)
        at memcached.c:2725
    #5  0x0000000000409b63 in process_command (c=0x7fa89829b7e0, 
        command=<value optimized out>) at memcached.c:3249
    #6  0x000000000040a7e2 in try_read_command (c=0x7fa89829b7e0)
        at memcached.c:3504
    #7  0x000000000040b478 in drive_machine (fd=<value optimized out>, 
        which=<value optimized out>, arg=0x7fa89829b7e0) at memcached.c:3824
    #8  event_handler (fd=<value optimized out>, which=<value optimized out>, 
        arg=0x7fa89829b7e0) at memcached.c:4065
    #9  0x000000323cc06b44 in event_base_loop () from /usr/lib64/libevent-1.4.so.2
    #10 0x000000000041070d in worker_libevent (arg=0x1ad2a00) at thread.c:384
    #11 0x0000003c9e607851 in start_thread () from /lib64/libpthread.so.0
    #12 0x0000003c9e2e767d in clone () from /lib64/libc.so.6
    
    Thread 2 (Thread 0x7fa8a1226700 (LWP 26056)):
    #0  0x0000003c9e6094b3 in pthread_mutex_trylock () from /lib64/libpthread.so.0
    #1  0x0000000000410d10 in mutex_lock (hv=<value optimized out>)
        at memcached.h:493
    #2  item_lock (hv=<value optimized out>) at thread.c:127
    #3  0x0000000000410d7d in store_item (item=0x7fa89d0dddb8, comm=2, 
        c=0x7fa880118a00) at thread.c:598
    #4  0x000000000040bed4 in complete_nread_ascii (fd=<value optimized out>, 
        which=<value optimized out>, arg=0x7fa880118a00) at memcached.c:843
    #5  complete_nread (fd=<value optimized out>, which=<value optimized out>, 
        arg=0x7fa880118a00) at memcached.c:2248
    #6  drive_machine (fd=<value optimized out>, which=<value optimized out>, 
        arg=0x7fa880118a00) at memcached.c:3861
    #7  event_handler (fd=<value optimized out>, which=<value optimized out>, 
        arg=0x7fa880118a00) at memcached.c:4065
    #8  0x000000323cc06b44 in event_base_loop () from /usr/lib64/libevent-1.4.so.2
    #9  0x000000000041070d in worker_libevent (arg=0x1acf678) at thread.c:384
    #10 0x0000003c9e607851 in start_thread () from /lib64/libpthread.so.0
    #11 0x0000003c9e2e767d in clone () from /lib64/libc.so.6
    
    Thread 1 (Thread 0x7fa8a1c27700 (LWP 26055)):
    #0  0x0000003c9e6094b3 in pthread_mutex_trylock () from /lib64/libpthread.so.0
    #1  0x0000000000410d10 in mutex_lock (hv=<value optimized out>)
        at memcached.h:493
    #2  item_lock (hv=<value optimized out>) at thread.c:127
    #3  0x0000000000411069 in item_get (
        key=0x7fa881755f64 "4872_yhd.orders.get_1.0_0", nkey=25) at thread.c:498
    #4  0x000000000040731e in process_get_command (c=0x7fa88049e180, 
        tokens=0x7fa8a1c26bf0, ntokens=<value optimized out>, return_cas=false)
        at memcached.c:2725
    #5  0x0000000000409b63 in process_command (c=0x7fa88049e180, 
        command=<value optimized out>) at memcached.c:3249
    #6  0x000000000040a7e2 in try_read_command (c=0x7fa88049e180)
        at memcached.c:3504
    #7  0x000000000040b478 in drive_machine (fd=<value optimized out>, 
        which=<value optimized out>, arg=0x7fa88049e180) at memcached.c:3824
    #8  event_handler (fd=<value optimized out>, which=<value optimized out>, 
        arg=0x7fa88049e180) at memcached.c:4065
    #9  0x000000323cc06b44 in event_base_loop () from /usr/lib64/libevent-1.4.so.2
    #10 0x000000000041070d in worker_libevent (arg=0x1acc2f0) at thread.c:384
    #11 0x0000003c9e607851 in start_thread () from /lib64/libpthread.so.0
    #12 0x0000003c9e2e767d in clone () from /lib64/libc.so.6
    (gdb) 
    (gdb) 

       It's easy to see that something is wrong in thread 4, and also that thread 4 holds an item lock on which the other threads are blocked.

    (gdb) thread 4
    [Switching to thread 4 (Thread 0x7fa89fe24700 (LWP 26058))]#0  0x000000000040fd34 in assoc_find (key=<value optimized out>, nkey=<value optimized out>, 
        hv=<value optimized out>) at assoc.c:92
    92              if ((nkey == it->nkey) && (memcmp(key, ITEM_key(it), nkey) == 0)) {
    (gdb) bt
    #0  0x000000000040fd34 in assoc_find (key=<value optimized out>, 
        nkey=<value optimized out>, hv=<value optimized out>) at assoc.c:92
    #1  0x000000000040ef2e in do_item_get (key=0x7fa88008bd34 "B920818_0", 
        nkey=<value optimized out>, hv=3789230535) at items.c:523
    #2  0x0000000000411076 in item_get (key=0x7fa88008bd34 "B920818_0", nkey=9)
        at thread.c:499
    #3  0x000000000040731e in process_get_command (c=0x7fa88008bb30, 
        tokens=0x7fa89fe23bf0, ntokens=<value optimized out>, return_cas=false)
        at memcached.c:2725
    #4  0x0000000000409b63 in process_command (c=0x7fa88008bb30, 
        command=<value optimized out>) at memcached.c:3249
    #5  0x000000000040a7e2 in try_read_command (c=0x7fa88008bb30)
        at memcached.c:3504
    #6  0x000000000040b478 in drive_machine (fd=<value optimized out>, 
        which=<value optimized out>, arg=0x7fa88008bb30) at memcached.c:3824
    #7  event_handler (fd=<value optimized out>, which=<value optimized out>, 
        arg=0x7fa88008bb30) at memcached.c:4065
    #8  0x000000323cc06b44 in event_base_loop () from /usr/lib64/libevent-1.4.so.2
    (gdb) p it
    $1 = (item *) 0x7fa89799e5b0
    (gdb) p *it
    $2 = {next = 0x7fa89d0646c0, prev = 0x7fa8979b0760, h_next = 0x7fa89799e5b0, time = 31672441, exptime = 31758841, nbytes = 10, refcount = 1, nsuffix = 10 '\n', 
      it_flags = 3 '\003', slabs_clsid = 2 '\002', nkey = 34 '"', data = 0x7fa89799e5b0}
    (gdb) 

        So it->h_next points to itself, and that is where the dead loop happens.
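
    A reduced sketch of why the walk never terminates (fake_item stands in for memcached's item; the key comparison, which never matches here, is elided):

    #include <stdio.h>

    typedef struct fake_item { struct fake_item *h_next; } fake_item;

    int main(void) {
        fake_item self = { &self };         /* h_next points to itself, as in the core */
        fake_item *it = &self;
        int steps = 0;
        while (it != NULL && steps < 5) {   /* bounded here; unbounded in assoc_find   */
            /* the key never matches, so the walk "advances" ... to the same node */
            it = it->h_next;
            steps++;
        }
        printf("after %d steps still at %p: the loop never exits\n",
               steps, (void *)it);
        return 0;
    }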

        I have checked all the code that modifies it->h_next (the assoc_insert / assoc_delete / assoc_maintenance_thread functions); wherever it is modified, the "cache_lock" lock has to be held. So I cannot find the reason why an item ends up pointing to itself.

       More information:

    In thread 4:

    si r8 --> nkey = 9
    di r9 --> key = "B920818_0"
    hv = 0xe1db11c7, so the offset into primary_hashtable is hv & hashmask(17) = 0x111c7

    (gdb) p primary_hashtable
    $22 = (item **) 0x7fa8840008c0

    So I fetch the first item in the hash bucket:

    (gdb) x /10g 0x00007fa8840008c0+0x8*0x111c7
    0x7fa8840896f8: 0x00007fa890b46ef0
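
    As a sanity check, this small program (a sketch; the base address 0x7fa8840008c0 is taken from the gdb output above) reproduces the bucket arithmetic:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t hv   = 0xe1db11c7;
        uint32_t mask = (1u << 17) - 1;            /* hashmask(17) == 0x1ffff */
        uint32_t idx  = hv & mask;                 /* 0x111c7                 */
        unsigned long base = 0x7fa8840008c0UL;     /* primary_hashtable       */
        printf("bucket index = 0x%x\n", idx);
        printf("bucket slot  = 0x%lx\n", base + (unsigned long)idx * 8);
        /* prints 0x7fa8840896f8, matching the "x" command above */
        return 0;
    }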

     The dead-loop item's address is the second item in the hash bucket list; yes, there is a hash collision.

    (gdb) x /10xg 0x00007fa890b46ef0
    0x7fa890b46ef0: 0x00007fa89d1884f0 0x00007fa89d1ee2b0
    0x7fa890b46f00: 0x00007fa89799e5b0 0x01f3710601f362f6
    0x7fa890b46f10: 0x0307000100000003 0x0000000000001501
    0x7fa890b46f20: 0x000000030ed02eb6 0x4544524f5f594247
    0x7fa890b46f30: 0x3632343532325f52 0x332032305f383533
    (gdb)

    The items in the hash bucket list are:

    (gdb) p it
    $1 = (item *) 0x7fa89799e5b0
    (gdb) p *it
    $2 = {next = 0x7fa89d0646c0, prev = 0x7fa8979b0760, h_next = 0x7fa89799e5b0, time = 31672441, exptime = 31758841, nbytes = 10, refcount = 1, nsuffix = 10 '\n', 
      it_flags = 3 '\003', slabs_clsid = 2 '\002', nkey = 34 '"', data = 0x7fa89799e5b0}
    (gdb)
    (gdb) p *(item *)0x00007fa89d1884f0
    $12 = {next = 0x7fa890b4f110, prev = 0x7fa890b46ef0, h_next = 0x0, time = 32727785, exptime = 32731385, nbytes = 3, refcount = 1, nsuffix = 7 '\a', 
      it_flags = 3 '\003', slabs_clsid = 1 '\001', nkey = 21 '\025', data = 0x7fa89d1884f0}
    (gdb) 



    item[1]:"71912_yhd.serial.product.get_1.0_0 16384 8 " nsuffix=10; slab 2; nkey=34; nbytes=10

    (little endian)
    (gdb) x /100xb 0x00007fa89799e5b0
    0x7fa89799e5b0: 0xc0 0x46 0x06 0x9d 0xa8 0x7f 0x00 0x00
    0x7fa89799e5b8: 0x60 0x07 0x9b 0x97 0xa8 0x7f 0x00 0x00
    0x7fa89799e5c0: 0xb0 0xe5 0x99 0x97 0xa8 0x7f 0x00 0x00
    0x7fa89799e5c8: 0x79 0x48 0xe3 0x01 0xf9 0x99 0xe4 0x01
    0x7fa89799e5d0: 0x0a 0x00 0x00 0x00 0x01 0x00 0x0a 0x03
    0x7fa89799e5d8: 0x02 0x22 0x00 0x00 0x00 0x00 0x00 0x00
    0x7fa89799e5e0: 0x6d 0x33 0x4f 0xd6 0x02 0x00 0x00 0x00
    0x7fa89799e5e8: 0x37 0x31 0x39 0x31 0x32 0x5f 0x79 0x68
    0x7fa89799e5f0: 0x64 0x2e 0x73 0x65 0x72 0x69 0x61 0x6c
    0x7fa89799e5f8: 0x2e 0x70 0x72 0x6f 0x64 0x75 0x63 0x74
    0x7fa89799e600: 0x2e 0x67 0x65 0x74 0x5f 0x31 0x2e 0x30
    0x7fa89799e608: 0x5f 0x30 0x20 0x20 0x31 0x36 0x33 0x38
    0x7fa89799e610: 0x34 0x20 0x38 0x0d 0x0a 0x00 [0x00 0x00
    0x7fa89799e618: 0x00 0x00 0x00 0x00 0x2d 0x0d] 0x0a 0x0a
    0x7fa89799e620: 0x08 0xe1 0x0d 0x0a 0x0d 0x70 0x0d 0x0a
    (the bracketed bytes are where the item's data should be)

     

    item[0]: "Y_ORDER_225426358_02 32 1 1 513 " nsuffix=7, slab=1, nkey=21, nbytes=3
    item[1]: "71912_yhd.serial.product.get_1.0_0  16384 8 " nsuffix=10, slab=2, nkey=34, nbytes=10
    When the dead loop happened, it was looking for key "B920818_0", and that key is not in the list (or maybe it is).

    It is odd that item[1] has no data, although its nbytes is 10!

    ----------------

      I guess the bug occurred during hash expansion. hashpower = 17 (the default is 16) and hash_items = 101025; the hash expands when hash_items > 98304 (= 65536 * 3 / 2). It is very likely that the dead loop happened right after expanding, and then all the threads hung.

    (gdb) p expand_bucket
    $16 = 65536
    (gdb) p stats.hash_is_expanding
    $17 = false
    (gdb) p hash_items
    $18 = 101025
    (gdb) p 256*256*3/2
    $19 = 98304

    (gdb) p expanding

    $20 = false
    (gdb)

    (gdb) p hashpower
    $21 = 17
    (gdb)

    
    

        I cannot find the bug in the code, so if anyone has any suggestions, please tell me.

        Thanks a lot.

     
