Simple Hierarchical clustering in Python 2.7 using SciPy

    I've found that there's not a lot of useful information on how to do hierarchical clustering in SciPy, even though it's rather easy. First, you need to organise your data as an array with each column being a dimension and each row being an observation. Here's an example with nine observations, each with three dimensions.

    data = [[0.1, 0.1, 0.1],
            [0.1, 0.1, 0.1],
            [0.1, 0.1, 0.1],
            [0.2, 0.2, 0.2],
            [0.2, 0.2, 0.2],
            [0.2, 0.2, 0.2],
            [0.3, 0.3, 0.3],
            [0.3, 0.3, 0.3],
            [0.3, 0.3, 0.3]]
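A quick sanity check that isn't in the original post, but can help: converting the list of lists to a NumPy array lets you confirm the observations-by-dimensions shape before going any further.

```python
import numpy as np

# Each row is an observation, each column a dimension.
data = np.array([[0.1, 0.1, 0.1],
                 [0.1, 0.1, 0.1],
                 [0.1, 0.1, 0.1],
                 [0.2, 0.2, 0.2],
                 [0.2, 0.2, 0.2],
                 [0.2, 0.2, 0.2],
                 [0.3, 0.3, 0.3],
                 [0.3, 0.3, 0.3],
                 [0.3, 0.3, 0.3]])

print(data.shape)  # (9, 3): nine observations, three dimensions
```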
    

    We need to create a distance matrix (calculate the distance between each pair of observations). I'm using the default (Euclidean) distance metric; the SciPy documentation for spatial.distance.pdist has more information on the different distance metrics you can use.

    from scipy import spatial
    distance = spatial.distance.pdist(data)
    
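One thing worth knowing (my addition, not the original author's): pdist returns a condensed distance vector with one entry per pair, not a full square matrix. SciPy's squareform can expand it if you want to inspect the full symmetric matrix.

```python
from scipy import spatial
from scipy.spatial.distance import squareform

data = [[0.1, 0.1, 0.1], [0.1, 0.1, 0.1], [0.1, 0.1, 0.1],
        [0.2, 0.2, 0.2], [0.2, 0.2, 0.2], [0.2, 0.2, 0.2],
        [0.3, 0.3, 0.3], [0.3, 0.3, 0.3], [0.3, 0.3, 0.3]]

# Condensed form: one distance per pair of observations
distance = spatial.distance.pdist(data)
print(len(distance))  # 9 * 8 / 2 = 36 pairwise distances

# Expand into the full symmetric 9x9 matrix (zeros on the diagonal)
square = squareform(distance)
print(square.shape)  # (9, 9)
```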

    Next, we need to calculate the linkage; the SciPy documentation has information on other built-in methods. I'm using the fastcluster package to speed things up (it's a drop-in replacement for SciPy's cluster module).

    import fastcluster
    linkage = fastcluster.linkage(distance,method="complete")
    
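In case fastcluster isn't installed, here's a sketch of the same step using SciPy's built-in scipy.cluster.hierarchy.linkage (fastcluster's linkage is API-compatible with it), along with what the result actually contains.

```python
from scipy import spatial
from scipy.cluster.hierarchy import linkage as scipy_linkage

data = [[0.1, 0.1, 0.1], [0.1, 0.1, 0.1], [0.1, 0.1, 0.1],
        [0.2, 0.2, 0.2], [0.2, 0.2, 0.2], [0.2, 0.2, 0.2],
        [0.3, 0.3, 0.3], [0.3, 0.3, 0.3], [0.3, 0.3, 0.3]]

distance = spatial.distance.pdist(data)

# Same call shape as fastcluster.linkage(distance, method="complete")
linkage = scipy_linkage(distance, method="complete")

# One row per merge: [cluster_a, cluster_b, merge_distance, new_cluster_size]
print(linkage.shape)  # (8, 4): n - 1 merges for n = 9 observations
```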

    linkage is an array of merge instructions: starting with each observation as its own cluster, each row merges two clusters together, ending with everything in one cluster. SciPy's scipy.cluster.hierarchy.dendrogram function will plot this for you, but if you want the members when there are n clusters (let's say we want 3 in this case), you can do the following.

    # Iterate over the linkage object, merging clusters together
    # until there are clusternum clusters left.
    clusternum = 3
    clustdict = {i: [i] for i in xrange(len(linkage) + 1)}
    for i in xrange(len(linkage) - clusternum + 1):
        clust1 = int(linkage[i][0])
        clust2 = int(linkage[i][1])
        clustdict[max(clustdict) + 1] = clustdict[clust1] + clustdict[clust2]
        del clustdict[clust1], clustdict[clust2]
    

    If we print clustdict, the keys are the cluster numbers and the values are the members of each cluster (as indices into the initial data array).

    print clustdict
    >>> {10: [2, 0, 1], 12: [5, 3, 4], 14: [8, 6, 7]}
    
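Worth mentioning as an alternative to the manual loop above: SciPy ships a built-in that does this flattening for you, scipy.cluster.hierarchy.fcluster with criterion="maxclust", which cuts the tree into a fixed number of flat clusters. A sketch (using SciPy's own linkage in place of fastcluster, which behaves the same):

```python
from scipy import spatial
from scipy.cluster.hierarchy import linkage, fcluster

data = [[0.1, 0.1, 0.1], [0.1, 0.1, 0.1], [0.1, 0.1, 0.1],
        [0.2, 0.2, 0.2], [0.2, 0.2, 0.2], [0.2, 0.2, 0.2],
        [0.3, 0.3, 0.3], [0.3, 0.3, 0.3], [0.3, 0.3, 0.3]]

distance = spatial.distance.pdist(data)
Z = linkage(distance, method="complete")

# Cut the tree so that exactly 3 flat clusters remain;
# labels gives one cluster id per observation, in input order.
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)  # three groups of three identical labels
```

The trade-off is that fcluster gives you a label per observation rather than a dict of member lists, which is often what you want for downstream analysis anyway.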

    Ta da! As we can see from the really synthetic data I supplied, the clustering works wonderfully. I've been doing this with 10,000 observations of 100-dimensional data, and it does the entire thing in about 10 seconds on a 2.3 GHz Intel Core i5.

Original post: https://www.cnblogs.com/lexus/p/2815777.html