Python for Data Science

    Chapter 4 - Clustering Models

    Segment 2 - Hierarchical methods

    Hierarchical Clustering

Hierarchical clustering methods find subgroups within data by measuring the distance between each data point and its nearest neighbors, and then linking the closest points together. Repeating this merging step builds up the full cluster hierarchy.

The distance metric the algorithm uses determines which points get linked, and therefore which subgroups it finds.

To estimate the number of subgroups in a dataset, first look at a dendrogram visualization of the clustering results.

    Hierarchical Clustering Dendrogram

    Dendrogram: a tree graph that's useful for visually displaying taxonomies, lineages, and relatedness

    Hierarchical Clustering Use Cases

    • Hospital Resource Management
    • Customer Segmentation
    • Business Process Management
    • Social Network Analysis

    Hierarchical Clustering Parameters

    Distance Metrics

    • Euclidean
    • Manhattan
    • Cosine

    Linkage Parameters

    • Ward
    • Complete
    • Average

Parameter selection method: there is no formula for the best combination, so use trial and error, comparing different metric and linkage pairings (a short sketch of the metrics follows, and an automated comparison loop closes this section).
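
To make the three distance metrics concrete, here is a minimal sketch (toy data, not the mtcars example used below) comparing them with scipy's pdist; note that 'cityblock' is scipy's name for the Manhattan metric:

import numpy as np
from scipy.spatial.distance import pdist

# two toy observations with three features each
pts = np.array([[1.0, 2.0, 3.0],
                [4.0, 6.0, 3.0]])

print(pdist(pts, metric='euclidean'))    # [5.]  sqrt(3**2 + 4**2 + 0**2)
print(pdist(pts, metric='cityblock'))    # [7.]  |3| + |4| + |0|  (Manhattan)
print(pdist(pts, metric='cosine'))       # ~[0.1445]  1 - cosine similarity of the rows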

    Setting up for clustering analysis

    import numpy as np
    import pandas as pd
    
    import matplotlib.pyplot as plt
    from pylab import rcParams
    import seaborn as sb
    
    import sklearn
    import sklearn.metrics as sm
    
    from sklearn.cluster import AgglomerativeClustering
    
    import scipy
    from scipy.cluster.hierarchy import dendrogram, linkage
    from scipy.cluster.hierarchy import fcluster
    from scipy.cluster.hierarchy import cophenet
    from scipy.spatial.distance import pdist
    
np.set_printoptions(precision=4, suppress=True)
%matplotlib inline
rcParams['figure.figsize'] = 10, 3   # default figure size (uses the pylab import above)
plt.style.use('seaborn-whitegrid')
    
    address = '~/Data/mtcars.csv'
    
    cars = pd.read_csv(address)
    cars.columns = ['car_names','mpg','cyl','disp', 'hp', 'drat', 'wt', 'qsec', 'vs', 'am', 'gear', 'carb']
    
X = cars[['mpg','disp','hp','wt']].values   # features used for clustering

y = cars.iloc[:,(9)].values   # column 9 is 'am': transmission (0 = automatic, 1 = manual)
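
As an optional sanity check that the file loaded and the columns were renamed as intended (assuming the standard 32-row mtcars dataset):

print(cars.head())
print(X.shape)   # expect (32, 4) for the standard mtcars dataset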
    

    Using scipy to generate dendrograms

Z = linkage(X, 'ward')   # linkage matrix built with Ward's method
    
    dendrogram(Z, truncate_mode='lastp', p=12, leaf_rotation=45., leaf_font_size=15, show_contracted=True)
    
plt.title('Truncated Hierarchical Clustering Dendrogram')
    plt.xlabel('Cluster Size')
    plt.ylabel('Distance')
    
plt.axhline(y=500)   # candidate cut heights for choosing the number of clusters
plt.axhline(y=100)
    plt.show()
    

[Output: truncated Ward dendrogram with horizontal reference lines at y=500 and y=100]
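
The cophenet and fcluster imports above go unused in the walkthrough, so here is a minimal sketch of what they add: cophenet scores how faithfully the dendrogram preserves the original pairwise distances, and fcluster cuts the tree at a chosen height, such as the y=500 and y=100 reference lines drawn above, to produce flat cluster labels:

# cophenetic correlation: how well the hierarchy preserves pairwise distances
c, coph_dists = cophenet(Z, pdist(X))
print(c)   # closer to 1.0 means the dendrogram is a more faithful summary

# cut the tree at the heights marked by the axhline calls above
labels_500 = fcluster(Z, t=500, criterion='distance')   # fewer, larger clusters
labels_100 = fcluster(Z, t=100, criterion='distance')   # more, smaller clusters
print(np.unique(labels_500), np.unique(labels_100))

The higher cut is presumably meant to suggest a two-cluster solution, which motivates the choice of k = 2 in the next step.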

    Generating hierarchical clusters

k = 2   # two clusters, matching the two classes in the 'am' target

Hclustering = AgglomerativeClustering(n_clusters=k, affinity='euclidean', linkage='ward')   # newer scikit-learn versions use metric= instead of affinity=
Hclustering.fit(X)

# compares cluster labels against 'am'; assumes cluster label 0 happens to line up with am=0
sm.accuracy_score(y, Hclustering.labels_)
    
    0.78125
    
    Hclustering = AgglomerativeClustering(n_clusters=k, affinity='euclidean', linkage='average')
    Hclustering.fit(X)
    
    sm.accuracy_score(y, Hclustering.labels_)
    
    0.78125
    
    Hclustering = AgglomerativeClustering(n_clusters=k, affinity='manhattan', linkage='average')
    Hclustering.fit(X)
    
    sm.accuracy_score(y, Hclustering.labels_)
    
    0.71875
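
Rather than editing the call three times, the trial-and-error comparison can be automated. Two caveats on this sketch: it uses the metric= keyword that replaced affinity= in newer scikit-learn releases (use affinity= on older versions, as above), and it scores with adjusted_rand_score, which unlike accuracy_score is unaffected by how the cluster labels happen to be numbered:

from sklearn.metrics import adjusted_rand_score

# try each metric/linkage combination from the parameter lists above
for metric in ['euclidean', 'manhattan', 'cosine']:
    for link in ['ward', 'complete', 'average']:
        if link == 'ward' and metric != 'euclidean':
            continue   # scikit-learn only allows Ward linkage with euclidean distance
        model = AgglomerativeClustering(n_clusters=2, metric=metric, linkage=link)
        print(metric, link, adjusted_rand_score(y, model.fit(X).labels_))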