scikit-learn: 4.7. Pairwise metrics, Affinities and Kernels

    Reference: http://scikit-learn.org/stable/modules/metrics.html


    The sklearn.metrics.pairwise submodule implements utilities to evaluate pairwise distances (distances between pairs of samples) or the affinity of sets of samples (similarity between sample sets).

    Distance metrics are functions d(a, b) such that d(a, b) < d(a, c) if objects a and b are considered “more similar” than objects a and c.

    Kernels are measures of similarity, i.e. s(a, b) > s(a, c) if objects a and b are considered “more similar” than objects a and c.
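    Both views are exposed through generic entry points: sklearn.metrics.pairwise_distances builds a distance matrix and sklearn.metrics.pairwise.pairwise_kernels builds a kernel matrix for a chosen metric. A minimal sketch, reusing the small example arrays from the documentation page referenced above:

    >>> import numpy as np
    >>> from sklearn.metrics import pairwise_distances
    >>> from sklearn.metrics.pairwise import pairwise_kernels
    >>> X = np.array([[2, 3], [3, 5], [5, 8]])
    >>> Y = np.array([[1, 0], [2, 1]])
    >>> pairwise_distances(X, Y, metric='manhattan')   # distances between rows of X and rows of Y
    array([[ 4.,  2.],
           [ 7.,  5.],
           [12., 10.]])
    >>> pairwise_kernels(X, Y, metric='linear')        # similarity (kernel) matrix for the same rows
    array([[ 2.,  7.],
           [ 3., 11.],
           [ 5., 18.]])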


    1、Cosine similarity

    Cosine similarity computes the L2-normalized dot product of vectors:

    If x and y are row vectors, their cosine similarity k is defined as:

    k(x, y) = x y^T / (||x|| ||y||)
    This kernel is a popular choice for computing the similarity of documents represented as tf-idf vectors.
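    As a small sketch of that use case, cosine_similarity can be applied directly to the (sparse) output of TfidfVectorizer; the toy documents below are made up purely for illustration:

    >>> from sklearn.feature_extraction.text import TfidfVectorizer
    >>> from sklearn.metrics.pairwise import cosine_similarity
    >>> docs = ["machine learning with kernels",
    ...         "kernels for machine learning",
    ...         "cooking recipes for dinner"]
    >>> tfidf = TfidfVectorizer().fit_transform(docs)   # sparse tf-idf matrix, one row per document
    >>> S = cosine_similarity(tfidf)                    # pairwise similarities between the rows
    >>> S.shape
    (3, 3)
    >>> bool(S[0, 1] > S[0, 2])   # the two machine-learning documents are more alike than docs 0 and 2
    True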


    2、Linear kernel

    If x and y are column vectors, their linear kernel is:

    k(x, y) = x^T y
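    In scikit-learn this is exposed as linear_kernel; a minimal sketch (the array below is made up for illustration) simply checks that it agrees with a plain dot product:

    >>> import numpy as np
    >>> from sklearn.metrics.pairwise import linear_kernel
    >>> X = np.array([[2., 3.], [3., 5.], [5., 8.]])
    >>> K = linear_kernel(X)        # Gram matrix of dot products between the rows of X
    >>> np.allclose(K, X @ X.T)
    True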


    3、Polynomial kernel

    Conceptually, the polynomial kernel considers not only the similarity between vectors within the same dimension, but also across dimensions. When used in machine learning algorithms, this makes it possible to account for feature interaction.

    The polynomial kernel is defined as:

    k(x, y) = (gamma * x^T y + c0)^d

    where x and y are the input vectors, d is the kernel degree and c0 is known as coef0.

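    A minimal sketch with made-up numbers, checking polynomial_kernel against the formula above (gamma, coef0 and degree map directly to the function arguments):

    >>> import numpy as np
    >>> from sklearn.metrics.pairwise import polynomial_kernel
    >>> X = np.array([[0., 1.], [1., 2.]])
    >>> Y = np.array([[1., 1.]])
    >>> K = polynomial_kernel(X, Y, degree=2, gamma=1.0, coef0=1.0)
    >>> np.allclose(K, (1.0 * (X @ Y.T) + 1.0) ** 2)   # (gamma * x^T y + c0)^d
    True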
    4、Sigmoid kernel

    The sigmoid kernel, also known as the hyperbolic tangent kernel, is defined as:

    k(x, y) = tanh(gamma * x^T y + c0)

    where x and y are the input vectors, gamma is known as the slope and c0 is known as the intercept.

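    A minimal sketch with made-up numbers, checking sigmoid_kernel against the formula above:

    >>> import numpy as np
    >>> from sklearn.metrics.pairwise import sigmoid_kernel
    >>> X = np.array([[0., 1.], [1., 2.]])
    >>> Y = np.array([[1., 1.]])
    >>> K = sigmoid_kernel(X, Y, gamma=0.5, coef0=1.0)
    >>> np.allclose(K, np.tanh(0.5 * (X @ Y.T) + 1.0))   # tanh(gamma * x^T y + c0)
    True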

    5、RBF kernel

    The RBF kernel is defined as:

    k(x, y) = exp(-gamma * ||x - y||^2)

    where x and y are the input vectors. If gamma = sigma^-2, the kernel is known as the Gaussian kernel of variance sigma^2.


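    A minimal sketch with made-up points, checking rbf_kernel against the formula above via squared Euclidean distances:

    >>> import numpy as np
    >>> from sklearn.metrics.pairwise import rbf_kernel, euclidean_distances
    >>> X = np.array([[0., 0.], [1., 1.], [2., 0.]])
    >>> K = rbf_kernel(X, gamma=0.5)
    >>> np.allclose(K, np.exp(-0.5 * euclidean_distances(X, squared=True)))   # exp(-gamma * ||x - y||^2)
    True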

    6、Chi-squared kernel

    The chi-squared kernel is defined as:

    k(x, y) = exp(-gamma * sum_i (x[i] - y[i])^2 / (x[i] + y[i]))

    The data is assumed to be non-negative, and is often normalized to have an L1 norm of one.

    The chi-squared kernel is a very popular choice for training non-linear SVMs in computer vision applications. It can be computed using chi2_kernel and then passed to an sklearn.svm.SVC with kernel="precomputed":

    >>> from sklearn.svm import SVC
    >>> from sklearn.metrics.pairwise import chi2_kernel
    >>> X = [[0, 1], [1, 0], [.2, .8], [.7, .3]]
    >>> y = [0, 1, 0, 1]
    >>> K = chi2_kernel(X, gamma=.5)
    >>> K
    array([[ 1.        ,  0.36...,  0.89...,  0.58...],
           [ 0.36...,  1.        ,  0.51...,  0.83...],
           [ 0.89...,  0.51...,  1.        ,  0.77...],
           [ 0.58...,  0.83...,  0.77...,  1.        ]])
    
    >>> svm = SVC(kernel='precomputed').fit(K, y)
    >>> svm.predict(K)
    array([0, 1, 0, 1])
    

    It can also be directly used as the kernel argument:

    >>> svm = SVC(kernel=chi2_kernel).fit(X, y)
    >>> svm.predict(X)
    array([0, 1, 0, 1])


    Original post: https://www.cnblogs.com/yangykaifa/p/7136902.html