  • Machine Learning Notes: sklearn.model_selection

    sklearn provides many packages for machine learning, but if you never look into them you will fumble when it is time to use them. This post organizes them; the order is simply the order in which I personally wanted to learn them.

    If the toolkit is completely unfamiliar at first, try reading the User Guide; most browsers (e.g. Chrome) can translate the page, and when a concept is unclear you can switch back to the English original.

    0. Overview

    2. sklearn.model_selection

    sklearn has thorough official documentation (sklearn.model_selection) and a usage guide (3. Model selection and evaluation), so this is only a personal study log that follows the official docs.

    2.1 Splitter Functions

     

    2.1.1 train_test_split: split a dataset into training and test sets

    # train_test_split
    from sklearn.model_selection import train_test_split
    from sklearn.datasets import make_classification

    SEED = 666
    X, y = make_classification(n_samples=100,
                               n_features=20,
                               shuffle=True,
                               random_state=SEED)

    print("before split:", X.shape, y.shape)
    # note: train_test_split returns X_train, X_test, y_train, y_test in this order
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=SEED)
    print("after split:", X_train.shape, y_train.shape, X_test.shape, y_test.shape)

    2.1.2 check_cv: a quick default 5-fold split

    • check_cv returns a KFold instance here (with classifier=False)

    • check_cv does not shuffle, so the folds are fixed; e.g. 100 samples split into five folds always gives the index blocks (0-19), (20-39), (40-59), (60-79), (80-99)

    # check_cv
    from sklearn.model_selection import check_cv
    from sklearn.datasets import make_classification

    SEED = 666
    X, y = make_classification(n_samples=100,
                               n_features=20,
                               shuffle=True,
                               random_state=SEED)

    print("before split:", X.shape, y.shape)
    aKFold = check_cv(cv=5, y=y, classifier=False)  # returns a KFold instance
    for train_index, test_index in aKFold.split(X):
        # train_index, test_index are arrays of indices
        # print("%s %s" % (train_index, test_index))
        X_train, y_train, X_test, y_test = X[train_index], y[train_index], X[test_index], y[test_index]
        print("after split:", X_train.shape, y_train.shape, X_test.shape, y_test.shape)
        

    2.2 Splitter Classes

     

     

     

    sklearn offers 15 dataset splitters here, designed to flexibly cover all kinds of splitting needs. The variety of extensions can be dizzying at first; often it only becomes clear why a particular splitter exists once you hit the use case that needs it. So they are grouped below, starting from the simplest.

     

     

    2.2.1 K-fold splitting: KFold

    • Defaults to a 5-fold split, without shuffling and without replacement

    • With shuffle=True the folds are no longer fixed; set random_state to make them reproducible (see the check after the code below)

       

    # KFold
    # K-fold cross-validation: the data are split into K folds; each round uses one fold
    # as the test set and the other K-1 folds as the training set (5 folds by default);
    # shuffle=True shuffles the samples before splitting
    from sklearn.model_selection import KFold
    from sklearn.datasets import make_classification

    SEED = 666
    X, y = make_classification(n_samples=100,
                               n_features=20,
                               shuffle=True,
                               random_state=SEED)

    print("before split:", X.shape, y.shape)
    aKFold = KFold(n_splits=5, shuffle=True, random_state=SEED)  # a KFold instance; with shuffle=True the indices are no longer fixed
    for train_index, test_index in aKFold.split(X):
        # train_index, test_index are arrays of indices
        print("%s %s" % (train_index, test_index))
        X_train, y_train, X_test, y_test = X[train_index], y[train_index], X[test_index], y[test_index]
        print("after split:", X_train.shape, y_train.shape, X_test.shape, y_test.shape)

    2.2.2 K-fold splitting: GroupKFold

    • GroupKFold(n_splits=5): returns a GroupKFold instance

    • GroupKFold.get_n_splits(self, X=None, y=None, groups=None): returns the number of folds

    • split(self, X, y=None, groups=None): returns an iterator over (train_index, test_index) pairs; the split is driven by the groups argument so that the same group never appears in both the training and the test set

    # GroupKFold
    # Simple K-fold split (5 folds by default) that requires groups; samples sharing a
    # group label never appear in both the training and the test set
    from sklearn.model_selection import GroupKFold
    import numpy as np
    import pandas as pd
    from sklearn.datasets import make_classification
    # help(GroupKFold)

    SEED = 666
    X, y = make_classification(n_samples=100,
                               n_features=20,
                               shuffle=True,
                               n_classes=2,
                               n_clusters_per_class=1,
                               n_informative=18,
                               weights=[0.1, 0.9],
                               random_state=SEED)

    print("before split:", X.shape, y.shape)
    print("class counts before split")
    print(pd.DataFrame(y).value_counts())

    group_kfold = GroupKFold(n_splits=2)  # n_splits must not exceed the number of distinct groups
    # group_kfold.get_n_splits(X, y, y)
    # the split is driven by the groups argument; passing the class labels y as groups
    # means each fold tests on exactly one of the two classes
    for train_index, test_index in group_kfold.split(X, y, groups=y):
        print("split --------------------------------------------------")
        print("training set classes:\n", pd.DataFrame(y[train_index]).value_counts())
        print("test set classes:\n", pd.DataFrame(y[test_index]).value_counts())
        print("TRAIN:", train_index, "TEST:", test_index)
        X_train, X_test = X[train_index], X[test_index]
        y_train, y_test = y[train_index], y[test_index]
        print("after split:", X_train.shape, y_train.shape, X_test.shape, y_test.shape)

    2.2.3 K-fold splitting: StratifiedKFold

    • Generates test sets such that each contains (approximately) the same class distribution as the full dataset.

    • It is invariant to the class labels themselves: relabelling y = ["Happy", "Sad"] to y = [1, 0] does not change the indices produced.

    • Preserves order dependencies in the dataset ordering when shuffle=False: all samples of class k in a given test set were contiguous in y, or separated in y only by samples from classes other than k.

    • Generates test sets whose sizes differ by at most one sample.

    # StratifiedKFold
    # Unlike KFold, split requires y; the class proportions of y are preserved in every
    # train/test split (compare GroupKFold, which works from the groups argument instead)
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import *
    from sklearn.datasets import make_classification

    SEED = 666

    X, y = make_classification(n_samples=200,
                               n_features=20,
                               shuffle=True,
                               n_classes=3,
                               n_clusters_per_class=1,
                               n_informative=18,
                               random_state=SEED)

    skf = StratifiedKFold(n_splits=5, shuffle=False, random_state=None)

    print("class counts before split")
    print(pd.DataFrame(y).value_counts())
    for train_idx, test_idx in skf.split(X, y):
        print("split --------------------------------------------------")
        print("training set classes:\n", pd.DataFrame(y[train_idx]).value_counts())
        print("test set classes:\n", pd.DataFrame(y[test_idx]).value_counts())
     

    2.2.4 K-fold splitting: StratifiedGroupKFold

    # StratifiedGroupKFold
    # split requires X, y and groups: whole groups are kept together (as in GroupKFold)
    # while the class distribution of y is preserved in each fold as far as possible;
    # n_splits must not exceed the number of distinct groups
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import *
    from sklearn.datasets import make_classification

    SEED = 666

    X, y = make_classification(n_samples=30,
                               n_features=20,
                               shuffle=True,
                               n_classes=2,
                               n_clusters_per_class=1,
                               n_informative=18,
                               random_state=SEED)

    sgk = StratifiedGroupKFold(n_splits=3, shuffle=False, random_state=None)

    print("class counts before split")
    print(pd.DataFrame(y).value_counts())
    groups = np.hstack((np.zeros(10), np.ones(10), np.ones(10) + 1))  # three groups of 10 samples

    for train_idx, test_idx in sgk.split(X, y, groups):
        print("TRAIN:", train_idx, "TEST:", test_idx)
        print("training set classes:\n", pd.DataFrame(y[train_idx]).value_counts())
        print("test set classes:\n", pd.DataFrame(y[test_idx]).value_counts())

    2.2.5 K-fold splitting: RepeatedKFold

    # RepeatedKFold
    # repeats an n_splits-fold KFold split n_repeats times, for n_splits * n_repeats splits in total
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import *
    from sklearn.datasets import make_classification

    SEED = 666

    X, y = make_classification(n_samples=20,
                               n_features=20,
                               shuffle=True,
                               n_classes=6,
                               n_clusters_per_class=1,
                               n_informative=18,
                               weights=[0.1, 0.5, 0.1, 0.1, 0.1, 0.1],
                               random_state=SEED)

    rkf = RepeatedKFold(n_splits=4, n_repeats=2, random_state=666)

    for train_idx, test_idx in rkf.split(X):
        print("TRAIN:", train_idx, "TEST:", test_idx)

    2.2.6 K-fold splitting: RepeatedStratifiedKFold

    # RepeatedStratifiedKFold
    # repeats an n_splits-fold StratifiedKFold split n_repeats times, for n_splits * n_repeats splits in total
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import *
    from sklearn.datasets import make_classification

    SEED = 666

    X, y = make_classification(n_samples=30,
                               n_features=20,
                               shuffle=True,
                               n_classes=2,
                               n_clusters_per_class=1,
                               n_informative=18,
                               random_state=SEED)

    rskf = RepeatedStratifiedKFold(n_splits=3, n_repeats=2, random_state=SEED)

    print("class counts before split")
    print(pd.DataFrame(y).value_counts())

    for train_idx, test_idx in rskf.split(X, y):
        print("training set classes:\n", pd.DataFrame(y[train_idx]).value_counts())
        print("test set classes:\n", pd.DataFrame(y[test_idx]).value_counts())

    2.2.7 Random splitting: ShuffleSplit

    # ShuffleSplit
    # Unlike K-fold splitting, ShuffleSplit lets you choose the number of splits and the
    # test-set fraction of each split; set random_state to make the splits reproducible
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import *
    from sklearn.datasets import make_classification

    SEED = 666

    X, y = make_classification(n_samples=100,
                               n_features=20,
                               shuffle=True,
                               n_classes=6,
                               n_clusters_per_class=1,
                               n_informative=18,
                               weights=[0.1, 0.5, 0.1, 0.1, 0.1, 0.1],
                               random_state=SEED)

    # with test_size=None and train_size=None, test_size defaults to 0.1
    ss = ShuffleSplit(n_splits=10, test_size=None, train_size=None, random_state=None)
    print("class counts before split")
    print(pd.DataFrame(y).value_counts())

    for train_idx, test_idx in ss.split(X, y):
        print("split --------------------------------------------------")
        print("training set classes:\n", pd.DataFrame(y[train_idx]).value_counts())
        print("test set classes:\n", pd.DataFrame(y[test_idx]).value_counts())

    2.2.8 Random splitting: GroupShuffleSplit

    # GroupShuffleSplit
    # number of splits and test-set fraction configurable; requires groups, and samples
    # are held out by whole groups
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import GroupShuffleSplit
    from sklearn.datasets import make_classification

    SEED = 666

    X, y = make_classification(n_samples=100,
                               n_features=20,
                               shuffle=True,
                               n_classes=4,
                               n_clusters_per_class=1,
                               n_informative=18,
                               weights=[0.1, 0.6, 0.2, 0.1],
                               random_state=SEED)

    gss = GroupShuffleSplit(n_splits=5, test_size=0.2, train_size=None, random_state=SEED)
    print("class counts before split")
    print(pd.DataFrame(y).value_counts())

    # the split is driven entirely by the groups argument (here the class labels y)
    for train_idx, test_idx in gss.split(X, y, groups=y):
        print("split --------------------------------------------------")
        print("training set classes:\n", pd.DataFrame(y[train_idx]).value_counts())
        print("test set classes:\n", pd.DataFrame(y[test_idx]).value_counts())

    2.2.9 Random splitting: StratifiedShuffleSplit

    # StratifiedShuffleSplit
    # number of splits and test-set fraction configurable; requires X and y, and keeps
    # the class proportions of y similar in the resulting train and test sets
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import *
    from sklearn.datasets import make_classification

    SEED = 666

    X, y = make_classification(n_samples=200,
                               n_features=20,
                               shuffle=True,
                               n_classes=3,
                               n_clusters_per_class=1,
                               n_informative=18,
                               random_state=SEED)

    sss = StratifiedShuffleSplit(n_splits=3, test_size=None, train_size=None, random_state=SEED)

    print("class counts before split")
    print(pd.DataFrame(y).value_counts())
    for train_idx, test_idx in sss.split(X, y):
        print("split --------------------------------------------------")
        print("training set classes:\n", pd.DataFrame(y[train_idx]).value_counts())
        print("test set classes:\n", pd.DataFrame(y[test_idx]).value_counts())

    2.2.10 Leave-one-out: LeaveOneOut

    # LeaveOneOut
    # each split keeps exactly one sample as the test set; n samples yield n splits
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import *
    from sklearn.datasets import make_classification

    SEED = 666

    X, y = make_classification(n_samples=100,
                               n_features=20,
                               shuffle=True,
                               n_classes=6,
                               n_clusters_per_class=1,
                               n_informative=18,
                               weights=[0.1, 0.5, 0.1, 0.1, 0.1, 0.1],
                               random_state=SEED)

    loo = LeaveOneOut()
    print("class counts before split")
    print(pd.DataFrame(y).value_counts())

    for train_idx, test_idx in loo.split(X, y):
        print("split --------------------------------------------------")
        print("training set classes:\n", pd.DataFrame(y[train_idx]).value_counts())
        print("test set classes:\n", pd.DataFrame(y[test_idx]).value_counts())

    2.2.11 Leave-one-group-out: LeaveOneGroupOut

    # LeaveOneGroupOut
    # samples are grouped by the groups argument; each split leaves one whole group out as the test set
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import LeaveOneGroupOut
    from sklearn.datasets import make_classification

    SEED = 666

    X, y = make_classification(n_samples=100,
                               n_features=20,
                               shuffle=True,
                               n_classes=6,
                               n_clusters_per_class=1,
                               n_informative=18,
                               weights=[0.1, 0.5, 0.1, 0.1, 0.1, 0.1],
                               random_state=SEED)

    logo = LeaveOneGroupOut()
    print("class counts before split")
    print(pd.DataFrame(y).value_counts())

    for train_idx, test_idx in logo.split(X, y, groups=y):
        print("split --------------------------------------------------")
        print("training set classes:\n", pd.DataFrame(y[train_idx]).value_counts())
        print("test set classes:\n", pd.DataFrame(y[test_idx]).value_counts())


    2.2.12 Leave-P-out: LeavePOut

    # LeavePOut
    # enumerates every combination of p samples as the test set: C(n, p) splits in
    # total, so keep p small; LeavePOut(10) on 100 samples would already mean ~1.7e13 splits
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import *
    from sklearn.datasets import make_classification

    SEED = 666

    X, y = make_classification(n_samples=100,
                               n_features=20,
                               shuffle=True,
                               n_classes=6,
                               n_clusters_per_class=1,
                               n_informative=18,
                               weights=[0.1, 0.5, 0.1, 0.1, 0.1, 0.1],
                               random_state=SEED)

    lpo = LeavePOut(2)  # C(100, 2) = 4950 splits
    print("class counts before split")
    print(pd.DataFrame(y).value_counts())

    for train_idx, test_idx in lpo.split(X, y):
        print("split --------------------------------------------------")
        print("training set classes:\n", pd.DataFrame(y[train_idx]).value_counts())
        print("test set classes:\n", pd.DataFrame(y[test_idx]).value_counts())

    2.2.13 Leave-P-groups-out: LeavePGroupsOut

    # LeavePGroupsOut
    # enumerates every combination of p groups as the test set; here groups=y with 6
    # classes, so there are C(6, 2) = 15 splits
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import *
    from sklearn.datasets import make_classification

    SEED = 666

    X, y = make_classification(n_samples=100,
                               n_features=20,
                               shuffle=True,
                               n_classes=6,
                               n_clusters_per_class=1,
                               n_informative=18,
                               weights=[0.1, 0.5, 0.1, 0.1, 0.1, 0.1],
                               random_state=SEED)

    lpgo = LeavePGroupsOut(2)
    print("class counts before split")
    print(pd.DataFrame(y).value_counts())

    for train_idx, test_idx in lpgo.split(X, y, groups=y):
        print("split --------------------------------------------------")
        print("training set classes:\n", pd.DataFrame(y[train_idx]).value_counts())
        print("test set classes:\n", pd.DataFrame(y[test_idx]).value_counts())

    2.2.14 Predefined splits: PredefinedSplit

    # PredefinedSplit
    # Split according to a pre-specified assignment: if test_fold contains the three
    # labels 0, 1, 2, there are three splits, each using one label as the test set;
    # samples marked -1 are never in any test set (they always stay in training)
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import *
    from sklearn.datasets import make_classification

    SEED = 666

    X, y = make_classification(n_samples=100,
                               n_features=20,
                               shuffle=True,
                               n_classes=6,
                               n_clusters_per_class=1,
                               n_informative=18,
                               weights=[0.1, 0.5, 0.1, 0.1, 0.1, 0.1],
                               random_state=SEED)

    # three test-fold labels 0, 1, 2; the samples set to -1 are always in the training set
    test_fold = np.hstack((np.zeros(20), np.ones(40), np.ones(10) + 1, np.zeros(30) - 1))
    print(test_fold)
    pres = PredefinedSplit(test_fold)

    for train_idx, test_idx in pres.split():
        print("TRAIN:", train_idx, "TEST:", test_idx)

    2.2.15 Time-window splitting: TimeSeriesSplit

    Splitting for time-series data.

    # TimeSeriesSplit
    # time-series split in a growing-window style: the first n samples form the
    # training set and the following sample(s) form the test set
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import *

    X = np.array([[1, 2], [3, 4], [1, 2], [3, 4], [1, 2], [3, 4]])
    y = np.array([1, 2, 3, 4, 5, 6])

    tss = TimeSeriesSplit(n_splits=5, max_train_size=None, test_size=None, gap=0)

    for train_idx, test_idx in tss.split(X):
        print("TRAIN:", train_idx, "TEST:", test_idx)
    '''
    TRAIN: [0] TEST: [1]
    TRAIN: [0 1] TEST: [2]
    TRAIN: [0 1 2] TEST: [3]
    TRAIN: [0 1 2 3] TEST: [4]
    TRAIN: [0 1 2 3 4] TEST: [5]
    '''
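    The max_train_size, test_size and gap parameters in the signature reshape the windows; a minimal sketch with hypothetical values, where gap=1 skips one sample between the end of each training window and the start of the test window:

    # gap leaves a buffer between train and test
    import numpy as np
    from sklearn.model_selection import TimeSeriesSplit

    X_demo = np.arange(16).reshape(8, 2)
    tss_gap = TimeSeriesSplit(n_splits=3, test_size=1, gap=1)
    for train_idx, test_idx in tss_gap.split(X_demo):
        print("TRAIN:", train_idx, "TEST:", test_idx)
    # first split: TRAIN: [0 1 2 3] TEST: [5]  (index 4 falls in the gap)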

     

    2.2.x Appendix: quick notes

    # all 15 splitters
    # --------------------------- K-fold splitting ------------------------------------
    # K-fold cross-validation: split the data into K folds; each round uses one fold as
    # the test set and the other K-1 folds as the training set
    # (5 folds by default); shuffle=True shuffles before splitting
    KFold(n_splits=5, shuffle=True, random_state=SEED)
    for train_index, test_index in aKFold.split(X)

    # Simple K-fold split (5 folds by default) that requires groups; the same group
    # never appears in both the training and the test set
    GroupKFold(n_splits=5)
    for train_index, test_index in group_kfold.split(X, y, groups=y)

    # Unlike KFold, split requires y; the class proportions of y are preserved in each
    # split (compare GroupKFold, which works from the groups argument)
    StratifiedKFold(n_splits=5, shuffle=False, random_state=None)

    # split requires X, y and groups; whole groups are kept together while the class
    # distribution of y is preserved in each fold as far as possible; n_splits must
    # not exceed the number of distinct groups
    StratifiedGroupKFold(n_splits=3, shuffle=False, random_state=None)

    # repeats an n_splits-fold KFold split n_repeats times: n_splits * n_repeats splits in total
    RepeatedKFold(n_splits=4, n_repeats=2, random_state=666)

    # repeats an n_splits-fold StratifiedKFold split n_repeats times: n_splits * n_repeats splits in total
    RepeatedStratifiedKFold(n_splits=3, n_repeats=2, random_state=SEED)

    # ------------------- ShuffleSplit ----------------------------------
    # Unlike K-fold splitting, ShuffleSplit lets you choose the number of splits and
    # the test-set fraction per split; set random_state to make the splits reproducible
    ShuffleSplit(n_splits=5, test_size=0.25, train_size=None, random_state=666)

    # number of splits and test-set fraction configurable; requires X and y; the class
    # proportions of y are similar in the resulting train and test sets
    StratifiedShuffleSplit(n_splits=3, test_size=None, train_size=None, random_state=SEED)

    # number of splits and test-set fraction configurable; requires groups and splits by group
    GroupShuffleSplit(n_splits=5, test_size=None, train_size=None, random_state=None)
    for train_idx, test_idx in gss.split(X, y=None, groups=y)

    # ------------------------- leave-one-out -----------------------------------------
    # Leave-one-out and its generalization leave-P-out keep 1 (or P) samples (or groups)
    # as the test set and the rest as the training set; the number of splits follows
    # from the number of samples and need not be specified
    # leave-one-out keeps exactly one sample as the test set; n samples yield n splits
    LeaveOneOut()
    for train_idx, test_idx in loo.split(X)

    # group-wise leave-one-out: samples are grouped by the groups argument and one
    # whole group is left out per split
    LeaveOneGroupOut()
    for train_idx, test_idx in logo.split(X, y, groups=y):

    # generalization of leave-one-out; LeavePOut(1) is equivalent to LeaveOneOut()
    # (but note it enumerates all C(n, p) combinations, which explodes for large p)
    LeavePOut(p)

    # generalization of leave-one-group-out; LeavePGroupsOut(1) is equivalent to LeaveOneGroupOut()
    LeavePGroupsOut(p)

    # -------------------------- predefined splits -----------------------------------------
    # Split according to a pre-specified assignment: if test_fold contains the three
    # labels 0, 1, 2 there are three splits, each using one label as the test set;
    # samples marked -1 are never in any test set (always in training)
    test_fold = np.hstack((np.zeros(20), np.ones(40), np.ones(10)+1, np.zeros(30)-1))
    pres = PredefinedSplit(test_fold)

    # ------------------------- time-series splitting -----------------------------------
    # Time-series split in a growing-window style: the first n samples form the
    # training set and the following samples form the test set
    TimeSeriesSplit(n_splits=5, max_train_size=None, test_size=None, gap=0)


    2.3 Model validation

     

    2.3.1 cross_val_score

    cross_val_score is the simplest model-validation helper: pass in the estimator to validate, the feature matrix X and the label column y; the cross-validation splitting rule can be customized via cv, and the way scores are computed via scoring.

    scoring accepts a custom callable, or one of the metric names listed in 3.3.1 The scoring parameter: defining model evaluation rules, which covers the common metrics.

    For classification, besides accuracy, precision and recall, 'f1' is common for binary problems, and 'f1_micro' / 'f1_macro' for multiclass problems.

    The result is a single numpy.ndarray holding one validation score per fold.
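    Besides the predefined metric names, scoring also accepts a callable; a minimal sketch building one with make_scorer (the metric choice here is illustrative):

    # wrap a metric function into a scorer usable as the scoring argument
    from sklearn.metrics import make_scorer, f1_score
    f1_macro_scorer = make_scorer(f1_score, average="macro")
    # e.g. cross_val_score(clf, X, y, cv=5, scoring=f1_macro_scorer)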

    import numpy as np
    import pandas as pd
    from sklearn.model_selection import *
    from sklearn.svm import *
    from sklearn.datasets import load_iris

    iris_data_bunch = load_iris()
    X = iris_data_bunch.data
    y = iris_data_bunch.target

    '''
    cross_val_score(
        estimator, # an estimator implementing fit
        X, # feature matrix
        y=None, # label column
        groups=None, # forwarded to the split method of the splitter
        scoring=None, # a custom scorer(estimator, X, y) callable or a metric name string, see the official docs
        cv=None, # splitting strategy; defaults to KFold or StratifiedKFold (depending on y); a splitter instance or a custom splitter can be passed
        n_jobs=None,
        verbose=0,
        fit_params=None,
        pre_dispatch='2*n_jobs',
        error_score=nan
    )
    '''

    clf = SVC(kernel='linear', C=1, random_state=666)
    scores = cross_val_score(clf, X, y, cv=5, scoring='f1_micro')
    # one score per fold of the 5-fold cross-validation, as a numpy.ndarray
    scores
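    The cv argument also accepts any splitter instance from section 2.2 instead of an integer; a minimal sketch reusing the clf, X, y defined above:

    # drive the validation loop with a ShuffleSplit instead of 5-fold KFold
    from sklearn.model_selection import ShuffleSplit
    cv_ss = ShuffleSplit(n_splits=3, test_size=0.2, random_state=666)
    print(cross_val_score(clf, X, y, cv=cv_ss, scoring='f1_micro'))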

    2.3.2 cross_validate: the function and multi-metric evaluation

    The cross_validate function differs from cross_val_score in two ways:

    • It allows specifying multiple evaluation metrics.

    • In addition to the test scores, it returns a dict containing fit times and score times (and optionally training scores and the fitted estimators).

    Where classification is typically evaluated with accuracy, regression is evaluated with MSE, RMSE, MAE or R-squared; scoring names again follow 3.3.1 The scoring parameter: defining model evaluation rules.


    import numpy as np
    import pandas as pd
    from sklearn.model_selection import *
    from sklearn.svm import *
    from sklearn.datasets import load_diabetes

    diabetes_data_bunch = load_diabetes()
    X = diabetes_data_bunch.data
    y = diabetes_data_bunch.target

    '''
    cross_validate(
        estimator, # an estimator implementing fit
        X, # feature matrix
        y=None, # label column
        groups=None, # forwarded to the split method of the splitter
        scoring=None, # a custom scorer(estimator, X, y) callable or metric name(s); multiple metrics may be passed
        cv=None, # splitting strategy; defaults to KFold or StratifiedKFold (depending on y); a splitter instance or a custom splitter can be passed
        n_jobs=None,
        verbose=0,
        fit_params=None,
        pre_dispatch='2*n_jobs',
        return_train_score=False, # whether to also return training-set scores
        return_estimator=False, # whether to return the fitted estimator of each fold
        error_score=nan
    )
    '''

    scoring = ['neg_mean_squared_error', 'neg_root_mean_squared_error', 'neg_mean_absolute_error', 'r2']

    clf = SVR(kernel='linear', C=1)

    scores = cross_validate(clf, X, y, cv=5, scoring=scoring, return_train_score=True, return_estimator=True)

    scores
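    The returned object is a dict keyed by fit_time, score_time and test_<metric> for every entry in scoring, plus train_<metric> and estimator when the corresponding flags are set; for example:

    # inspect what cross_validate returned
    print(sorted(scores.keys()))
    print(scores['test_r2'])   # one value per fold
    print(scores['fit_time'])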
     

    2.3.3 cross_val_predict: obtaining predictions via cross-validation

    cross_val_predict is appropriate for:

    • Visualizing predictions obtained from different models.

    • Model blending: when the predictions of one supervised estimator are used to train another estimator in an ensemble method (see the sketch after the code below).

    import numpy as np
    import pandas as pd
    from sklearn.model_selection import *
    from sklearn.svm import *
    from sklearn.datasets import load_iris
    iris_data_bunch = load_iris()
    X = iris_data_bunch.data
    y = iris_data_bunch.target

    '''
    cross_val_predict(
        estimator, # an estimator implementing fit
        X, # feature matrix
        y=None, # label column
        groups=None, # forwarded to the split method of the splitter
        cv=None, # splitting strategy; defaults to KFold or StratifiedKFold (depending on y); a splitter instance or a custom splitter can be passed
        n_jobs=None,
        verbose=0,
        fit_params=None,
        pre_dispatch='2*n_jobs',
        method='predict' # {'predict', 'predict_proba', 'predict_log_proba', 'decision_function'}
    )
    '''

    clf = SVC(kernel='linear', C=1, random_state=666)
    y_pred = cross_val_predict(clf, X, y, cv=5)
    # cross-validated predictions for every sample in the training data
    y_pred
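    As a sketch of the model-blending use case mentioned above (the second-level model is illustrative, not from the original post): out-of-fold predictions from one estimator become input features for another, which avoids fitting both on the same rows.

    # out-of-fold decision values as features for a second-level model
    from sklearn.linear_model import LogisticRegression
    oof = cross_val_predict(clf, X, y, cv=5, method='decision_function')
    blender = LogisticRegression(max_iter=1000).fit(oof, y)
    print(blender.score(oof, y))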

    2.3.4 validation_curve

    Plotting a validation curve shows how the training and validation scores change as one hyper-parameter is varied.

    import numpy as np
    from sklearn.model_selection import validation_curve
    from sklearn.datasets import load_iris
    from sklearn.linear_model import Ridge

    '''
    validation_curve(
        estimator,
        X,
        y,
        param_name,
        param_range,
        groups=None,
        cv=None,
        scoring=None,
        n_jobs=None,
        pre_dispatch='all',
        verbose=0,
        error_score=nan,
        fit_params=None
    )
    '''
    np.random.seed(0)
    X, y = load_iris(return_X_y=True)
    indices = np.arange(y.shape[0])
    np.random.shuffle(indices)
    X, y = X[indices], y[indices]

    param_name = 'alpha'
    param_range = np.logspace(-10, 1, 10)

    train_scores, valid_scores = validation_curve(
        Ridge(), X, y, param_name=param_name, param_range=param_range,
        cv=5)

    print('parameter values:', param_range)

    print('train_scores:', np.average(train_scores, axis=1))

    print("valid_scores:", np.average(valid_scores, axis=1))

    import matplotlib.pyplot as plt
    plt.plot(param_range, np.average(train_scores, axis=1))
    plt.plot(param_range, np.average(valid_scores, axis=1))
    plt.show()
     

     

    2.3.5 learning_curve

    learning_curve plots how the training and validation scores evolve as the amount of training data grows.

    import numpy as np
    from sklearn.model_selection import learning_curve
    from sklearn.svm import SVC
    from sklearn.datasets import load_iris

    np.random.seed(0)
    X, y = load_iris(return_X_y=True)
    indices = np.arange(y.shape[0])
    np.random.shuffle(indices)
    X, y = X[indices], y[indices]

    train_sizes = [x for x in range(10, 120)]

    train_sizes, train_scores, valid_scores = learning_curve(
        SVC(kernel='linear'), X, y, train_sizes=train_sizes, cv=5, scoring='f1_micro')

    # print('training sizes:', train_sizes)
    # print('train_scores:', np.average(train_scores, axis=1))
    # print('valid_scores:', np.average(valid_scores, axis=1))
    import matplotlib.pyplot as plt
    plt.plot(train_sizes, np.average(train_scores, axis=1), label='train_scores')
    plt.plot(train_sizes, np.average(valid_scores, axis=1), label='valid_scores')
    plt.legend(loc="right")
    plt.show()

     

    2.4 Hyper-parameter optimizers

     

     

    scikit-learn provides two generic approaches to parameter search: GridSearchCV exhaustively considers all parameter combinations for the given values, while RandomizedSearchCV samples a given number of candidates from a parameter space with specified distributions. Both tools have successive-halving counterparts, HalvingGridSearchCV and HalvingRandomSearchCV, which can find a good parameter combination much faster.

    2.4.1 GridSearchCV: exhaustive grid search

    For a usage example, see the demonstration of multi-metric evaluation on cross_val_score and GridSearchCV.

    # GridSearchCV
    '''
    GridSearchCV(estimator,
                 param_grid, # dict mapping parameter names to candidate values
                 scoring=None, # evaluation metric(s)
                 n_jobs=None,
                 refit=True,
                 cv=None, # number of cross-validation folds or a splitter
                 verbose=0,
                 pre_dispatch='2*n_jobs',
                 error_score=nan,
                 return_train_score=False
                )
    '''
    # Author: Raghav RV <rvraghav93@gmail.com>
    # License: BSD
    import numpy as np
    from matplotlib import pyplot as plt

    from sklearn.datasets import make_hastie_10_2
    from sklearn.model_selection import GridSearchCV
    from sklearn.metrics import make_scorer
    from sklearn.metrics import accuracy_score
    from sklearn.tree import DecisionTreeClassifier

    # a binary classification problem
    X, y = make_hastie_10_2(n_samples=8000, random_state=666)

    # The scorers can be either one of the predefined metric strings or a scorer
    # callable, like the one returned by make_scorer
    scoring = {"AUC": "roc_auc", "Accuracy": make_scorer(accuracy_score)}

    # Setting refit='AUC', refits an estimator on the whole dataset with the
    # parameter setting that has the best cross-validated AUC score.
    # That estimator is made available at ``gs.best_estimator_`` along with
    # parameters like ``gs.best_score_``, ``gs.best_params_`` and
    # ``gs.best_index_``
    gs = GridSearchCV(
        DecisionTreeClassifier(random_state=42),
        param_grid={"min_samples_split": range(2, 403, 10)},
        scoring=scoring,
        refit="AUC",
        return_train_score=True,
    )
    gs.fit(X, y)
    results = gs.cv_results_

    # plot the results
    plt.figure(figsize=(13, 13))
    plt.title("GridSearchCV evaluating using multiple scorers simultaneously", fontsize=16)

    plt.xlabel("min_samples_split")
    plt.ylabel("Score")

    # set up the axes
    ax = plt.gca()
    ax.set_xlim(0, 402)
    ax.set_ylim(0.73, 1)

    # Get the regular numpy array from the MaskedArray
    X_axis = np.array(results["param_min_samples_split"].data, dtype=float)
    for scorer, color in zip(sorted(scoring), ["g", "k"]):
        for sample, style in (("train", "--"), ("test", "-")):
            sample_score_mean = results["mean_%s_%s" % (sample, scorer)]
            sample_score_std = results["std_%s_%s" % (sample, scorer)]
            ax.fill_between(
                X_axis,
                sample_score_mean - sample_score_std,
                sample_score_mean + sample_score_std,
                alpha=0.1 if sample == "test" else 0,
                color=color,
            )
            ax.plot(
                X_axis,
                sample_score_mean,
                style,
                color=color,
                alpha=1 if sample == "test" else 0.7,
                label="%s (%s)" % (scorer, sample),
            )
        best_index = np.nonzero(results["rank_test_%s" % scorer] == 1)[0][0]
        best_score = results["mean_test_%s" % scorer][best_index]
        # Plot a dotted vertical line at the best score for that scorer marked by x
        ax.plot(
            [
                X_axis[best_index],
            ]
            * 2,
            [0, best_score],
            linestyle="-.",
            color=color,
            marker="x",
            markeredgewidth=3,
            ms=8,
        )
        # Annotate the best score for that scorer
        ax.annotate("%0.2f" % best_score, (X_axis[best_index], best_score + 0.005))
    plt.legend(loc="best")
    plt.grid(False)
    plt.show()

     

    2.4.2 RandomizedSearchCV: randomized parameter optimization

    Although grids of parameter settings are by far the most widely used method for parameter optimization, other search methods have more favourable properties. RandomizedSearchCV implements a randomized search over parameters, where each setting is sampled from a distribution over possible parameter values. Compared with an exhaustive search, this has two main benefits:

    • A budget can be chosen independently of the number of parameters and their possible values.

    • Adding parameters that do not influence performance does not decrease efficiency.

    How parameters should be sampled is specified with a dictionary, very much like specifying parameters for GridSearchCV. Additionally, the computation budget, i.e. the number of sampled candidates or sampling iterations, is set with the n_iter parameter. For each parameter you can give either a distribution over possible values or a list of discrete choices (which will be sampled uniformly).

    Tip: the scipy.stats module contains many useful distributions for sampling parameters, such as expon, gamma, uniform and randint.

    For an example, see Comparing randomized search and grid search for hyperparameter estimation, which compares the two approaches.

    # RandomizedSearchCV
    '''
    RandomizedSearchCV(estimator, 
                       param_distributions,  
                       n_iter=10, 
                       scoring=None, 
                       n_jobs=None, 
                       refit=True, 
                       cv=None, 
                       verbose=0, 
                       pre_dispatch='2*n_jobs', 
                       random_state=None, 
                       error_score=nan, 
                       return_train_score=False
    )
    '''
    import numpy as np

    from time import time
    import scipy.stats as stats
    # note: in recent scikit-learn versions, loguniform lives in scipy.stats instead
    from sklearn.utils.fixes import loguniform

    from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
    from sklearn.datasets import load_digits
    from sklearn.linear_model import SGDClassifier

    # get some data
    X, y = load_digits(return_X_y=True)

    # build a classifier
    clf = SGDClassifier(loss="hinge", penalty="elasticnet", fit_intercept=True)
    # Utility function to report best scores
    def report(results, n_top=3):
        for i in range(1, n_top + 1):
            candidates = np.flatnonzero(results["rank_test_score"] == i)
            for candidate in candidates:
                print("Model with rank: {0}".format(i))
                print(
                    "Mean validation score: {0:.3f} (std: {1:.3f})".format(
                        results["mean_test_score"][candidate],
                        results["std_test_score"][candidate],
                    )
                )
                print("Parameters: {0}".format(results["params"][candidate]))
                print("")
    # specify parameters and distributions to sample from
    param_dist = {
        "average": [True, False],
        "l1_ratio": stats.uniform(0, 1),
        "alpha": loguniform(1e-4, 1e0),
    }
    # run randomized search
    n_iter_search = 20
    random_search = RandomizedSearchCV(
        clf, param_distributions=param_dist, n_iter=n_iter_search
    )
    start = time()
    random_search.fit(X, y)
    print(
        "RandomizedSearchCV took %.2f seconds for %d candidates parameter settings."
        % ((time() - start), n_iter_search)
    )
    report(random_search.cv_results_)
    # use a full grid over all parameters
    param_grid = {
        "average": [True, False],
        "l1_ratio": np.linspace(0, 1, num=10),
        "alpha": np.power(10, np.arange(-4, 1, dtype=float)),
    }
    # run grid search
    grid_search = GridSearchCV(clf, param_grid=param_grid)
    start = time()
    grid_search.fit(X, y)
    print(
        "GridSearchCV took %.2f seconds for %d candidate parameter settings."
        % (time() - start, len(grid_search.cv_results_["params"]))
    )
    report(grid_search.cv_results_)

     

    2.4.3 HalvingGridSearchCV: successive halving

    The HalvingGridSearchCV and HalvingRandomSearchCV estimators are still experimental: their predictions and their API may change without any deprecation cycle. To use them, you must explicitly import enable_halving_search_cv.

    The example below compares HalvingGridSearchCV against a plain GridSearchCV:

    # 2.4.3 HalvingGridSearchCV (successive halving)
    '''
    HalvingGridSearchCV(
        estimator, 
        param_grid, 
        factor=3, 
        resource='n_samples', 
        max_resources='auto', 
        min_resources='exhaust', 
        aggressive_elimination=False, 
        cv=5, 
        scoring=None, 
        refit=True, 
        error_score=nan, 
        return_train_score=True, 
        random_state=None, 
        n_jobs=None, 
        verbose=0
    )
    '''
    from time import time
    import matplotlib.pyplot as plt
    import numpy as np
    import pandas as pd
    from sklearn.svm import SVC
    from sklearn import datasets
    from sklearn.model_selection import GridSearchCV
    from sklearn.experimental import enable_halving_search_cv  # noqa
    from sklearn.model_selection import HalvingGridSearchCV
    rng = np.random.RandomState(0)
    X, y = datasets.make_classification(n_samples=1000, random_state=rng)
    gammas = [1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-7]
    Cs = [1, 10, 100, 1e3, 1e4, 1e5]
    param_grid = {"gamma": gammas, "C": Cs}
    clf = SVC(random_state=rng)
    tic = time()
    gsh = HalvingGridSearchCV(
        estimator=clf, param_grid=param_grid, factor=2, random_state=rng
    )
    gsh.fit(X, y)
    gsh_time = time() - tic
    tic = time()
    gs = GridSearchCV(estimator=clf, param_grid=param_grid)
    gs.fit(X, y)
    gs_time = time() - tic
    def make_heatmap(ax, gs, is_sh=False, make_cbar=False):
        """Helper to make a heatmap."""
        results = pd.DataFrame.from_dict(gs.cv_results_)
        results["params_str"] = results.params.apply(str)
        if is_sh:
            # SH dataframe: get mean_test_score values for the highest iter
            scores_matrix = results.sort_values("iter").pivot_table(
                index="param_gamma",
                columns="param_C",
                values="mean_test_score",
                aggfunc="last",
            )
        else:
            scores_matrix = results.pivot(
                index="param_gamma", columns="param_C", values="mean_test_score"
            )
        im = ax.imshow(scores_matrix)
        ax.set_xticks(np.arange(len(Cs)))
        ax.set_xticklabels(["{:.0E}".format(x) for x in Cs])
        ax.set_xlabel("C", fontsize=15)
        ax.set_yticks(np.arange(len(gammas)))
        ax.set_yticklabels(["{:.0E}".format(x) for x in gammas])
        ax.set_ylabel("gamma", fontsize=15)
        # Rotate the tick labels and set their alignment.
        plt.setp(ax.get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor")
        if is_sh:
            iterations = results.pivot_table(
                index="param_gamma", columns="param_C", values="iter", aggfunc="max"
            ).values
            for i in range(len(gammas)):
                for j in range(len(Cs)):
                    ax.text(
                        j,
                        i,
                        iterations[i, j],
                        ha="center",
                        va="center",
                        color="w",
                        fontsize=20,
                    )
        if make_cbar:
            fig.subplots_adjust(right=0.8)
            cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
            fig.colorbar(im, cax=cbar_ax)
            cbar_ax.set_ylabel("mean_test_score", rotation=-90, va="bottom", fontsize=15)
    fig, axes = plt.subplots(ncols=2, sharey=True)
    ax1, ax2 = axes
    make_heatmap(ax1, gsh, is_sh=True)
    make_heatmap(ax2, gs, make_cbar=True)
    ax1.set_title("Successive Halving\ntime = {:.3f}s".format(gsh_time), fontsize=15)
    ax2.set_title("GridSearch\ntime = {:.3f}s".format(gs_time), fontsize=15)
    plt.show()

     

    2.4.4 HalvingRandomSearchCV: successive halving

    # HalvingRandomSearchCV (successive halving)
    import pandas as pd
    from sklearn import datasets
    import matplotlib.pyplot as plt
    from scipy.stats import randint
    import numpy as np
    from sklearn.experimental import enable_halving_search_cv  # noqa
    from sklearn.model_selection import HalvingRandomSearchCV
    from sklearn.ensemble import RandomForestClassifier
    rng = np.random.RandomState(0)
    X, y = datasets.make_classification(n_samples=700, random_state=rng)
    clf = RandomForestClassifier(n_estimators=20, random_state=rng)
    param_dist = {
        "max_depth": [3, None],
        "max_features": randint(1, 11),
        "min_samples_split": randint(2, 11),
        "bootstrap": [True, False],
        "criterion": ["gini", "entropy"],
    }
    rsh = HalvingRandomSearchCV(
        estimator=clf, param_distributions=param_dist, factor=2, random_state=rng
    )
    rsh.fit(X, y)
    results = pd.DataFrame(rsh.cv_results_)
    results["params_str"] = results.params.apply(str)
    results.drop_duplicates(subset=("params_str", "iter"), inplace=True)
    mean_scores = results.pivot(
        index="iter", columns="params_str", values="mean_test_score"
    )
    ax = mean_scores.plot(legend=False, alpha=0.6)
    labels = [
        f"iter={i}\nn_samples={rsh.n_resources_[i]}\nn_candidates={rsh.n_candidates_[i]}"
        for i in range(rsh.n_iterations_)
    ]
    ax.set_xticks(range(rsh.n_iterations_))
    ax.set_xticklabels(labels, rotation=45, multialignment="left")
    ax.set_title("Scores of candidates over iterations")
    ax.set_ylabel("mean test score", fontsize=15)
    ax.set_xlabel("iterations", fontsize=15)
    plt.tight_layout()
    plt.show()

     

    2.4.5 ParameterGrid

    A helper for generating all parameter combinations from a grid.

    # 2.4.5 ParameterGrid
    # ParameterGrid(param_grid)
    from sklearn.model_selection import ParameterGrid
    param_grid = {'a': [1, 2], 'b': [True, False]}
    print(list(ParameterGrid(param_grid)) == (
       [{'a': 1, 'b': True}, {'a': 1, 'b': False},
        {'a': 2, 'b': True}, {'a': 2, 'b': False}]))

    grid = [{'kernel': ['linear']}, {'kernel': ['rbf'], 'gamma': [1, 10]}]

    print(list(ParameterGrid(grid)) == [{'kernel': 'linear'},
                                  {'kernel': 'rbf', 'gamma': 1},
                                  {'kernel': 'rbf', 'gamma': 10}])

    print(ParameterGrid(grid)[1] == {'kernel': 'rbf', 'gamma': 1})
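    ParameterGrid is essentially what the search classes iterate over internally; a minimal sketch of a hand-rolled grid search built on it (the estimator and scoring are illustrative):

    # evaluate every combination in the grid by hand
    from sklearn.model_selection import ParameterGrid, cross_val_score
    from sklearn.svm import SVC
    from sklearn.datasets import load_iris

    X_demo, y_demo = load_iris(return_X_y=True)
    for params in ParameterGrid({'C': [0.1, 1], 'kernel': ['linear', 'rbf']}):
        score = cross_val_score(SVC(**params), X_demo, y_demo, cv=3).mean()
        print(params, round(score, 4))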

    2.4.6 ParameterSampler: parameter generator

     

    # 2.4.6 ParameterSampler
    '''
    ParameterSampler(param_distributions, n_iter, *, random_state=None)
    '''
    from sklearn.model_selection import ParameterSampler
    from scipy.stats.distributions import expon
    import numpy as np
    rng = np.random.RandomState(666)
    param_grid = {'a': [1, 2], 'b': expon()}
    param_list = list(ParameterSampler(param_grid, n_iter=4,
                                       random_state=rng))
    rounded_list = [dict((k, round(v, 6)) for (k, v) in d.items())
                    for d in param_list]

    print(param_list)
    print(rounded_list)

    2.X References

    1. [Python sklearn machine learning] An introduction to sklearn.model_selection

    2. Parameters of several methods in Python's sklearn.model_selection module

    2.x2 Appendix: the full API

    SPLITTER CLASSES
    model_selection.KFold K-fold cross-validator
    model_selection.GroupKFold K-fold iterator variant with non-overlapping groups
    model_selection.ShuffleSplit Random permutation cross-validator
    model_selection.GroupShuffleSplit Shuffle-Group(s)-Out cross-validation iterator
    model_selection.LeaveOneOut Leave-one-out cross-validator
    model_selection.LeaveOneGroupOut Leave-one-group-out cross-validator
    model_selection.LeavePOut Leave-P-out cross-validator
    model_selection.LeavePGroupsOut Leave-P-groups-out cross-validator
    model_selection.PredefinedSplit Predefined-split cross-validator
    model_selection.RepeatedKFold Repeated K-fold cross-validator
    model_selection.RepeatedStratifiedKFold Repeated stratified K-fold cross-validator
    model_selection.StratifiedKFold Stratified K-fold cross-validator
    model_selection.StratifiedShuffleSplit Stratified ShuffleSplit cross-validator
    model_selection.StratifiedGroupKFold Stratified K-fold iterator variant with non-overlapping groups
    model_selection.TimeSeriesSplit Time-series cross-validator

    Splitter Functions
    model_selection.check_cv Input-checker utility for building a cross-validator
    model_selection.train_test_split Split arrays or matrices into random train and test subsets

    Hyper-parameter optimizers
    model_selection.GridSearchCV Exhaustive search over specified parameter values for an estimator
    model_selection.HalvingGridSearchCV Search over specified parameter values with successive halving
    model_selection.ParameterGrid Grid of parameters with a discrete number of values for each
    model_selection.ParameterSampler Generator on parameters sampled from given distributions
    model_selection.RandomizedSearchCV Randomized search on hyper-parameters
    model_selection.HalvingRandomSearchCV Randomized search on hyper-parameters with successive halving

    Model validation
    model_selection.cross_validate Evaluate metric(s) by cross-validation and record fit/score times
    model_selection.cross_val_predict Generate cross-validated estimates for each input data point
    model_selection.cross_val_score Evaluate a score by cross-validation
    model_selection.learning_curve Learning curve
    model_selection.permutation_test_score Evaluate the significance of a cross-validated score with permutations
    model_selection.validation_curve Validation curve
    The deeper you dig, the more you realize how vast the world is and how shallow your understanding of it; always stay humble.