  • sklearn.model_selection Part 1: Splitter Classes

    1. GroupKFold(_BaseKFold)

    Main parameters:

    n_splits : int, default=3

    GroupKFold.split(X[, y, groups]) internally calls the following method:

    def _iter_test_indices(self, X, y, groups):
        if groups is None:
            raise ValueError("The 'groups' parameter should not be None.")
        groups = check_array(groups, ensure_2d=False, dtype=None)

        unique_groups, groups = np.unique(groups, return_inverse=True)  # unique_groups[groups] reconstructs the original groups array
        n_groups = len(unique_groups)

        if self.n_splits > n_groups:
            raise ValueError("Cannot have number of splits n_splits=%d greater"
                             " than the number of groups: %d."
                             % (self.n_splits, n_groups))

        # Weight groups by their number of occurrences
        n_samples_per_group = np.bincount(groups)  # number of samples in each group

        # Distribute the most frequent groups first
        indices = np.argsort(n_samples_per_group)[::-1]  # group indices sorted by sample count, largest first
        n_samples_per_group = n_samples_per_group[indices]  # counts in descending order; positions now refer to `indices`, not the original group ids

        # Total weight of each fold
        n_samples_per_fold = np.zeros(self.n_splits)

        # Mapping from group index to fold index
        group_to_fold = np.zeros(len(unique_groups))

        # Distribute samples by adding the largest weight to the lightest fold:
        # the "largest weight" is the biggest remaining group; the "lightest fold"
        # is the fold with the fewest samples assigned so far
        for group_index, weight in enumerate(n_samples_per_group):
            lightest_fold = np.argmin(n_samples_per_fold)
            n_samples_per_fold[lightest_fold] += weight
            group_to_fold[indices[group_index]] = lightest_fold  # a group goes entirely into one fold, while a fold may hold several groups -- hence n_groups must be >= n_splits

        indices = group_to_fold[groups]

        for f in range(self.n_splits):
            yield np.where(indices == f)[0]  # yield the test indices of fold f
    

    Summary

    GroupKFold has no randomness parameter: once the samples' groups are fixed, which samples fall into each fold is also fixed. In words, the algorithm is:

    1. Sort the groups by the number of samples they contain, from largest to smallest.
    2. Iterate over all groups in that order, one group at a time.
    3. Assign all samples of the current group to the fold that currently holds the fewest samples.

    Note when using it that the number of groups must be at least the number of folds, and that samples from the same group are always assigned to the same fold.
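
    A minimal usage sketch (the toy X, y, and groups below are invented for illustration):

    import numpy as np
    from sklearn.model_selection import GroupKFold

    X = np.arange(20).reshape(10, 2)                    # 10 samples, 2 features
    y = np.arange(10)
    groups = np.array([0, 0, 0, 1, 1, 2, 2, 2, 3, 3])  # 4 groups, so n_splits <= 4

    gkf = GroupKFold(n_splits=2)
    for fold, (train_idx, test_idx) in enumerate(gkf.split(X, y, groups=groups)):
        # every group falls entirely into either train or test, never both
        print(fold, groups[train_idx], groups[test_idx])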

    2. GroupShuffleSplit(ShuffleSplit)

    Main parameters:

    n_splits : int (default 5). The number of times the samples are split into a training set and a test set; note this is a number of repetitions, not a number of folds.

    train_size/test_size : the proportion or absolute number of samples in the training or test set

    random_state

    Inside GroupShuffleSplit.split(X[, y, groups]):

    def _iter_indices(self, X, y, groups):
        if groups is None:
            raise ValueError("The 'groups' parameter should not be None.")
        groups = check_array(groups, ensure_2d=False, dtype=None)
        classes, group_indices = np.unique(groups, return_inverse=True)
        for group_train, group_test in super(
                GroupShuffleSplit, self)._iter_indices(X=classes): # ShuffleSplit over the group ids: group_train holds the groups chosen for training, group_test likewise
            # these are the indices of classes in the partition
            # invert them into data indices

            # np.in1d returns a boolean array marking which elements of group_indices are in group_train
            # np.flatnonzero() returns the indices of the nonzero (True) elements, i.e. the sample indices selected for train or test
            train = np.flatnonzero(np.in1d(group_indices, group_train))
            test = np.flatnonzero(np.in1d(group_indices, group_test))

            yield train, test
    

    which in turn calls ShuffleSplit's method of the same name:

    # ShuffleSplit's method
    def _iter_indices(self, X, y=None, groups=None):
        n_samples = _num_samples(X)  # number of samples in X
        n_train, n_test = _validate_shuffle_split(n_samples,  # validate the arguments; returns the train and test sample counts
                                                  self.test_size,
                                                  self.train_size)
        rng = check_random_state(self.random_state)  # validate random_state; returns an np.random.RandomState instance
        for i in range(self.n_splits):
            # random partition
            permutation = rng.permutation(n_samples)  # random permutation of the sample indices
            ind_test = permutation[:n_test]
            ind_train = permutation[n_test:(n_test + n_train)]
            yield ind_train, ind_test  # yield the train and test indices of each split
    

    Summary

    GroupShuffleSplit does have the randomness parameter random_state, inherited from its parent class ShuffleSplit. The core of the algorithm is to run ShuffleSplit on the group ids (which is exactly why it subclasses ShuffleSplit); all samples whose group lands in the train partition form the training set, and the remaining samples form the test set. As with GroupKFold, the samples of a group appear either only in train or only in test, never in both.
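
    A minimal usage sketch (toy data invented for illustration):

    import numpy as np
    from sklearn.model_selection import GroupShuffleSplit

    X = np.arange(20).reshape(10, 2)
    groups = np.array([0, 0, 0, 1, 1, 2, 2, 2, 3, 3])

    gss = GroupShuffleSplit(n_splits=3, test_size=0.25, random_state=42)
    for train_idx, test_idx in gss.split(X, groups=groups):
        # about 25% of the *groups* (here: 1 of 4) land in test on each iteration
        print(np.unique(groups[train_idx]), np.unique(groups[test_idx]))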

    3. KFold(_BaseKFold)

    Note

    The default value of the shuffle parameter is False, but you will usually want to set it to True. random_state only takes effect when shuffle=True; in that case, if random_state is left unset the result differs on every run, and only by passing the same random_state value each time will the shuffles be identical. In short, shuffle decides whether randomness is introduced at all, and random_state merely makes that randomness reproducible.
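
    A small sketch of the interplay between the two parameters:

    import numpy as np
    from sklearn.model_selection import KFold

    X = np.arange(12).reshape(6, 2)

    # shuffle=True introduces randomness; a fixed random_state makes it
    # reproducible, so these folds are identical on every run
    kf = KFold(n_splits=3, shuffle=True, random_state=0)
    for train_idx, test_idx in kf.split(X):
        print(train_idx, test_idx)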

    4. LeaveOneGroupOut(BaseCrossValidator)

        def _iter_test_masks(self, X, y, groups):
            if groups is None:
                raise ValueError("The 'groups' parameter should not be None.")
            # We make a copy of groups to avoid side-effects during iteration
            groups = check_array(groups, copy=True, ensure_2d=False, dtype=None)
            unique_groups = np.unique(groups)
            if len(unique_groups) <= 1:
                raise ValueError(
                    "The groups parameter contains fewer than 2 unique groups "
                    "(%s). LeaveOneGroupOut expects at least 2." % unique_groups)
            for i in unique_groups:
                yield groups == i  # yield a boolean mask marking the test samples of this split
    

    Summary

    Leave-one-out was not very clear to me at first, but it is simply the extreme case of cross-validation: when the number of folds equals the number of samples, cross-validation becomes leave-one-out. With that understood, the source above is clear and concise. In one sentence: this algorithm performs leave-one-out at the level of groups.
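
    A minimal usage sketch (toy data invented for illustration):

    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut

    X = np.arange(12).reshape(6, 2)
    groups = np.array([0, 0, 1, 1, 2, 2])  # 3 groups -> 3 splits

    logo = LeaveOneGroupOut()
    for train_idx, test_idx in logo.split(X, groups=groups):
        # each split holds out exactly one whole group as the test set
        print(groups[train_idx], groups[test_idx])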

    5. LeavePGroupsOut(BaseCrossValidator)

    Simple once the previous class is understood: it holds out every combination of P groups as the test set, as in the sketch below.
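
    A minimal usage sketch (toy data invented for illustration):

    import numpy as np
    from sklearn.model_selection import LeavePGroupsOut

    X = np.arange(12).reshape(6, 2)
    groups = np.array([0, 0, 1, 1, 2, 2])

    lpgo = LeavePGroupsOut(n_groups=2)  # C(3, 2) = 3 splits
    for train_idx, test_idx in lpgo.split(X, groups=groups):
        print(groups[train_idx], groups[test_idx])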

    6. LeaveOneOut(BaseCrossValidator)

    7. LeavePOut(BaseCrossValidator)

    8. PredefinedSplit(BaseCrossValidator)

    def __init__(self, test_fold):
        self.test_fold = np.array(test_fold, dtype=int)
        self.test_fold = column_or_1d(self.test_fold)  # flatten a (n_samples, 1) array into shape (n_samples,)
        self.unique_folds = np.unique(self.test_fold)
        self.unique_folds = self.unique_folds[self.unique_folds != -1]  # samples marked -1 in test_fold always go to train, so the number of distinct non -1 values is the number of splits


    def split(self, X=None, y=None, groups=None):
        ind = np.arange(len(self.test_fold))
        for test_index in self._iter_test_masks():
            train_index = ind[np.logical_not(test_index)]
            test_index = ind[test_index]
            yield train_index, test_index


    def _iter_test_masks(self):
        """Generates boolean masks corresponding to test sets."""
        for f in self.unique_folds:
            test_index = np.where(self.test_fold == f)[0]
            test_mask = np.zeros(len(self.test_fold), dtype=bool)
            test_mask[test_index] = True
            yield test_mask
    

    The source above is straightforward; an example makes it concrete:

    PredefinedSplit takes a single parameter, test_fold, whose size must equal the size of the dataset. A value of -1 in test_fold means the corresponding sample always goes into the training set; samples sharing the same non-negative value are placed in the same test set. For example, test_fold = [1, 1, 1, -1, -1, -1, 2, 2, 2, 2] produces two splits: in the first, samples 4 through 10 (1-based indexing) are train and samples 1 through 3 are test; in the second, samples 1 through 6 are train and samples 7 through 10 are test.
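
    The same example in code:

    import numpy as np
    from sklearn.model_selection import PredefinedSplit

    test_fold = [1, 1, 1, -1, -1, -1, 2, 2, 2, 2]
    ps = PredefinedSplit(test_fold)

    print(ps.get_n_splits())  # 2
    for train_idx, test_idx in ps.split():
        print(train_idx, test_idx)
    # [3 4 5 6 7 8 9] [0 1 2]
    # [0 1 2 3 4 5] [6 7 8 9]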

    9. RepeatedKFold(_RepeatedSplits)

    Repeats KFold n_repeats times with different randomization; internally each KFold is constructed with shuffle=True.
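
    A minimal usage sketch:

    import numpy as np
    from sklearn.model_selection import RepeatedKFold

    X = np.arange(8).reshape(4, 2)

    rkf = RepeatedKFold(n_splits=2, n_repeats=2, random_state=0)
    # yields n_splits * n_repeats = 4 train/test pairs in total
    for train_idx, test_idx in rkf.split(X):
        print(train_idx, test_idx)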

    10. RepeatedStratifiedKFold(_RepeatedSplits)

    11. ShuffleSplit(BaseShuffleSplit)

    12. StratifiedKFold(_BaseKFold)

    Note

    In split(X, y[, groups]), y is a required parameter rather than an optional one, since the class proportions of y are what each fold must preserve.
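
    A minimal usage sketch:

    import numpy as np
    from sklearn.model_selection import StratifiedKFold

    X = np.arange(16).reshape(8, 2)
    y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

    skf = StratifiedKFold(n_splits=2, shuffle=True, random_state=0)
    for train_idx, test_idx in skf.split(X, y):  # y is mandatory here
        # each fold preserves the 50/50 class ratio of y
        print(y[train_idx], y[test_idx])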

    13. StratifiedShuffleSplit(BaseShuffleSplit)

    14. TimeSeriesSplit(_BaseKFold)

    def __init__(self, n_splits=3, max_train_size=None):
        super(TimeSeriesSplit, self).__init__(n_splits,
                                              shuffle=False,
                                              random_state=None)
        self.max_train_size = max_train_size  # maximum number of samples in the training set

    def split(self, X, y=None, groups=None):
        X, y, groups = indexable(X, y, groups)
        n_samples = _num_samples(X)
        n_splits = self.n_splits
        n_folds = n_splits + 1
        if n_folds > n_samples:  # the number of folds cannot exceed the number of samples, i.e. n_splits can be at most n_samples - 1
            raise ValueError(
                ("Cannot have number of folds ={0} greater"
                 " than the number of samples: {1}.").format(n_folds,
                                                             n_samples))
        indices = np.arange(n_samples)
        test_size = (n_samples // n_folds)
        test_starts = range(test_size + n_samples % n_folds,
                            n_samples, test_size)
        for test_start in test_starts:
            if self.max_train_size and self.max_train_size < test_start:
                yield (indices[test_start - self.max_train_size:test_start],
                       indices[test_start:test_start + test_size])
            else:
                yield (indices[:test_start],
                       indices[test_start:test_start + test_size])
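
    There is no randomness here: each split uses an expanding window of past samples for training (optionally capped by max_train_size) and the next contiguous block as the test set. A minimal sketch:

    import numpy as np
    from sklearn.model_selection import TimeSeriesSplit

    X = np.arange(12).reshape(6, 2)

    tscv = TimeSeriesSplit(n_splits=3)
    for train_idx, test_idx in tscv.split(X):
        # training indices always come strictly before the test indices
        print(train_idx, test_idx)
    # [0 1 2] [3]
    # [0 1 2 3] [4]
    # [0 1 2 3 4] [5]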
    

    Usage summary

    • KFold-style classes: n_splits is both the number of folds (which determines the train and test sizes) and the number of iterations.
    • ShuffleSplit-style classes: n_splits is the number of repetitions; the additional parameters train_size/test_size determine how the data is divided.
    • LeaveOneOut needs no parameters; LeavePOut needs the parameter p.
    # template
    splitter = Splitter(...)
    for i, (trn_idx, test_idx) in enumerate(splitter.split(X, y)):
        ...
    
  • Original article: https://www.cnblogs.com/ZeroTensor/p/10277119.html