  • scikit-learn package study notes 1

    dataset

    scikit-learn ships with several built-in datasets (much as R ships the iris dataset), which are convenient to use as training data. They tend to have relatively few features.
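
    For example, the iris dataset mentioned above loads the same way (a minimal sketch; the shapes noted in the comments are standard for this dataset):

    from sklearn import datasets

    # Load the iris dataset: 150 samples, 4 features, 3 classes
    iris = datasets.load_iris()
    print(iris.data.shape)    # (150, 4)
    print(iris.target_names)  # ['setosa' 'versicolor' 'virginica']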

    # Import necessary modules
    from sklearn import datasets
    import matplotlib.pyplot as plt
    
    # Load the digits dataset: digits
    digits = datasets.load_digits()
    
    # Print the keys and DESCR of the dataset
    print(digits.keys())
    # DESCR: a text description of the dataset
    print(digits.DESCR)
    
    # Print the shape of the images and data keys
    print(digits.images.shape)
    # shape: the dimensions of the array
    print(digits.data.shape)
    
    # Display digit 1010
    plt.imshow(digits.images[1010], cmap=plt.cm.gray_r, interpolation='nearest')
    plt.show()
    

    KNN (k-nearest neighbors)

    • A distance-based classifier (it handles binary and multi-class problems alike)
      A sample is assigned to whichever class it is closest to; at heart it is still a distance metric, and you are represented by your nearest neighbors. In KNN, if your neighbor is a fool, so are you (a sketch of this voting rule follows below).
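
    To make the rule concrete, here is a minimal from-scratch sketch of nearest-neighbor voting (plain NumPy, for intuition only; knn_predict is my own name, not a scikit-learn function):

    import numpy as np
    from collections import Counter

    def knn_predict(X_train, y_train, x, k=5):
        # Euclidean distance from x to every training sample
        dists = np.linalg.norm(X_train - x, axis=1)
        # indices of the k nearest training samples
        nearest = np.argsort(dists)[:k]
        # majority vote among the neighbors' labels
        return Counter(y_train[nearest]).most_common(1)[0][0]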

    There is a ready-made API you can call directly and then tune the hyperparameters.
    As I recall, plain KNN can have fairly large error; metaheuristic algorithms have been used to optimize it (I remember there are papers on this; worth checking the idea).

    KNeighborsClassifier(n_neighbors=5, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=None, **kwargs)
    

    What the main parameters mean

    (The English parameter descriptions below are quoted from the scikit-learn documentation; collected via jianshu.)

    • n_neighbors : int, optional (default = 5). Number of neighbors to use by default for kneighbors queries…
      The number of neighbors to use; pick a moderate k, since too small a value tends to overfit and too large a value tends to underfit (see the combined usage sketch after this parameter list).

    • weights : str or callable, optional (default = 'uniform'). Weight function used in prediction. Possible values:

      • 'uniform' : uniform weights. All points in each neighborhood are weighted equally.
      • 'distance' : weight points by the inverse of their distance. In this case, closer neighbors of a query point will have a greater influence
        than neighbors which are further away.
      • [callable] : a user-defined function which accepts an array of distances, and returns an array of the same shape containing the weights.
    • algorithm : {'auto', 'ball_tree', 'kd_tree', 'brute'}, optional. Algorithm used to compute the nearest neighbors:

      • 'ball_tree' will use BallTree
      • 'kd_tree' will use KDTree
      • 'brute' will use a brute-force search.
      • 'auto' will attempt to decide the most appropriate algorithm based on the values passed to the fit method.
        In other words, 'auto' picks the algorithm for you; it is not a user-defined option.
        Note: fitting on sparse input will override the setting of this parameter, using brute force.
    • leaf_size : int, optional (default = 30). Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.

    • p : integer, optional (default = 2). Power parameter for the Minkowski metric. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used.
      This chooses the distance measure; in practice the Euclidean distance (p = 2) is the most common.

    • metric : string or callable, default 'minkowski'. The distance metric to use for the tree. The default metric is minkowski, and with p=2 is equivalent to the standard Euclidean metric. See the documentation of the DistanceMetric class for a list of available metrics.

    • metric_params : dict, optional (default = None). Additional keyword arguments for the metric function.

    • n_jobs : int or None, optional (default=None). The number of parallel jobs to run for neighbors search. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Doesn't affect the fit method.
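
    As a quick illustration of how these parameters combine, here is a hedged sketch with non-default settings (the values are illustrative, not recommendations):

    from sklearn.neighbors import KNeighborsClassifier

    # 10 neighbors, weighted by inverse distance,
    # Manhattan distance (p=1 with the Minkowski metric),
    # and all CPU cores for the neighbor search
    knn = KNeighborsClassifier(n_neighbors=10, weights='distance',
                               p=1, n_jobs=-1)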

    Measuring model performance

    • Use prediction accuracy to measure how well the model fits.

    • When fitting a model, first split the samples into a training set and a test set; commonly about 75% of the samples are used for training and the rest for testing (the code below uses an 80/20 split).

    # Import necessary modules
    # (note to self: I originally wrote these imports incorrectly)
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split
    
    # Create feature and target arrays
    X = digits.data
    y = digits.target
    
    # Split into training and test set
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
    
    # Create a k-NN classifier with 7 neighbors: knn
    knn = KNeighborsClassifier(n_neighbors=7)
    
    # Fit the classifier to the training data
    knn.fit(X_train, y_train)
    
    # Print the accuracy
    print(knn.score(X_test, y_test))
    
    <script.py> output:
        0.9833333333333333
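
    Once fitted, the same object can also predict labels for new samples; a minimal follow-on sketch reusing knn, X_test, and y_test from the block above:

    # Predict labels for the held-out test samples
    y_pred = knn.predict(X_test)
    print(y_pred[:10])   # first ten predicted digits
    print(y_test[:10])   # the corresponding true labels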
    

    Overfitting and underfitting

    For underfitting and overfitting, see my earlier notes:
    https://www.cnblogs.com/gaowenxingxing/p/12234179.html
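
    A quick way to see both effects with KNN is to sweep k and compare training vs. test accuracy: very small k tends to overfit, very large k tends to underfit. A minimal sketch, reusing X_train/X_test/y_train/y_test from above:

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.neighbors import KNeighborsClassifier

    neighbors = np.arange(1, 12)
    train_acc = np.empty(len(neighbors))
    test_acc = np.empty(len(neighbors))

    for i, k in enumerate(neighbors):
        knn = KNeighborsClassifier(n_neighbors=k)
        knn.fit(X_train, y_train)
        train_acc[i] = knn.score(X_train, y_train)  # accuracy on the data it saw
        test_acc[i] = knn.score(X_test, y_test)     # generalization accuracy

    # Overfitting shows up as a gap: high training accuracy, lower test accuracy
    plt.plot(neighbors, train_acc, label='Training accuracy')
    plt.plot(neighbors, test_acc, label='Testing accuracy')
    plt.xlabel('Number of neighbors (k)')
    plt.ylabel('Accuracy')
    plt.legend()
    plt.show()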
