1. The kNN algorithm in scikit-learn
- In scikit-learn, every machine learning algorithm is wrapped in an object-oriented interface;
- Every machine learning algorithm in scikit-learn is used through the same four-step process: import, instantiate, fit, predict (a generic sketch of this pattern is shown right below);
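The four steps are not specific to kNN; the same pattern applies to any scikit-learn estimator. A minimal sketch using a different estimator, LinearRegression (the toy data here is illustrative and not part of the original notes):

```python
import numpy as np

# 1) import the estimator class
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0]])    # toy feature matrix (illustrative)
y = np.array([2.0, 4.0, 6.0])          # toy target vector (illustrative)

reg = LinearRegression()               # 2) instantiate
reg.fit(X, y)                          # 3) fit on the training data
reg.predict(np.array([[4.0]]))         # 4) predict; input is a 2D matrix of samples
```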
1) Using scikit-learn's kNN algorithm to solve a classification problem:
- Code walkthrough:
```python
import numpy as np
import matplotlib.pyplot as plt

raw_data_x = [[3.3935, 2.3312],
              [3.1101, 1.7815],
              [1.3438, 3.3684],
              [3.5823, 4.6792],
              [2.2804, 2.8670],
              [7.4234, 4.6965],
              [5.7451, 3.5340],
              [9.1722, 2.5111],
              [7.7928, 3.4241],
              [7.9398, 0.7916]]
raw_data_y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

X_train = np.array(raw_data_x)
y_train = np.array(raw_data_y)
x = np.array([8.0936, 3.3657]).reshape(1, -1)

# 1) Import
# the kNN classifier lives in the KNeighborsClassifier class
from sklearn.neighbors import KNeighborsClassifier

# 2) Instantiate
# create a KNeighborsClassifier instance; n_neighbors is the k in kNN
KNN_classifier = KNeighborsClassifier(n_neighbors=6)

# 3) Fit
# fitting returns the estimator itself, i.e. the trained model
# every scikit-learn estimator must be fitted before it can be used
# fit() takes the training set: the feature matrix X_train and the label vector y_train
KNN_classifier.fit(X_train, y_train)

# 4) Predict
# predict() returns an array; each entry is the predicted label of one sample
# the input must be a 2D matrix in which every row is one new sample
KNN_classifier.predict(x)
```
- Notes on the implementation above:
- every scikit-learn estimator must be fitted before it can be used;
- fit() takes the training set: the feature matrix X_train and the label vector y_train;
- the input to predict() must be a 2D matrix in which every row is one new sample, so a single sample has to be reshaped into a one-row matrix first (see the sketch below);
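A minimal sketch of that last point, reusing the KNN_classifier, X_train and y_train fitted above (the second test point is made up for illustration):

```python
import numpy as np

# a single sample starts out as a 1D array; predict() expects a 2D matrix,
# so reshape it into one row with as many columns as needed
x = np.array([8.0936, 3.3657])
KNN_classifier.predict(x.reshape(1, -1))        # one prediction, e.g. array([1])

# several new samples can be predicted in a single call, one row per sample
X_new = np.array([[8.0936, 3.3657],
                  [2.0000, 2.0000]])
KNN_classifier.predict(X_new)                   # one label per row
```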
2. Wrapping our own kNN implementation in the same style as scikit-learn
- The wrapped algorithm:
```python
import numpy as np
from math import sqrt
from collections import Counter


class KNNClassifier:

    def __init__(self, k):
        """Initialize the kNN classifier"""
        assert k >= 1, "k must be valid"
        self.k = k
        # the leading underscore marks these as private to the class:
        # other code should not manipulate them directly
        self._X_train = None
        self._y_train = None

    def fit(self, X_train, y_train):
        """Train the kNN classifier on the training set X_train, y_train"""
        assert X_train.shape[0] == y_train.shape[0], \
            "the size of X_train must be equal to the size of y_train"
        assert self.k <= X_train.shape[0], \
            "the size of X_train must be at least k."
        self._X_train = X_train
        self._y_train = y_train
        # to follow the scikit-learn convention, fit() returns the model itself,
        # which lets the wrapped algorithm combine cleanly with other scikit-learn tools
        return self

    def predict(self, X_predict):
        """Given a matrix of samples X_predict, return the vector of predicted labels"""
        assert self._X_train is not None and self._y_train is not None, \
            "must fit before predict!"
        assert X_predict.shape[1] == self._X_train.shape[1], \
            "the feature number of X_predict must be equal to X_train"
        y_predict = [self._predict(x) for x in X_predict]
        return np.array(y_predict)

    def _predict(self, x):
        """Given a single sample x, return its predicted label"""
        assert x.shape[0] == self._X_train.shape[1], \
            "the feature number of x must be equal to X_train"
        # Euclidean distance from x to every training sample
        distances = [sqrt(np.sum((x - x_train) ** 2)) for x_train in self._X_train]
        # indices of the training samples sorted by distance
        nearest = np.argsort(distances)
        # labels of the k nearest neighbours, then majority vote
        topK_y = [self._y_train[i] for i in nearest[:self.k]]
        votes = Counter(topK_y)
        return votes.most_common(1)[0][0]

    def __repr__(self):
        """Display name of the classifier"""
        return "KNN(k = %d)" % self.k
```
- Testing the algorithm: import, instantiate, fit, predict (the workflow is identical to using an algorithm from scikit-learn);
```python
import numpy as np
import matplotlib.pyplot as plt

raw_data_x = [[3.3935, 2.3312],
              [3.1101, 1.7815],
              [1.3438, 3.3684],
              [3.5823, 4.6792],
              [2.2804, 2.8670],
              [7.4234, 4.6965],
              [5.7451, 3.5340],
              [9.1722, 2.5111],
              [7.7928, 3.4241],
              [7.9398, 0.7916]]
raw_data_y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

X_train = np.array(raw_data_x)
y_train = np.array(raw_data_y)
x = np.array([8.0936, 3.3657]).reshape(1, -1)

# 1) load the kNN.py module that contains KNNClassifier
%run kNN.py

# 2) instantiate
knn_clf = KNNClassifier(k=6)

# 3) fit
knn_clf.fit(X_train, y_train)

# 4) predict (the argument must be a 2D matrix of samples)
y_predict = knn_clf.predict(x)
print(y_predict)
```
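Because the wrapper follows the scikit-learn interface, a quick sanity check is to fit scikit-learn's own KNeighborsClassifier on the same data with the same k and compare the two predictions (a sketch, reusing X_train, y_train and x from above):

```python
from sklearn.neighbors import KNeighborsClassifier

sk_clf = KNeighborsClassifier(n_neighbors=6)
sk_clf.fit(X_train, y_train)

# with identical data and k, the hand-written and library classifiers should agree
print(knn_clf.predict(x), sk_clf.predict(x))
```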
- scikit-learn's internal implementation is far more sophisticated than the brute-force search written above, because kNN does almost all of its work at prediction time and is therefore slow to predict (one of the algorithm's drawbacks); the library can build tree-based search structures to speed this up, as sketched below;
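A sketch of that point: KNeighborsClassifier takes an algorithm parameter that selects the internal neighbour-search structure (the parameter values are the ones documented by scikit-learn); X_train, y_train and x are reused from above:

```python
from sklearn.neighbors import KNeighborsClassifier

# 'brute'     - plain pairwise-distance search, like the hand-written version
# 'kd_tree'   - builds a KD-tree at fit time to speed up neighbour queries
# 'ball_tree' - builds a ball tree, useful in higher dimensions
# 'auto'      - the default; lets scikit-learn choose based on the data
clf = KNeighborsClassifier(n_neighbors=6, algorithm='kd_tree')
clf.fit(X_train, y_train)
clf.predict(x)
```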
- To run a .py file from inside a Jupyter Notebook, use %run followed by the file path, e.g. %run E:/pythonwj/ALG/matries.py