  • Spark Feature Extraction, Transformation, and Selection

    Spark(3) - Extracting, transforming, selecting features

    Official documentation: https://spark.apache.org/docs/2.2.0/ml-features.html

    Overview

    This section covers algorithms for working with features, roughly grouped as follows:

    • Extraction: extracting features from raw data;
    • Transformation: scaling, converting, or modifying features;
    • Selection: selecting a subset from a larger set of features;
    • Locality Sensitive Hashing (LSH): this class of algorithms combines aspects of feature transformation with other algorithms (the core purpose of LSH is approximate nearest-neighbor search, i.e., similarity search, over massive high-dimensional data: it maps highly similar items to the same hash value with high probability and maps dissimilar items to the same hash value with only a very low probability; the functions that accomplish this are called LSH functions);

    Contents:

    • Feature Extraction:
      • TF-IDF
      • Word2Vec
      • CountVectorizer
    • Feature Transformation:
      • Tokenizer
      • StopWordsRemover
      • n-gram
      • Binarizer
      • PCA
      • PolynomialExpansion
      • Discrete Cosine Transform
      • StringIndexer
      • IndexToString
      • OneHotEncoder
      • VectorIndexer
      • Interaction
      • Normalizer
      • StandardScaler
      • MinMaxScaler
      • MaxAbsScaler
      • Bucketizer
      • ElementwiseProduct
      • SQLTransformer
      • VectorAssembler
      • QuantileDiscretizer
      • Imputer
    • Feature Selection:
      • VectorSlicer
      • RFormula
      • ChiSqSelector
    • Locality Sensitive Hashing:
      • LSH Operations:
        • Feature Transformation
        • Approximate Similarity Join
        • Approximate Nearest Neighbor Search
      • LSH Algorithms:
        • Bucketed Random Projection for Euclidean Distance
        • MinHash for Jaccard Distance

    Feature Extraction

    TF-IDF

    TF-IDF (term frequency-inverse document frequency) is a feature vectorization method widely used in text mining to reflect the importance of a term to a document in the corpus;

    • TF: both HashingTF and CountVectorizer can be used to generate term frequency vectors;
    • IDF: IDF is an Estimator; fitting it on a dataset produces an IDFModel, which takes feature vectors and scales each column, down-weighting terms that appear frequently in the corpus (a sketch of the weighting follows);
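
    For reference, this is a hedged sketch of the smoothed IDF weighting as the MLlib documentation describes it, where |D| is the number of documents in the corpus and DF(t, D) is the number of documents containing term t; the TF-IDF score is then TF(t, d) multiplied by IDF(t, D):

    \[ IDF(t, D) = \log \frac{|D| + 1}{DF(t, D) + 1} \]
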
    from pyspark.ml.feature import HashingTF, IDF, Tokenizer
    
    sentenceData = spark.createDataFrame([
        (0.0, "Hi I heard about Spark"),
        (0.0, "I wish Java could use case classes"),
        (1.0, "Logistic regression models are neat")
    ], ["label", "sentence"])
    
    tokenizer = Tokenizer(inputCol="sentence", outputCol="words")
    wordsData = tokenizer.transform(sentenceData)
    
    hashingTF = HashingTF(inputCol="words", outputCol="rawFeatures", numFeatures=20)
    featurizedData = hashingTF.transform(wordsData)
    # alternatively, CountVectorizer can also be used to get term frequency vectors
    
    idf = IDF(inputCol="rawFeatures", outputCol="features")
    idfModel = idf.fit(featurizedData)
    rescaledData = idfModel.transform(featurizedData)
    
    rescaledData.select("label", "features").show()
    

    Word2Vec

    Word2Vec is an Estimator that takes sequences of words representing documents and trains a Word2VecModel. The model maps each word to a unique fixed-size vector, and the Word2VecModel transforms each document into a vector by averaging the vectors of all its words; this vector can then be used as a feature for prediction, document similarity computation, and so on;

    from pyspark.ml.feature import Word2Vec
    
    # Input data: Each row is a bag of words from a sentence or document.
    documentDF = spark.createDataFrame([
        ("Hi I heard about Spark".split(" "), ),
        ("I wish Java could use case classes".split(" "), ),
        ("Logistic regression models are neat".split(" "), )
    ], ["text"])
    
    # Learn a mapping from words to Vectors.
    word2Vec = Word2Vec(vectorSize=3, minCount=0, inputCol="text", outputCol="result")
    model = word2Vec.fit(documentDF)
    
    result = model.transform(documentDF)
    for row in result.collect():
        text, vector = row
        print("Text: [%s] => \nVector: %s\n" % (", ".join(text), str(vector)))
    

    CountVectorizer

    CountVectorizer and CountVectorizerModel aim to convert a collection of text documents into vectors of token counts. When an a-priori dictionary is not available, CountVectorizer can be used as an Estimator to extract the vocabulary and generate a CountVectorizerModel; the model produces sparse, vocabulary-based representations of the documents, which can then be passed to other algorithms such as LDA;

    During fitting, CountVectorizer selects the top vocabSize words ordered by term frequency across the corpus. The optional parameter minDF affects the fitting process by specifying the minimum number of documents a term must appear in to be included in the vocabulary. Another optional binary toggle parameter controls the output vectors: if set to True, all non-zero counts are set to 1, which is especially useful for discrete probabilistic models (a sketch using this toggle follows the code example below);

    Assume we have the following DataFrame with columns id and texts:

    id texts
    0 Array("a", "b", "c")
    1 Array("a", "b", "b", "c", "a")

    Each row in texts is a document represented as an array of strings. Calling fit on CountVectorizer produces a CountVectorizerModel with vocabulary (a, b, c); the output column "vector" after transformation looks like this:

    id texts vector
    0 Array("a", "b", "c") (3,[0,1,2],[1.0,1.0,1.0])
    1 Array("a", "b", "b", "c", "a") (3,[0,1,2],[2.0,2.0,1.0])
    from pyspark.ml.feature import CountVectorizer
    
    # Input data: Each row is a bag of words with a ID.
    df = spark.createDataFrame([
        (0, "a b c".split(" ")),
        (1, "a b b c a".split(" "))
    ], ["id", "words"])
    
    # fit a CountVectorizerModel from the corpus.
    cv = CountVectorizer(inputCol="words", outputCol="features", vocabSize=3, minDF=2.0)
    
    model = cv.fit(df)
    
    result = model.transform(df)
    result.show(truncate=False)
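
    As a hedged variation on the example above, the binary toggle mentioned earlier can be switched on via the binary parameter (reusing the df defined in the snippet; parameter values are illustrative):

    # Hedged sketch: with binary=True, all non-zero counts become 1.0,
    # which suits discrete probabilistic models (reuses `df` from above).
    binary_cv = CountVectorizer(inputCol="words", outputCol="features",
                                vocabSize=3, minDF=2.0, binary=True)
    binary_cv.fit(df).transform(df).show(truncate=False)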
    

    Feature Transformation

    Tokenizer

    Tokenization is the process of splitting text into individual terms (usually words); the simple Tokenizer class provides this functionality. The example below shows how to split sentences into sequences of words;

    RegexTokenizer allows more advanced tokenization based on regular expressions. By default, the parameter pattern is used as the delimiter to split the input text; alternatively, users can set the parameter gaps to false, indicating that pattern matches the tokens themselves rather than the delimiters;

    from pyspark.ml.feature import Tokenizer, RegexTokenizer
    from pyspark.sql.functions import col, udf
    from pyspark.sql.types import IntegerType
    
    sentenceDataFrame = spark.createDataFrame([
        (0, "Hi I heard about Spark"),
        (1, "I wish Java could use case classes"),
        (2, "Logistic,regression,models,are,neat")
    ], ["id", "sentence"])
    
    tokenizer = Tokenizer(inputCol="sentence", outputCol="words")
    
    regexTokenizer = RegexTokenizer(inputCol="sentence", outputCol="words", pattern="\\W")
    # alternatively, pattern="\\w+", gaps(False)
    
    countTokens = udf(lambda words: len(words), IntegerType())
    
    tokenized = tokenizer.transform(sentenceDataFrame)
    tokenized.select("sentence", "words") \
        .withColumn("tokens", countTokens(col("words"))).show(truncate=False)
    
    regexTokenized = regexTokenizer.transform(sentenceDataFrame)
    regexTokenized.select("sentence", "words") \
        .withColumn("tokens", countTokens(col("words"))).show(truncate=False)
    

    StopWordsRemover

    Stop words are words that should be excluded from the input, typically because they appear very frequently while carrying little meaning;

    StopWordsRemover takes a sequence of strings as input and drops all stop words from it. The list of stop words is specified by the stopWords parameter; default stop word lists for several languages are accessible via StopWordsRemover.loadDefaultStopWords(language) (unfortunately there is no Chinese list), and a sketch using loadDefaultStopWords follows the example below. The boolean parameter caseSensitive indicates whether matching should be case sensitive (false by default);

    Assume we have the following DataFrame with columns id and raw:

    id raw
    0 [I, saw, the, red, balloon]
    1 [Mary, had, a, little, lamb]

    Applying StopWordsRemover to the raw column, we get the filtered column:

    id raw filtered
    0 [I, saw, the, red, balloon] [saw, red, balloon]
    1 [Mary, had, a, little, lamb] [Mary, little, lamb]
    from pyspark.ml.feature import StopWordsRemover
    
    sentenceData = spark.createDataFrame([
        (0, ["I", "saw", "the", "red", "balloon"]),
        (1, ["Mary", "had", "a", "little", "lamb"])
    ], ["id", "raw"])
    
    remover = StopWordsRemover(inputCol="raw", outputCol="filtered")
    remover.transform(sentenceData).show(truncate=False)
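
    As a hedged follow-up, the default English list can be loaded with loadDefaultStopWords and extended with custom words through the stopWords parameter (the extra word added here is purely illustrative):

    # Hedged sketch: start from the built-in English stop words and append a custom word.
    english_stop_words = StopWordsRemover.loadDefaultStopWords("english")
    custom_remover = StopWordsRemover(inputCol="raw", outputCol="filtered",
                                      stopWords=english_stop_words + ["saw"],
                                      caseSensitive=False)
    custom_remover.transform(sentenceData).show(truncate=False)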
    

    n-gram

    An n-gram is a sequence of n tokens (typically words); the NGram class can be used to transform input features into n-grams;

    NGram takes as input a sequence of strings (e.g., the output of a Tokenizer); the parameter n determines the number of terms in each n-gram;

    from pyspark.ml.feature import NGram
    
    wordDataFrame = spark.createDataFrame([
        (0, ["Hi", "I", "heard", "about", "Spark"]),
        (1, ["I", "wish", "Java", "could", "use", "case", "classes"]),
        (2, ["Logistic", "regression", "models", "are", "neat"])
    ], ["id", "words"])
    
    ngram = NGram(n=2, inputCol="words", outputCol="ngrams")
    
    ngramDataFrame = ngram.transform(wordDataFrame)
    ngramDataFrame.select("ngrams").show(truncate=False)
    

    Binarizer

    Binarization is the process of thresholding numerical features to binary (0/1) features;

    Binarizer takes the common inputCol and outputCol parameters, as well as a threshold for binarization: feature values greater than the threshold are set to 1.0, otherwise 0.0. Both Vector and Double types are supported for inputCol;

    from pyspark.ml.feature import Binarizer
    
    continuousDataFrame = spark.createDataFrame([
        (0, 0.1),
        (1, 0.8),
        (2, 0.2)
    ], ["id", "feature"])
    
    binarizer = Binarizer(threshold=0.5, inputCol="feature", outputCol="binarized_feature")
    
    binarizedDataFrame = binarizer.transform(continuousDataFrame)
    
    print("Binarizer output with Threshold = %f" % binarizer.getThreshold())
    binarizedDataFrame.show()
    

    PCA

    PCA is a statistical procedure that uses an orthogonal transformation to convert a set of possibly correlated variables into a set of linearly uncorrelated variables called principal components. The PCA class trains a model that projects vectors into a lower-dimensional space; the example below shows how to project 5-dimensional feature vectors onto 3 principal components;

    from pyspark.ml.feature import PCA
    from pyspark.ml.linalg import Vectors
    
    data = [(Vectors.sparse(5, [(1, 1.0), (3, 7.0)]),),
            (Vectors.dense([2.0, 0.0, 3.0, 4.0, 5.0]),),
            (Vectors.dense([4.0, 0.0, 0.0, 6.0, 7.0]),)]
    df = spark.createDataFrame(data, ["features"])
    
    pca = PCA(k=3, inputCol="features", outputCol="pcaFeatures")
    model = pca.fit(df)
    
    result = model.transform(df).select("pcaFeatures")
    result.show(truncate=False)
    

    PolynomialExpansion

    Polynomial expansion is the process of expanding features into a polynomial space formed by an n-degree combination of the original dimensions; the PolynomialExpansion class provides this functionality. The example below shows how to expand features into a 3-degree polynomial space;

    from pyspark.ml.feature import PolynomialExpansion
    from pyspark.ml.linalg import Vectors
    
    df = spark.createDataFrame([
        (Vectors.dense([2.0, 1.0]),),
        (Vectors.dense([0.0, 0.0]),),
        (Vectors.dense([3.0, -1.0]),)
    ], ["features"])
    
    polyExpansion = PolynomialExpansion(degree=3, inputCol="features", outputCol="polyFeatures")
    polyDF = polyExpansion.transform(df)
    
    polyDF.show(truncate=False)
    

    Discrete Cosine Transform

    The Discrete Cosine Transform converts a length-N real-valued sequence in the time domain into another length-N real-valued sequence in the frequency domain; the DCT class provides this functionality;

    from pyspark.ml.feature import DCT
    from pyspark.ml.linalg import Vectors
    
    df = spark.createDataFrame([
        (Vectors.dense([0.0, 1.0, -2.0, 3.0]),),
        (Vectors.dense([-1.0, 2.0, 4.0, -7.0]),),
        (Vectors.dense([14.0, -2.0, -5.0, 1.0]),)], ["features"])
    
    dct = DCT(inverse=False, inputCol="features", outputCol="featuresDCT")
    
    dctDf = dct.transform(df)
    
    dctDf.select("featuresDCT").show(truncate=False)
    

    StringIndexer

    StringIndexer encodes a column of string labels into a column of label indices, in effect a one-to-one mapping between strings and numbers. The mapping is ordered by label frequency, so the most frequent label gets index 0. Unseen labels are assigned an index as well if the user chooses to keep them, and if the input column is numeric, it is cast to string before being indexed;

    Assume we have the following DataFrame with columns id and category:

    id category
    0 a
    1 b
    2 c
    3 a
    4 a
    5 c

    category is a string column with three labels: 'a', 'b', 'c'. Applying StringIndexer to category produces categoryIndex:

    id category categoryIndex
    0 a 0.0
    1 b 2.0
    2 c 1.0
    3 a 0.0
    4 a 0.0
    5 c 1.0

    'a' is mapped to 0 because it is the most frequent label, followed by 'c' with index 1 and 'b' with index 2;

    In addition, there are three strategies for handling unseen labels (a handleInvalid sketch follows the code example below):

    • throw an exception (this is the default);
    • skip the rows containing unseen labels;
    • put unseen labels into a special additional bucket, assigning them an extra index;

    Going back to the previous example, but this time applying the StringIndexer fitted above to the following DataFrame, note that 'd' and 'e' are unseen labels:

    id category
    0 a
    1 b
    2 c
    3 d
    4 e

    If you have not set how StringIndexer handles unseen labels, or set it to 'error', an exception is thrown. If it is set to 'skip', we get the following result:

    id category categoryIndex
    0 a 0.0
    1 b 2.0
    2 c 1.0

    Note that the rows containing 'd' and 'e' have been skipped;

    If it is set to 'keep', we get the following result:

    id category categoryIndex
    0 a 0.0
    1 b 2.0
    2 c 1.0
    3 d 3.0
    4 e 3.0

    As shown, the unseen labels are all mapped to a single separate index, here 3.0;

    from pyspark.ml.feature import StringIndexer
    
    df = spark.createDataFrame(
        [(0, "a"), (1, "b"), (2, "c"), (3, "a"), (4, "a"), (5, "c")],
        ["id", "category"])
    
    indexer = StringIndexer(inputCol="category", outputCol="categoryIndex")
    indexed = indexer.fit(df).transform(df)
    indexed.show()
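
    A hedged sketch of the 'keep' strategy described above, assuming a second DataFrame df2 that contains the unseen labels 'd' and 'e':

    # Hedged sketch: fit on the original df, then transform data containing unseen labels.
    df2 = spark.createDataFrame(
        [(0, "a"), (1, "b"), (2, "c"), (3, "d"), (4, "e")],
        ["id", "category"])

    keep_indexer = StringIndexer(inputCol="category", outputCol="categoryIndex",
                                 handleInvalid="keep")
    keep_indexer.fit(df).transform(df2).show()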
    

    IndexToString

    IndexToString can simply be seen as the reverse of StringIndexer, and it is typically used together with StringIndexer;

    Building on the StringIndexer example, assume we have the following DataFrame with columns id and categoryIndex, where categoryIndex was produced by StringIndexer:

    id categoryIndex
    0 0.0
    1 2.0
    2 1.0
    3 0.0
    4 0.0
    5 1.0

    Applying IndexToString to categoryIndex with output column originalCategory, we can retrieve our original labels (they are inferred from the column's metadata):

    id categoryIndex originalCategory
    0 0.0 a
    1 2.0 b
    2 1.0 c
    3 0.0 a
    4 0.0 a
    5 1.0 c
    from pyspark.ml.feature import IndexToString, StringIndexer
    
    df = spark.createDataFrame(
        [(0, "a"), (1, "b"), (2, "c"), (3, "a"), (4, "a"), (5, "c")],
        ["id", "category"])
    
    indexer = StringIndexer(inputCol="category", outputCol="categoryIndex")
    model = indexer.fit(df)
    indexed = model.transform(df)
    
    print("Transformed string column '%s' to indexed column '%s'"
          % (indexer.getInputCol(), indexer.getOutputCol()))
    indexed.show()
    
    print("StringIndexer will store labels in output column metadata\n")
    
    converter = IndexToString(inputCol="categoryIndex", outputCol="originalCategory")
    converted = converter.transform(indexed)
    
    print("Transformed indexed column '%s' back to original string column '%s' using "
          "labels in metadata" % (converter.getInputCol(), converter.getOutputCol()))
    converted.select("id", "categoryIndex", "originalCategory").show()
    

    OneHotEncoder

    One-hot encoding maps a column of label indices to a column of binary vectors with at most a single one-value. This encoding allows algorithms that expect numerical features, such as logistic regression, to make use of categorical features;

    from pyspark.ml.feature import OneHotEncoder, StringIndexer
    
    df = spark.createDataFrame([
        (0, "a"),
        (1, "b"),
        (2, "c"),
        (3, "a"),
        (4, "a"),
        (5, "c")
    ], ["id", "category"])
    
    stringIndexer = StringIndexer(inputCol="category", outputCol="categoryIndex")
    model = stringIndexer.fit(df)
    indexed = model.transform(df)
    
    encoder = OneHotEncoder(inputCol="categoryIndex", outputCol="categoryVec")
    encoded = encoder.transform(indexed)
    encoded.show()
    

    VectorIndexer

    VectorIndexer helps index categorical features in datasets of Vectors. It can automatically decide which features are categorical and map them to category indices, as follows:

    • take an input column of type Vector and a parameter maxCategories;
    • decide which features should be treated as categorical based on the number of distinct values: features with at most maxCategories distinct values are declared categorical;
    • compute 0-based category indices for each categorical feature;
    • index the categorical features and transform the original feature values into indices;

    In the example below, we read in a labeled dataset and process it with VectorIndexer, transforming the categorical features into their indices; the transformed feature data can then be fed directly into algorithms such as DecisionTreeRegressor:

    from pyspark.ml.feature import VectorIndexer
    
    data = spark.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")
    
    indexer = VectorIndexer(inputCol="features", outputCol="indexed", maxCategories=10)
    indexerModel = indexer.fit(data)
    
    categoricalFeatures = indexerModel.categoryMaps
    print("Chose %d categorical features: %s" %
          (len(categoricalFeatures), ", ".join(str(k) for k in categoricalFeatures.keys())))
    
    # Create new column "indexed" with categorical values transformed to indices
    indexedData = indexerModel.transform(data)
    indexedData.show()
    

    Interaction

    Interaction is a Transformer that takes vector or double-valued columns and outputs a single vector column containing the product of every combination of one value from each input column;

    For example, if you have two 3-dimensional vector columns, you get a 9-dimensional vector (the 3*3 combinations) as the output column;

    Assume we have the following DataFrame with columns vec1 and vec2:

    id1 vec1 vec2
    1 [1.0,2.0,3.0] [8.0,4.0,5.0]
    2 [4.0,3.0,8.0] [7.0,9.0,8.0]
    3 [6.0,1.0,9.0] [2.0,3.0,6.0]
    4 [10.0,8.0,6.0] [9.0,4.0,5.0]
    5 [9.0,2.0,7.0] [10.0,7.0,3.0]
    6 [1.0,1.0,4.0] [2.0,8.0,4.0]

    Applying Interaction to vec1 and vec2 produces interactedCol as the output column:

    id1 vec1 vec2 interactedCol
    1 [1.0,2.0,3.0] [8.0,4.0,5.0] [8.0,4.0,5.0,16.0,8.0,10.0,24.0,12.0,15.0]
    2 [4.0,3.0,8.0] [7.0,9.0,8.0] [56.0,72.0,64.0,42.0,54.0,48.0,112.0,144.0,128.0]
    3 [6.0,1.0,9.0] [2.0,3.0,6.0] [36.0,54.0,108.0,6.0,9.0,18.0,54.0,81.0,162.0]
    4 [10.0,8.0,6.0] [9.0,4.0,5.0] [360.0,160.0,200.0,288.0,128.0,160.0,216.0,96.0,120.0]
    5 [9.0,2.0,7.0] [10.0,7.0,3.0] [450.0,315.0,135.0,100.0,70.0,30.0,350.0,245.0,105.0]
    6 [1.0,1.0,4.0] [2.0,8.0,4.0] [12.0,48.0,24.0,12.0,48.0,24.0,48.0,192.0,96.0]
    import org.apache.spark.ml.feature.Interaction
    import org.apache.spark.ml.feature.VectorAssembler
    
    val df = spark.createDataFrame(Seq(
      (1, 1, 2, 3, 8, 4, 5),
      (2, 4, 3, 8, 7, 9, 8),
      (3, 6, 1, 9, 2, 3, 6),
      (4, 10, 8, 6, 9, 4, 5),
      (5, 9, 2, 7, 10, 7, 3),
      (6, 1, 1, 4, 2, 8, 4)
    )).toDF("id1", "id2", "id3", "id4", "id5", "id6", "id7")
    
    val assembler1 = new VectorAssembler().
      setInputCols(Array("id2", "id3", "id4")).
      setOutputCol("vec1")
    
    val assembled1 = assembler1.transform(df)
    
    val assembler2 = new VectorAssembler().
      setInputCols(Array("id5", "id6", "id7")).
      setOutputCol("vec2")
    
    val assembled2 = assembler2.transform(assembled1).select("id1", "vec1", "vec2")
    
    val interaction = new Interaction()
      .setInputCols(Array("id1", "vec1", "vec2"))
      .setOutputCol("interactedCol")
    
    val interacted = interaction.transform(assembled2)
    
    interacted.show(truncate = false)
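
    The snippet above is the Scala example from the official documentation. A rough PySpark equivalent is sketched below; note this assumes a Spark release whose Python API includes Interaction, which was added after 2.2:

    from pyspark.ml.feature import Interaction, VectorAssembler

    df = spark.createDataFrame([
        (1, 1, 2, 3, 8, 4, 5),
        (2, 4, 3, 8, 7, 9, 8),
        (3, 6, 1, 9, 2, 3, 6),
        (4, 10, 8, 6, 9, 4, 5),
        (5, 9, 2, 7, 10, 7, 3),
        (6, 1, 1, 4, 2, 8, 4)
    ], ["id1", "id2", "id3", "id4", "id5", "id6", "id7"])

    # Assemble id2-id4 into vec1 and id5-id7 into vec2, then interact id1, vec1 and vec2.
    assembled = VectorAssembler(inputCols=["id2", "id3", "id4"], outputCol="vec1").transform(df)
    assembled = VectorAssembler(inputCols=["id5", "id6", "id7"], outputCol="vec2") \
        .transform(assembled).select("id1", "vec1", "vec2")

    interaction = Interaction(inputCols=["id1", "vec1", "vec2"], outputCol="interactedCol")
    interaction.transform(assembled).show(truncate=False)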
    

    Normalizer

    Normalizer is a Transformer that transforms a dataset of Vector rows, normalizing each Vector to have unit norm (the norm is specified by the parameter p, 2 by default). This normalization can help standardize the input data and improve the behavior of learning algorithms;

    from pyspark.ml.feature import Normalizer
    from pyspark.ml.linalg import Vectors
    
    dataFrame = spark.createDataFrame([
        (0, Vectors.dense([1.0, 0.5, -1.0]),),
        (1, Vectors.dense([2.0, 1.0, 1.0]),),
        (2, Vectors.dense([4.0, 10.0, 2.0]),)
    ], ["id", "features"])
    
    # Normalize each Vector using $L^1$ norm.
    normalizer = Normalizer(inputCol="features", outputCol="normFeatures", p=1.0)
    l1NormData = normalizer.transform(dataFrame)
    print("Normalized using L^1 norm")
    l1NormData.show()
    
    # Normalize each Vector using $L^infty$ norm.
    lInfNormData = normalizer.transform(dataFrame, {normalizer.p: float("inf")})
    print("Normalized using L^inf norm")
    lInfNormData.show()
    

    StandardScaler

    StandardScaler transforms a dataset of Vector rows, normalizing each feature to have unit standard deviation and/or zero mean. It takes the following parameters:

    • withStd: true by default; scales the data to unit standard deviation;
    • withMean: false by default; centers the data with the mean before scaling, which produces dense output, so take extra care when applying it to sparse input;

    StandardScaler is an Estimator that can be fit on a dataset to produce a StandardScalerModel (this amounts to computing summary statistics); the model can then transform a Vector column in a dataset so that the features have unit standard deviation and/or zero mean;

    Note: if the standard deviation of a feature is zero, the default value returned for that feature is 0.0;
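
    For reference, a hedged sketch of the scaling applied when both options are enabled, where \mu_j and \sigma_j are the mean and standard deviation of feature j computed during fit (with withMean=False the mean term is simply dropped):

    \[ z_{ij} = \frac{x_{ij} - \mu_j}{\sigma_j} \]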

    from pyspark.ml.feature import StandardScaler
    
    dataFrame = spark.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")
    scaler = StandardScaler(inputCol="features", outputCol="scaledFeatures",
                            withStd=True, withMean=False)
    
    # Compute summary statistics by fitting the StandardScaler
    scalerModel = scaler.fit(dataFrame)
    
    # Normalize each feature to have unit standard deviation.
    scaledData = scalerModel.transform(dataFrame)
    scaledData.show()
    

    MinMaxScaler

    MinMaxScaler transforms a dataset of Vector rows, rescaling each feature to a specific range, [0, 1] by default. It takes the following parameters:

    • min: 0.0 by default; the lower bound of the target range;
    • max: 1.0 by default; the upper bound of the target range;

    MinMaxScaler computes summary statistics on a dataset and produces a MinMaxScalerModel; the model can then transform each feature individually so that it lies in the given range;

    Features are rescaled as follows:

    \[ Rescaled(e_i) = \frac{e_i - E_{min}}{E_{max} - E_{min}} \times (max - min) + min \]

    Note: since zero values may be transformed into non-zero values, the output of the transformer is dense vectors even for sparse input;

    from pyspark.ml.feature import MinMaxScaler
    from pyspark.ml.linalg import Vectors
    
    dataFrame = spark.createDataFrame([
        (0, Vectors.dense([1.0, 0.1, -1.0]),),
        (1, Vectors.dense([2.0, 1.1, 1.0]),),
        (2, Vectors.dense([3.0, 10.1, 3.0]),)
    ], ["id", "features"])
    
    scaler = MinMaxScaler(inputCol="features", outputCol="scaledFeatures")
    
    # Compute summary statistics and generate MinMaxScalerModel
    scalerModel = scaler.fit(dataFrame)
    
    # rescale each feature to range [min, max].
    scaledData = scalerModel.transform(dataFrame)
    print("Features scaled to range: [%f, %f]" % (scaler.getMin(), scaler.getMax()))
    scaledData.select("features", "scaledFeatures").show()
    

    MaxAbsScaler

    MaxAbsScaler transforms a dataset of Vector rows, rescaling each feature to the range [-1, 1] by dividing by the maximum absolute value of that feature. It does not shift or center the data (the distribution is unchanged), and therefore does not destroy any sparsity;

    MaxAbsScaler computes summary statistics and produces a MaxAbsScalerModel; the model can then transform each feature to the range [-1, 1];
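
    For reference, a hedged sketch of the rescaling, where the denominator is the maximum absolute value of feature j observed during fit:

    \[ Rescaled(x_{ij}) = \frac{x_{ij}}{\max_i |x_{ij}|} \]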

    from pyspark.ml.feature import MaxAbsScaler
    from pyspark.ml.linalg import Vectors
    
    dataFrame = spark.createDataFrame([
        (0, Vectors.dense([1.0, 0.1, -8.0]),),
        (1, Vectors.dense([2.0, 1.0, -4.0]),),
        (2, Vectors.dense([4.0, 10.0, 8.0]),)
    ], ["id", "features"])
    
    scaler = MaxAbsScaler(inputCol="features", outputCol="scaledFeatures")
    
    # Compute summary statistics and generate MaxAbsScalerModel
    scalerModel = scaler.fit(dataFrame)
    
    # rescale each feature to range [-1, 1].
    scaledData = scalerModel.transform(dataFrame)
    
    scaledData.select("features", "scaledFeatures").show()
    

    Bucketizer

    Bucketizer performs binning: it transforms a column of continuous features into a column of feature buckets, where the buckets are specified by the user. It takes the parameter:

    • splits: the mapping from continuous values to buckets; n+1 split points define n buckets. A bucket defined by splits x, y holds values in the range [x, y), i.e., including x, except for the last bucket, which also includes y. The splits must be strictly increasing, and values outside the provided splits are treated as errors, so -Infinity and +Infinity must be provided explicitly to cover all values;

    Note: if you do not know the upper and lower bounds of the target column, you should add -Infinity and +Infinity as the bounds of your first and last buckets;

    Note: the provided splits must be in strictly increasing order: s0 < s1 < s2 < ... < sn;

    from pyspark.ml.feature import Bucketizer
    
    splits = [-float("inf"), -0.5, 0.0, 0.5, float("inf")]
    
    data = [(-999.9,), (-0.5,), (-0.3,), (0.0,), (0.2,), (999.9,)]
    dataFrame = spark.createDataFrame(data, ["features"])
    
    bucketizer = Bucketizer(splits=splits, inputCol="features", outputCol="bucketedFeatures")
    
    # Transform original data into its bucket index.
    bucketedData = bucketizer.transform(dataFrame)
    
    print("Bucketizer output with %d buckets" % (len(bucketizer.getSplits())-1))
    bucketedData.show()
    

    ElementwiseProduct

    ElementwiseProduct multiplies each input vector by a provided "weight" vector using element-wise multiplication; in other words, it scales each column of the dataset by a scalar multiplier. The formula is:

    \[ \begin{pmatrix} v_1 \\ \vdots \\ v_N \end{pmatrix} \circ \begin{pmatrix} w_1 \\ \vdots \\ w_N \end{pmatrix} = \begin{pmatrix} v_1 w_1 \\ \vdots \\ v_N w_N \end{pmatrix} \]

    from pyspark.ml.feature import ElementwiseProduct
    from pyspark.ml.linalg import Vectors
    
    # Create some vector data; also works for sparse vectors
    data = [(Vectors.dense([1.0, 2.0, 3.0]),), (Vectors.dense([4.0, 5.0, 6.0]),)]
    df = spark.createDataFrame(data, ["vector"])
    transformer = ElementwiseProduct(scalingVec=Vectors.dense([0.0, 1.0, 2.0]),
                                     inputCol="vector", outputCol="transformedVector")
    # Batch transform the vectors to create new column:
    transformer.transform(df).show()
    

    SQLTransformer

    SQLTransformer implements transformations defined by SQL statements; currently only syntax like "SELECT ... FROM __THIS__ ..." is supported, where "__THIS__" represents the underlying table of the input dataset. Users can also use Spark SQL built-in functions and UDFs to operate on the selected columns. For example, SQLTransformer supports statements like:

    • SELECT a, a+b AS a_b FROM __THIS__
    • SELECT a, SQRT(B) AS b_sqrt FROM __THIS__ WHERE a > 5
    • SELECT a, b, SUM(c) AS c_sum FROM __THIS__ GROUP BY a, b

    Assume we have the following DataFrame:

    id v1 v2
    0 1.0 3.0
    2 2.0 5.0

    Applying the statement "SELECT *, (v1 + v2) AS v3, (v1 * v2) AS v4 FROM __THIS__" gives:

    id v1 v2 v3 v4
    0 1.0 3.0 4.0 3.0
    2 2.0 5.0 7.0 10.0
    from pyspark.ml.feature import SQLTransformer
    
    df = spark.createDataFrame([
        (0, 1.0, 3.0),
        (2, 2.0, 5.0)
    ], ["id", "v1", "v2"])
    sqlTrans = SQLTransformer(
        statement="SELECT *, (v1 + v2) AS v3, (v1 * v2) AS v4 FROM __THIS__")
    sqlTrans.transform(df).show()
    

    VectorAssembler

    VectorAssembler is a Transformer that combines N columns into a single vector column. It is useful for combining raw features and features produced by other transformers; for model training, the various numeric, boolean, and vector features usually need to be assembled into one vector column with VectorAssembler before being fed to the model;

    Assume we have the following data:

    id hour mobile userFeatures clicked
    0 18 1.0 [0.0, 10.0, 0.5] 1.0

    The data contains integer, double, and Vector columns, and id and clicked should not be assembled. Applying VectorAssembler gives:

    id hour mobile userFeatures clicked features
    0 18 1.0 [0.0, 10.0, 0.5] 1.0 [18.0, 1.0, 0.0, 10.0, 0.5]

    Note that the Vector column among the original features is flattened into the output, which is very convenient;

    from pyspark.ml.linalg import Vectors
    from pyspark.ml.feature import VectorAssembler
    
    dataset = spark.createDataFrame(
        [(0, 18, 1.0, Vectors.dense([0.0, 10.0, 0.5]), 1.0)],
        ["id", "hour", "mobile", "userFeatures", "clicked"])
    
    assembler = VectorAssembler(
        inputCols=["hour", "mobile", "userFeatures"],
        outputCol="features")
    
    output = assembler.transform(dataset)
    print("Assembled columns 'hour', 'mobile', 'userFeatures' to vector column 'features'")
    output.select("features", "clicked").show(truncate=False)
    

    QuantileDiscretizer

    QuantileDiscretizer takes a column of continuous features and outputs a column of binned categorical features (the category index corresponds to the quantile bucket). The number of buckets is set with numBuckets, e.g., 100 for percentiles; the resulting number of buckets can be smaller than this value if there are too few distinct values in the input to create enough quantiles;

    NaN values: NaN values are removed from the column during QuantileDiscretizer fitting, which produces a Bucketizer model for making predictions. During transformation, Bucketizer raises an error when it finds NaN values in the dataset, but the user can choose to keep or remove NaN values by setting the handleInvalid parameter (a sketch follows the example below); if the user chooses to keep them, NaN values are put into a special additional bucket;

    Algorithm: the bin ranges are chosen using an approximate algorithm whose precision can be controlled with the relativeError parameter; setting it to 0 computes exact quantiles (note that computing exact quantiles is very expensive). The lower and upper bin bounds are -Infinity and +Infinity, covering all real values;

    Assume we have the following DataFrame:

    id hour
    0 18.0
    1 19.0
    2 8.0
    3 5.0
    4 2.2

    hour is a continuous Double column that we want to turn into a categorical one. Setting numBuckets to 3, i.e., three buckets, we get the following DataFrame:

    id hour result
    0 18.0 2.0
    1 19.0 2.0
    2 8.0 1.0
    3 5.0 1.0
    4 2.2 0.0
    from pyspark.ml.feature import QuantileDiscretizer
    
    data = [(0, 18.0), (1, 19.0), (2, 8.0), (3, 5.0), (4, 2.2)]
    df = spark.createDataFrame(data, ["id", "hour"])
    
    discretizer = QuantileDiscretizer(numBuckets=3, inputCol="hour", outputCol="result")
    
    result = discretizer.fit(df).transform(df)
    result.show()
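
    A hedged sketch of the handleInvalid behavior described above, assuming a variant of the data that contains a NaN value; with 'keep', NaN rows land in a special extra bucket:

    # Hedged sketch: keep NaN values in an extra bucket instead of raising an error.
    nan_df = spark.createDataFrame(
        [(0, 18.0), (1, 19.0), (2, 8.0), (3, 5.0), (4, float("nan"))], ["id", "hour"])

    keep_discretizer = QuantileDiscretizer(numBuckets=3, inputCol="hour", outputCol="result",
                                           handleInvalid="keep")
    keep_discretizer.fit(nan_df).transform(nan_df).show()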
    

    Imputer

    Imputer fills missing values in a dataset, using the mean or the median (a median variant is sketched after the example below) of the columns in which the missing values are located. The input columns should be of Float or Double type; currently Imputer does not support categorical features and may produce incorrect values for columns containing categorical features;

    Note: all null values in the input columns are also treated as missing, and are therefore imputed as well;

    Assume we have the following DataFrame:

    a b
    1.0 Double.NaN
    2.0 Double.NaN
    Double.NaN 3.0
    4.0 4.0
    5.0 5.0

    In this example, Imputer replaces all occurrences of Double.NaN with the mean of the corresponding column: 3.0 for column a and 4.0 for column b. After transformation, the NaN values in a and b are replaced by 3.0 and 4.0 in the new output columns:

    a b out_a out_b
    1.0 Double.NaN 1.0 4.0
    2.0 Double.NaN 2.0 4.0
    Double.NaN 3.0 3.0 3.0
    4.0 4.0 4.0 4.0
    5.0 5.0 5.0 5.0
    from pyspark.ml.feature import Imputer
    
    df = spark.createDataFrame([
        (1.0, float("nan")),
        (2.0, float("nan")),
        (float("nan"), 3.0),
        (4.0, 4.0),
        (5.0, 5.0)
    ], ["a", "b"])
    
    imputer = Imputer(inputCols=["a", "b"], outputCols=["out_a", "out_b"])
    model = imputer.fit(df)
    
    model.transform(df).show()
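
    As a hedged variation on the example above, the strategy parameter selects the statistic used for imputation (mean by default; median shown here), reusing the df from the snippet:

    # Hedged sketch: impute missing values with the column median instead of the mean.
    median_imputer = Imputer(inputCols=["a", "b"], outputCols=["out_a", "out_b"],
                             strategy="median")
    median_imputer.fit(df).transform(df).show()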
    

    Feature Selection

    VectorSlicer

    VectorSlicer is a Transformer that takes a feature vector and outputs a new feature vector containing a sub-array of the original features; it is useful for extracting features from a vector column;

    VectorSlicer accepts a vector column with specified indices and outputs a new vector column whose values are selected via those indices. There are two ways to specify the indices:

    • integer indices, via the setIndices() method;
    • string names, via the setNames() method; this requires the vector column to have an AttributeGroup matching each Attribute to a name;

    Specification by integers and by strings are both acceptable, and the two can even be combined; at least one feature must be selected, and duplicate features are not allowed, so there is no overlap between the selected indices and names. Note that an exception is thrown if a name that does not exist is specified;

    The output vector orders the features selected by indices first, followed by those selected by names;

    Assume we have a DataFrame with the column userFeatures:

    userFeatures
    [0.0, 10.0, 0.5]

    userFeatures is a vector column containing three user features. Assume the first column of userFeatures is always zero, so we want to remove it and keep only the remaining two columns; setIndices(1, 2) gives:

    userFeatures features
    [0.0, 10.0, 0.5] [10.0, 0.5]

    Assuming the three columns of userFeatures are named ["f1", "f2", "f3"], we can also use setNames("f2", "f3") to achieve the same result:

    userFeatures features
    [0.0, 10.0, 0.5] [10.0, 0.5]
    ["f1", "f2", "f3"] ["f2", "f3"]
    from pyspark.ml.feature import VectorSlicer
    from pyspark.ml.linalg import Vectors
    from pyspark.sql.types import Row
    
    df = spark.createDataFrame([
        Row(userFeatures=Vectors.sparse(3, {0: -2.0, 1: 2.3})),
        Row(userFeatures=Vectors.dense([-2.0, 2.3, 0.0]))])
    
    slicer = VectorSlicer(inputCol="userFeatures", outputCol="features", indices=[1])
    
    output = slicer.transform(df)
    
    output.select("userFeatures", "features").show()
    

    RFormula

    RFormula selects columns specified by an R model formula. Currently we support a limited subset of R operators, including '~', '.', ':', '+', and '-':

    • ~ separates the target from the terms, like the equals sign in an equation;
    • + concatenates terms; '+ 0' removes the intercept;
    • - removes a term; '- 1' removes the intercept;
    • : interaction (multiplication for numeric values, or binarized categorical values);
    • . all columns except the target;

    Suppose a and b are two double columns; the following simple formulas illustrate what RFormula does:

    • y ~ a + b: the model y ~ w0 + w1*a + w2*b, where w0 is the intercept and w1 and w2 are coefficients;
    • y ~ a + b + a:b - 1: the model y ~ w1*a + w2*b + w3*a*b, where w1, w2, and w3 are coefficients;

    RFormula produces a vector column of features and a double or string column of labels, just like an R formula used in linear regression. String input columns are one-hot encoded and numeric columns are cast to doubles. If the label column is a string, it is first transformed to double with StringIndexer; if the label column does not exist in the DataFrame, the output label column is created from the response variable specified in the formula;

    Assume we have a DataFrame with columns id, country, hour, and clicked:

    id country hour clicked
    7 "US" 18 1.0
    8 "CA" 12 0.0
    9 "NZ" 15 0.0

    If we use RFormula with the formula "clicked ~ country + hour", meaning that we want to predict clicked based on country and hour, after transformation we get the following DataFrame:

    id country hour clicked features label
    7 "US" 18 1.0 [0.0, 0.0, 18.0] 1.0
    8 "CA" 12 0.0 [0.0, 1.0, 12.0] 0.0
    9 "NZ" 15 0.0 [1.0, 0.0, 15.0] 0.0
    from pyspark.ml.feature import RFormula
    
    dataset = spark.createDataFrame(
        [(7, "US", 18, 1.0),
         (8, "CA", 12, 0.0),
         (9, "NZ", 15, 0.0)],
        ["id", "country", "hour", "clicked"])
    
    formula = RFormula(
        formula="clicked ~ country + hour",
        featuresCol="features",
        labelCol="label")
    
    output = formula.fit(dataset).transform(dataset)
    output.select("features", "label").show()
    

    ChiSqSelector

    ChiSqSelector performs Chi-Squared feature selection. It operates on labeled data with categorical features, using the Chi-Squared test of independence to decide which features to select. It supports five selection methods:

    • numTopFeatures: selects the top N features according to the Chi-Squared test;
    • percentile: similar to numTopFeatures, but selects a fraction of all features instead of a fixed number;
    • fpr: selects all features whose p-value is below a threshold, thus controlling the false positive rate of selection;
    • fdr: selects the features whose false discovery rate is below a threshold;
    • fwe: selects all features whose p-values are below a threshold scaled by 1/numFeatures, thus controlling the family-wise error rate;

    The default method is numTopFeatures with N = 50 (a percentile variant is sketched after the example below);

    Assume we have a DataFrame with columns id, features, and clicked, where clicked is our prediction target:

    id features clicked
    7 [0.0, 0.0, 18.0, 1.0] 1.0
    8 [0.0, 1.0, 12.0, 0.0] 0.0
    9 [1.0, 0.0, 15.0, 0.1] 0.0

    Using ChiSqSelector with numTopFeatures = 1, the last column of features is selected as the most useful feature according to the clicked label:

    id features clicked selectedFeatures
    7 [0.0, 0.0, 18.0, 1.0] 1.0 [1.0]
    8 [0.0, 1.0, 12.0, 0.0] 0.0 [0.0]
    9 [1.0, 0.0, 15.0, 0.1] 0.0 [0.1]
    from pyspark.ml.feature import ChiSqSelector
    from pyspark.ml.linalg import Vectors
    
    df = spark.createDataFrame([
        (7, Vectors.dense([0.0, 0.0, 18.0, 1.0]), 1.0,),
        (8, Vectors.dense([0.0, 1.0, 12.0, 0.0]), 0.0,),
        (9, Vectors.dense([1.0, 0.0, 15.0, 0.1]), 0.0,)], ["id", "features", "clicked"])
    
    selector = ChiSqSelector(numTopFeatures=1, featuresCol="features",
                             outputCol="selectedFeatures", labelCol="clicked")
    
    result = selector.fit(df).transform(df)
    
    print("ChiSqSelector output with top %d features selected" % selector.getNumTopFeatures())
    result.show()
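
    A hedged sketch of switching the selection method via selectorType (the percentile method mentioned above), reusing the df from the snippet; the 0.5 value is purely illustrative:

    # Hedged sketch: keep the top 50% of features ranked by the Chi-Squared test.
    pct_selector = ChiSqSelector(selectorType="percentile", percentile=0.5,
                                 featuresCol="features", outputCol="selectedFeatures",
                                 labelCol="clicked")
    pct_selector.fit(df).transform(df).show()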
    

    Locality Sensitive Hashing

    P.S.: there is a good external write-up of LSH that is worth a read;

    LSH is an important class of hashing techniques, commonly used for clustering, approximate nearest neighbor search, and outlier detection on large datasets;

    The general approach is to use a family of LSH functions to hash data points into buckets, so that similar points end up in the same bucket with high probability while dissimilar points end up in different buckets;

    In a metric space (M, d), where M is a set and d is a distance function on M, an LSH family is a family of functions h that satisfy the following properties:

    \[ \forall p, q \in M, \\ d(p,q) \leq r1 \Rightarrow Pr(h(p)=h(q)) \geq p1 \\ d(p,q) \geq r2 \Rightarrow Pr(h(p)=h(q)) \leq p2 \]

    Such an LSH family is called (r1, r2, p1, p2)-sensitive;

    In Spark, different LSH families are implemented in separate classes (e.g., MinHash), and each class provides APIs for feature transformation, approximate similarity join, and approximate nearest neighbor search;

    LSH Operations

    We describe the major types of operations that LSH can be used for; each fitted LSH model has a method for each of these operations;

    Feature Transformation

    Feature transformation is the basic functionality: it adds the hashed values as a new column to the dataset, which can be useful for dimensionality reduction. Users specify the input and output column names via inputCol and outputCol;

    LSH also supports multiple LSH hash tables; users can specify the number of hash tables via numHashTables (a form of LSH amplification), which is also used for OR-amplification in approximate similarity join and approximate nearest neighbor search. Increasing the number of hash tables increases accuracy, but also increases running time and communication cost;

    The type of outputCol is Seq[Vector]: the length of the array equals numHashTables, and the dimension of the vectors is currently set to 1. In future releases, AND-amplification will be implemented so that users can specify the dimension of these vectors;

    Approximate Similarity Join

    Approximate similarity join takes two datasets and approximately returns the pairs of rows whose distance is smaller than a user-defined threshold. It supports joining two different datasets as well as self-joining; self-joining produces some duplicate pairs;

    Approximate similarity join accepts both transformed and untransformed datasets as input; if an untransformed dataset is used, it is transformed automatically, in which case the hash signature is created as outputCol;

    In the joined dataset, the origin datasets can be queried in datasetA and datasetB; a distance column is added to the output dataset, containing the true distance between each returned pair of rows;

    Approximate Nearest Neighbor Search

    Approximate nearest neighbor search takes a dataset (a collection of feature vectors) and a key (a single feature vector), and approximately returns a specified number of rows in the dataset that are closest to the key;

    It likewise accepts both transformed and untransformed datasets as input; if an untransformed dataset is used, it is transformed automatically, in which case the hash signature is created as outputCol;

    A column showing the distance between each output row and the key is added to the output dataset;

    Note: approximate nearest neighbor search returns fewer than the requested number of rows when there are not enough candidate points in the hash buckets;

    LSH Algorithms

    LSH algorithms usually come in one-to-one correspondence with a distance metric: each distance (e.g., Euclidean distance, cosine distance) has a corresponding LSH algorithm, i.e., a family of hash functions;

    Bucketed Random Projection for Euclidean Distance

    Bucketed Random Projection is an LSH family for Euclidean distance, which is defined as follows:

    \[ d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_i (x_i - y_i)^2} \]

    Its LSH family projects each feature vector x onto a random unit vector v and buckets the projected result:

    \[ h(\mathbf{x}) = \Big\lfloor \frac{\mathbf{x} \cdot \mathbf{v}}{r} \Big\rfloor \]

    where r is a user-defined bucket length. The bucket length controls the average size of the hash buckets (and thus the number of buckets): a larger bucket length increases the probability of features being hashed into the same bucket (increasing the numbers of true and false positives);

    Bucketed Random Projection accepts arbitrary vectors as input features; both sparse and dense vectors are supported;

    from pyspark.ml.feature import BucketedRandomProjectionLSH
    from pyspark.ml.linalg import Vectors
    from pyspark.sql.functions import col
    
    dataA = [(0, Vectors.dense([1.0, 1.0]),),
             (1, Vectors.dense([1.0, -1.0]),),
             (2, Vectors.dense([-1.0, -1.0]),),
             (3, Vectors.dense([-1.0, 1.0]),)]
    dfA = spark.createDataFrame(dataA, ["id", "features"])
    
    dataB = [(4, Vectors.dense([1.0, 0.0]),),
             (5, Vectors.dense([-1.0, 0.0]),),
             (6, Vectors.dense([0.0, 1.0]),),
             (7, Vectors.dense([0.0, -1.0]),)]
    dfB = spark.createDataFrame(dataB, ["id", "features"])
    
    key = Vectors.dense([1.0, 0.0])
    
    brp = BucketedRandomProjectionLSH(inputCol="features", outputCol="hashes", bucketLength=2.0,
                                      numHashTables=3)
    model = brp.fit(dfA)
    
    # Feature Transformation
    print("The hashed dataset where hashed values are stored in the column 'hashes':")
    model.transform(dfA).show()
    
    # Compute the locality sensitive hashes for the input rows, then perform approximate
    # similarity join.
    # We could avoid computing hashes by passing in the already-transformed dataset, e.g.
    # `model.approxSimilarityJoin(transformedA, transformedB, 1.5)`
    print("Approximately joining dfA and dfB on Euclidean distance smaller than 1.5:")
    model.approxSimilarityJoin(dfA, dfB, 1.5, distCol="EuclideanDistance") \
        .select(col("datasetA.id").alias("idA"),
                col("datasetB.id").alias("idB"),
                col("EuclideanDistance")).show()
    
    # Compute the locality sensitive hashes for the input rows, then perform approximate nearest
    # neighbor search.
    # We could avoid computing hashes by passing in the already-transformed dataset, e.g.
    # `model.approxNearestNeighbors(transformedA, key, 2)`
    print("Approximately searching dfA for 2 nearest neighbors of the key:")
    model.approxNearestNeighbors(dfA, key, 2).show()
    
    MinHash for Jaccard Distance

    MinHash is an LSH family for Jaccard distance, where the input features are sets of natural numbers; the Jaccard distance of two sets is defined by the cardinality of their intersection and union:

    \[ d(\mathbf{A}, \mathbf{B}) = 1 - \frac{|\mathbf{A} \cap \mathbf{B}|}{|\mathbf{A} \cup \mathbf{B}|} \]

    MinHash applies a random hash function g to each element of the set and takes the minimum of all hash values:

    \[ h(\mathbf{A}) = \min_{a \in \mathbf{A}}(g(a)) \]

    The input sets for MinHash are represented as binary vectors, where the vector indices represent the elements themselves and the non-zero values indicate the presence of that element in the set. Both sparse and dense vectors are supported, but sparse vectors are recommended for efficiency; for example, Vectors.sparse(10, Array[(2,1.0),(3,1.0),(5,1.0)]) means there are 10 elements in the space, and the set contains elements 2, 3, and 5. All non-zero values are treated as binary "1" values;

    from pyspark.ml.feature import MinHashLSH
    from pyspark.ml.linalg import Vectors
    from pyspark.sql.functions import col
    
    dataA = [(0, Vectors.sparse(6, [0, 1, 2], [1.0, 1.0, 1.0]),),
             (1, Vectors.sparse(6, [2, 3, 4], [1.0, 1.0, 1.0]),),
             (2, Vectors.sparse(6, [0, 2, 4], [1.0, 1.0, 1.0]),)]
    dfA = spark.createDataFrame(dataA, ["id", "features"])
    
    dataB = [(3, Vectors.sparse(6, [1, 3, 5], [1.0, 1.0, 1.0]),),
             (4, Vectors.sparse(6, [2, 3, 5], [1.0, 1.0, 1.0]),),
             (5, Vectors.sparse(6, [1, 2, 4], [1.0, 1.0, 1.0]),)]
    dfB = spark.createDataFrame(dataB, ["id", "features"])
    
    key = Vectors.sparse(6, [1, 3], [1.0, 1.0])
    
    mh = MinHashLSH(inputCol="features", outputCol="hashes", numHashTables=5)
    model = mh.fit(dfA)
    
    # Feature Transformation
    print("The hashed dataset where hashed values are stored in the column 'hashes':")
    model.transform(dfA).show()
    
    # Compute the locality sensitive hashes for the input rows, then perform approximate
    # similarity join.
    # We could avoid computing hashes by passing in the already-transformed dataset, e.g.
    # `model.approxSimilarityJoin(transformedA, transformedB, 0.6)`
    print("Approximately joining dfA and dfB on distance smaller than 0.6:")
    model.approxSimilarityJoin(dfA, dfB, 0.6, distCol="JaccardDistance") \
        .select(col("datasetA.id").alias("idA"),
                col("datasetB.id").alias("idB"),
                col("JaccardDistance")).show()
    
    # Compute the locality sensitive hashes for the input rows, then perform approximate nearest
    # neighbor search.
    # We could avoid computing hashes by passing in the already-transformed dataset, e.g.
    # `model.approxNearestNeighbors(transformedA, key, 2)`
    # It may return less than 2 rows when not enough approximate near-neighbor candidates are
    # found.
    print("Approximately searching dfA for 2 nearest neighbors of the key:")
    model.approxNearestNeighbors(dfA, key, 2).show()
    

    Finally

    Feel free to visit my GitHub to see whether there is anything else you need; it currently hosts mostly my own machine learning projects, assorted Python scripting tools, interesting small projects, plus the people I follow and the projects I have forked:
    https://github.com/NemoHoHaloAi

  • Original post: https://www.cnblogs.com/helongBlog/p/13729343.html