Stream Processing Environment Setup

    1 Spark Background

     Spark components

    The Spark stack (BDAS, the Berkeley Data Analytics Stack) is a platform that brings together algorithms, machines, and people to build large-scale big-data applications, and serves as a technology stack for big data, cloud computing, and communications workloads.

    Its main components are:

    Spark Core: abstracts distributed data as resilient distributed datasets (RDDs); implements task scheduling, RPC, serialization, and compression; and exposes the APIs that the higher-level components build on.

    Spark SQL: Spark's package for working with structured data. It lets you query data with SQL statements and supports multiple data sources, including Hive tables, Parquet, and JSON.

    Spark Streaming: Spark's component for stream processing of real-time data.

    MLlib: a library of implementations of common machine learning algorithms.

    GraphX: a distributed graph-processing framework for efficient graph computation.

    BlinkDB: an approximate query engine for interactive SQL over massive data sets.

    Tachyon: a memory-centric, highly fault-tolerant distributed file system.
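
    To make the component list above concrete, here is a minimal Scala sketch (not part of the original setup; the object name and the JSON path are hypothetical) that uses Spark Core's RDD API and Spark SQL side by side:

    import org.apache.spark.sql.SparkSession

    // A small standalone sketch: Spark Core (RDDs) next to Spark SQL.
    object ComponentsSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("ComponentsSketch").getOrCreate()

        // Spark Core: parallelize a local collection into an RDD and transform it.
        val doubled = spark.sparkContext.parallelize(Seq(1, 2, 3, 4)).map(_ * 2)
        println(doubled.collect().mkString(", "))

        // Spark SQL: load structured data (hypothetical JSON path) and query it with SQL.
        val people = spark.read.json("hdfs://gc64:9000/user/sms/test/people.json")
        people.createOrReplaceTempView("people")
        spark.sql("SELECT * FROM people").show()

        spark.stop()
      }
    }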
    JDK version
    java version "1.8.0_144"
    Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
    Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)

    Hadoop version
    Hadoop 2.6.5
    Subversion https://github.com/apache/hadoop.git -r e8c9fe0b4c252caf2ebf1464220599650f119997
    Compiled by sjlee on 2016-10-02T23:43Z
    Compiled with protoc 2.5.0
    From source with checksum f05c9fa095a395faa9db9f7ba5d754
    This command was run using /utxt/hadoop-2.6.5/share/hadoop/common/hadoop-common-2.6.5.jar

    Scala version
    Scala code runner version 2.10.5 -- Copyright 2002-2013, LAMP/EPFL

    Spark version
    spark-2.4.0-bin-hadoop2.6

    2 Environment Variables

    #hadoop setting
    export HADOOP_HOME=/utxt/hadoop-2.6.5
    export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
    export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
    
    
    #SPARK setting
    export SPARK_HOME=/utxt/spark-2.4.0-bin-hadoop2.6
    export PATH=$SPARK_HOME/bin:$SPARK_HOME/sbin:$PATH
    
    #SCALA setting
    export SCALA_HOME=/utxt/scala-2.10.5
    export PATH=$SCALA_HOME/bin:$PATH
    
    
    #java settings
    #export PATH
    export JAVA_HOME=/u01/app/software/jdk1.8.0_144
    export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
    export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
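
    After sourcing the profile, a quick sanity check can be run from spark-shell; this small Scala snippet (not in the original post) simply prints the variables exported above:

    // Paste into spark-shell: print the environment variables this setup relies on.
    Seq("JAVA_HOME", "HADOOP_HOME", "HADOOP_CONF_DIR", "SPARK_HOME", "SCALA_HOME")
      .foreach(k => println(s"$k=${sys.env.getOrElse(k, "<not set>")}"))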

    3 Spark Configuration

    In /utxt/spark-2.4.0-bin-hadoop2.6/conf, add the following lines to spark-env.sh:
    export SCALA_HOME=/utxt/scala-2.10.5
    export SPARK_MASTER_IP=gc64
    export SPARK_WORKER_MEMORY=1500m
    export JAVA_HOME=/u01/app/software/jdk1.8.0_144

    Add one line to the slaves file:
    gc64
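
    spark-env.sh and slaves apply cluster-wide. For comparison, the same kind of settings can also be passed per application; a hedged Scala sketch (values taken from the configuration above; note that spark.executor.memory is per executor and must fit within the 1500m that SPARK_WORKER_MEMORY lets each worker offer):

    import org.apache.spark.sql.SparkSession

    // Per-application equivalent of some spark-env.sh settings (sketch only).
    val spark = SparkSession.builder
      .appName("ConfSketch")
      .master("spark://gc64:7077")            // the standalone master declared via SPARK_MASTER_IP
      .config("spark.executor.memory", "1g")  // must fit inside SPARK_WORKER_MEMORY (1500m here)
      .getOrCreate()

    println(spark.sparkContext.master)
    spark.stop()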

    4 Starting Spark

    start-master.sh

    Open the master web UI in a browser:
    http://gc64:8080/

    Start the worker:
    start-slaves.sh spark://gc64:7077

    Start spark-shell:
    spark-shell --master spark://gc64:7077
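
    Once the shell comes up, two quick checks (not in the original post, but plain SparkContext calls) confirm that it is attached to the standalone master and that executors have registered:

    // Inside spark-shell: confirm the master URL and list registered executors.
    println(sc.master)    // expect spark://gc64:7077
    sc.statusTracker.getExecutorInfos.foreach(e => println(s"${e.host}:${e.port}"))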

    5 Running the Examples

    spark-shell test (start Hadoop first):
    val file=sc.textFile("hdfs://gc64:9000/user/sms/test/test.txt")
    val rdd = file.flatMap(line => line.split(" ")).map(word => (word,1)).reduceByKey(_+_)
    rdd.collect()
    rdd.foreach(println)
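
    As a small extension (not in the original post), the counts can be sorted to show the most frequent words and optionally written back to HDFS; the output path below is hypothetical:

    // Sort the (word, count) pairs by count, descending, and print the top ten.
    val top10 = rdd.map { case (word, count) => (count, word) }
      .sortByKey(ascending = false)
      .take(10)
    top10.foreach { case (count, word) => println(s"$word: $count") }
    // rdd.saveAsTextFile("hdfs://gc64:9000/user/sms/test/wordcount-out")  // hypothetical output path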

    JAR test:
    spark-submit --class JavaWordCount --executor-memory 1G --total-executor-cores 2 /utxt/test/spark-0.0.1.jar hdfs://gc64:9000/user/sms/test/test.txt

    Java WordCount code

    /*
     * Licensed to the Apache Software Foundation (ASF) under one or more
     * contributor license agreements.  See the NOTICE file distributed with
     * this work for additional information regarding copyright ownership.
     * The ASF licenses this file to You under the Apache License, Version 2.0
     * (the "License"); you may not use this file except in compliance with
     * the License.  You may obtain a copy of the License at
     *
     *    http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     */
    
    import scala.Tuple2;
    
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.sql.SparkSession;
    
    import java.util.Arrays;
    import java.util.List;
    import java.util.regex.Pattern;
    
    public final class JavaWordCount {
        private static final Pattern SPACE = Pattern.compile(" ");
    
        public static void main(String[] args) throws Exception {
    
            if (args.length < 1) {
                System.err.println("Usage: JavaWordCount <file>");
                System.exit(1);
            }
    
            SparkSession spark = SparkSession
                    .builder()
                    .appName("JavaWordCount")
                    .getOrCreate();
    
            JavaRDD<String> lines = spark.read().textFile(args[0]).javaRDD();
            JavaRDD<String> words = lines.flatMap(s -> Arrays.asList(SPACE.split(s)).iterator());
            JavaPairRDD<String, Integer> ones = words.mapToPair(s -> new Tuple2<>(s, 1));
            JavaPairRDD<String, Integer> counts = ones.reduceByKey((i1, i2) -> i1 + i2);
            List<Tuple2<String, Integer>> output = counts.collect();
    
            for (Tuple2<?,?> tuple : output) {
                System.out.println(tuple._1() + ": " + tuple._2());
            }
            spark.stop();
        }
    }
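
    Note that the spark-submit command in the JAR test above refers to this class simply as --class JavaWordCount: the example declares no package, so no package prefix is needed; if you place it in a package, pass the fully qualified class name instead.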

    Scala logistic regression code

    /*
     * Licensed to the Apache Software Foundation (ASF) under one or more
     * contributor license agreements.  See the NOTICE file distributed with
     * this work for additional information regarding copyright ownership.
     * The ASF licenses this file to You under the Apache License, Version 2.0
     * (the "License"); you may not use this file except in compliance with
     * the License.  You may obtain a copy of the License at
     *
     *    http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     */
    
    // scalastyle:off println
    package org.apache.spark.examples
    
    import java.util.Random
    
    import scala.math.exp
    
    import breeze.linalg.{DenseVector, Vector}
    
    import org.apache.spark.sql.SparkSession
    
    /**
     * Logistic regression based classification.
     * Usage: SparkLR [partitions]
     *
     * This is an example implementation for learning how to use Spark. For more conventional use,
     * please refer to org.apache.spark.ml.classification.LogisticRegression.
     */
    object SparkLR {
      val N = 10000  // Number of data points
      val D = 10   // Number of dimensions
      val R = 0.7  // Scaling factor
      val ITERATIONS = 5
      val rand = new Random(42)
    
      case class DataPoint(x: Vector[Double], y: Double)
    
      def generateData: Array[DataPoint] = {
        def generatePoint(i: Int): DataPoint = {
          val y = if (i % 2 == 0) -1 else 1
          val x = DenseVector.fill(D) {rand.nextGaussian + y * R}
          DataPoint(x, y)
        }
        Array.tabulate(N)(generatePoint)
      }
    
      def showWarning() {
        System.err.println(
          """WARN: This is a naive implementation of Logistic Regression and is given as an example!
            |Please use org.apache.spark.ml.classification.LogisticRegression
            |for more conventional use.
          """.stripMargin)
      }
    
      def main(args: Array[String]) {
    
        showWarning()
    
        val spark = SparkSession
          .builder
          .appName("SparkLR")
          .getOrCreate()
    
        val numSlices = if (args.length > 0) args(0).toInt else 2
        val points = spark.sparkContext.parallelize(generateData, numSlices).cache()
    
        // Initialize w to a random value
        val w = DenseVector.fill(D) {2 * rand.nextDouble - 1}
        println(s"Initial w: $w")
    
        for (i <- 1 to ITERATIONS) {
          println(s"On iteration $i")
          val gradient = points.map { p =>
            p.x * (1 / (1 + exp(-p.y * (w.dot(p.x)))) - 1) * p.y
          }.reduce(_ + _)
          w -= gradient
        }
    
        println(s"Final w: $w")
    
        spark.stop()
      }
    }

    For other examples, see spark-2.4.0-bin-hadoop2.6/examples/src/main.
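
    The bundled examples also come with a launcher script in bin/, so the logistic regression example above can be run with, for example, run-example SparkLR 2, where the trailing number is the partition count mentioned in its usage comment.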

    6 Troubleshooting

    Failed to initialize mapreduce.shuffle
    The yarn.nodemanager.aux-services property was left at the old default value "mapreduce.shuffle".
    Solution: change the value of yarn.nodemanager.aux-services to "mapreduce_shuffle".
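    In yarn-site.xml the fix looks roughly like this (a sketch; the ShuffleHandler class name is the stock Hadoop 2.x value rather than anything specific to this cluster):

    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>

    After the change, restart the stack: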
    start-dfs.sh
    start-yarn.sh
    mr-jobhistory-daemon.sh  start historyserver
    start-master.sh
    start-slaves.sh spark://gc64:7077  
    start-history-server.sh 

    7 References

    [1] Building a single-machine Spark cluster: https://www.cnblogs.com/ivictor/p/5135792.html
    [2]  http://spark.apache.org/

    [3]  https://blog.csdn.net/snail_bing/article/details/82905539
