  • Problems Encountered Installing Spark's Hadoop Free Build

    Running ./sbin/start-master.sh produced the following error:

    
    
    Spark Command: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -cp /home/server/spark/conf/:/home/server/spark/jars/*:/home/server/hadoop/etc/hadoop/:/home/server/hadoop/share/hadoop/common/lib/:/home/server/hadoop/share/hadoop/common/:/home/server/hadoop/share/hadoop/mapreduce/:/home/server/hadoop/share/hadoop/mapreduce/lib/:/home/server/hadoop/share/hadoop/yarn/:/home/server/hadoop/share/hadoop/yarn/lib/ -Xmx1g org.apache.spark.deploy.master.Master --host ThinkPad-W550s-Lab --port 7077 --webui-port 8080
    ========================================
    Error: A JNI error has occurred, please check your installation and try again
    Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/Logger
        at java.lang.Class.getDeclaredMethods0(Native Method)
        at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
        at java.lang.Class.privateGetMethodRecursive(Class.java:3048)
        at java.lang.Class.getMethod0(Class.java:3018)
        at java.lang.Class.getMethod(Class.java:1784)
        at sun.launcher.LauncherHelper.validateMainClass(LauncherHelper.java:544)
        at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:526)
    Caused by: java.lang.ClassNotFoundException: org.slf4j.Logger
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 7 more
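
    Note that every Hadoop entry in the -cp string above is a bare directory (e.g. .../share/hadoop/common/lib/): the JVM only loads .class files from a directory classpath entry, so the slf4j jars sitting inside those lib directories are never loaded; a jar directory has to be listed as lib/* to be scanned. Since the "Hadoop free" Spark build ships no slf4j jar of its own, the master fails with the NoClassDefFoundError above. A quick way to confirm this, assuming the same layout as in the command above:

    # The Hadoop-free Spark build bundles no slf4j jar of its own
    ls /home/server/spark/jars | grep -i slf4j
    # (no output)

    # Hadoop does ship slf4j, but only as jars inside a lib directory,
    # which a bare-directory classpath entry will never load
    ls /home/server/hadoop/share/hadoop/common/lib | grep -i slf4j
    # slf4j-api-1.7.x.jar  slf4j-log4j12-1.7.x.jar  (exact versions vary by Hadoop release)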

    See the Spark official documentation: http://spark.apache.org/docs/latest/hadoop-provided.html

    Using Spark's "Hadoop Free" Build

    Spark uses Hadoop client libraries for HDFS and YARN. Starting with Spark 1.4, the project packages “Hadoop free” builds that let you more easily connect a single Spark binary to any Hadoop version. To use these builds, you need to modify SPARK_DIST_CLASSPATH to include Hadoop’s package jars. The most convenient place to do this is by adding an entry in conf/spark-env.sh.

    This page describes how to connect Spark to Hadoop for different types of distributions.

    Apache Hadoop

    For Apache distributions, you can use Hadoop’s ‘classpath’ command. For instance:

    
    
    ### in conf/spark-env.sh ###

    # If 'hadoop' binary is on your PATH
    export SPARK_DIST_CLASSPATH=$(hadoop classpath)

    # With explicit path to 'hadoop' binary
    export SPARK_DIST_CLASSPATH=$(/path/to/hadoop/bin/hadoop classpath)

    # Passing a Hadoop configuration directory
    export SPARK_DIST_CLASSPATH=$(hadoop --config /path/to/configs classpath)
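
    Compared with the hand-written directory list in the failing command earlier, the important detail is that hadoop classpath emits wildcard (/*) entries, so the jars inside each directory are actually loaded. Roughly, for the layout used here (output abridged and illustrative):

    $ /usr/local/hadoop/hadoop-2.7.3/bin/hadoop classpath
    /usr/local/hadoop/hadoop-2.7.3/etc/hadoop:/usr/local/hadoop/hadoop-2.7.3/share/hadoop/common/lib/*:/usr/local/hadoop/hadoop-2.7.3/share/hadoop/common/*: ...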

    In the end, I added the following line to spark-env.sh:

    
    
    export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/hadoop-2.7.3/bin/hadoop classpath)

    Started it up again, and this time it worked!
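
    For anyone verifying the fix, restarting the master and checking its log should show the service coming up, with messages along these lines (the log file name pattern varies with user and hostname):

    ./sbin/stop-master.sh
    ./sbin/start-master.sh
    # Check the newest master log under logs/ for lines such as
    #   "Successfully started service 'MasterUI' on port 8080"
    #   "I have been elected leader! New state: ALIVE"
    tail -n 20 logs/spark-*-org.apache.spark.deploy.master.Master-*.out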

  • Original post: https://www.cnblogs.com/yangcx666/p/8723802.html