  • Hadoop cluster commands, Hue/Oozie operations, and AWS CLI commands

    Connect to the AWS cluster:

    C:\Users\rui.li1>aws configure
    AWS Access Key ID [None]: **************************
    AWS Secret Access Key [None]: *********************
    Default region name [None]: cn-north-1
    Default output format [None]:
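    `aws configure` persists these answers to two plain-text files in the user's home directory. A sketch of what they look like (key values are placeholders; leaving the output format blank keeps the default, json):

```
# ~/.aws/credentials
[default]
aws_access_key_id = AKIA****************
aws_secret_access_key = ****************************************

# ~/.aws/config
[default]
region = cn-north-1
```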

    aws s3 rm    s3://p-awsbj......      --recursive

    Other common subcommands: aws s3 ls, aws s3 cp, ...
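    A few illustrative `aws s3` calls; the bucket and paths here are placeholders, not the real bucket elided above:

```shell
$ # List buckets, then objects under a prefix
$ aws s3 ls
$ aws s3 ls s3://my-bucket/path/
$ # Copy a local file up, or a whole prefix down
$ aws s3 cp ./report.csv s3://my-bucket/path/
$ aws s3 cp s3://my-bucket/path/ ./local-dir/ --recursive
$ # Delete a whole prefix (same shape as the rm above)
$ aws s3 rm s3://my-bucket/path/ --recursive
```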

    ###############################################    HUE   ########################################################

    Set up a workflow:
    1. Click "..." in the top-right corner, note the Workspace path, and change it to rui-test;
    2. Go back to the file browser, find that Workspace, and rename it once more;
    3. Create a new project, open the workspace's lib directory, click upload, and after uploading select that jar.

    Configuration:
    lib/tsp-etl-1.0-SNAPSHOT.jar
    org.apache.spark.examples.SparkPi
    ${hour}
    --driver-memory 1560M --num-executors 2 --executor-cores 1 --executor-memory 1024M

    yarn
    cluster
    appName

    oozie.action.sharelib.for.spark
    spark2

    Also configure email notifications;
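    The fields above map onto an Oozie Spark action roughly like this. This is a hedged sketch: the jar path, class, argument, spark-opts, master/mode/name, and sharelib property come from the values listed above; the workflow skeleton, action name, and `${jobTracker}`/`${nameNode}` parameters are illustrative assumptions (and `oozie.action.sharelib.for.spark` is often set in job.properties instead):

```xml
<workflow-app name="rui-test" xmlns="uri:oozie:workflow:0.5">
    <start to="spark-node"/>
    <action name="spark-node">
        <spark xmlns="uri:oozie:spark-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <!-- Use the spark2 sharelib instead of the default -->
                <property>
                    <name>oozie.action.sharelib.for.spark</name>
                    <value>spark2</value>
                </property>
            </configuration>
            <master>yarn</master>
            <mode>cluster</mode>
            <name>appName</name>
            <class>org.apache.spark.examples.SparkPi</class>
            <jar>lib/tsp-etl-1.0-SNAPSHOT.jar</jar>
            <spark-opts>--driver-memory 1560M --num-executors 2 --executor-cores 1 --executor-memory 1024M</spark-opts>
            <arg>${hour}</arg>
        </spark>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Spark action failed</message>
    </kill>
    <end name="end"/>
</workflow-app>
```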

    ###################################################     Hadoop     ####################################################################

    node1      node2      node3
    nn1        nn2        dn3
    dn1        dn2        nm3
    rm1        rm2        zk3
    nm1        nm2        mysql
    zk1        zk2
    hivestat   hivserv    hivemeta
    (nn = NameNode, dn = DataNode, rm = ResourceManager, nm = NodeManager, zk = ZooKeeper; the last row spreads the Hive client, HiveServer2, and metastore across the nodes)

    zkServer.sh start

    Start on the master node: start-dfs.sh
    #Start on the master node: yarn-daemon.sh start resourcemanager
    Start on the master node: start-yarn.sh
    stop-yarn.sh stop-dfs.sh


    hive --service metastore > /home/hadoop/hive.meta &
    hive --service hiveserver2 > /home/hadoop/hive.log &
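    Putting the pieces above together, one plausible cold-start order (a transcript sketch; hostnames follow the node table above, and nohup is added here so the Hive services survive logout):

```shell
$ # 1. ZooKeeper on every node
$ zkServer.sh start
$ # 2. HDFS, then YARN, from the master node
$ start-dfs.sh
$ start-yarn.sh
$ # 3. Hive metastore and HiveServer2 in the background
$ nohup hive --service metastore   > /home/hadoop/hive.meta 2>&1 &
$ nohup hive --service hiveserver2 > /home/hadoop/hive.log  2>&1 &
```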


    #hadoop fs -mkdir -p /user/spark/libs/
    #hadoop fs -put /data/software/spark-2.1.1/jars/* /user/spark/libs/
    hadoop fs -mkdir -p /tmp/spark/logs/


    start-master.sh
    start-slaves.sh


    zkCli.sh
    rm -rf /data/software/spark-2.1.1/conf/
    scp -r /data/software/spark-2.1.1/conf/ hadoop@app-002:/data/software/spark-2.1.1/


    YARN job logs:
    /tmp/logs/hadoop/logs
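    Besides browsing the aggregated log directory directly, a finished application's logs can be pulled by id with the stock YARN CLI (the application id below is a placeholder):

```shell
$ yarn application -list -appStates ALL
$ yarn logs -applicationId application_1520000000000_0001
```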

    Submit a job to test the cluster:

    hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar pi 1 3

    spark-shell --master yarn --deploy-mode client

    spark-submit --class org.apache.spark.examples.SparkPi \
        --master yarn \
        --deploy-mode cluster \
        --driver-memory 1000m \
        --executor-memory 1000m \
        --executor-cores 1 \
        /data/software/spark-2.1.1/examples/jars/spark-examples_2.11-2.1.1.jar \
        3

    debug:

    nohup java -jar sent-mail.jar > log.txt &

    Check whether a port is in use: netstat -ntulp | grep 8020
    Check which ports a service holds (find its processes first): ps -ef | grep mysqld
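    The port check can be wrapped in a small script. A sketch assuming either `ss` or `netstat` is installed; the port number is just the NameNode RPC example from above:

```shell
port=8020
# Prefer ss (iproute2); fall back to netstat if ss is absent.
if command -v ss >/dev/null 2>&1; then
  listeners=$(ss -ntl 2>/dev/null | grep -c ":${port} ")
else
  listeners=$(netstat -ntl 2>/dev/null | grep -c ":${port} ")
fi
if [ "${listeners}" -gt 0 ]; then
  echo "port ${port} is in use"
else
  echo "port ${port} is free"
fi
```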

    rm -rf /data/software/hadoop-2.7.3/logs/*

    Start each component individually to trace the cause of a bug.
    hadoop-daemon.sh start namenode
    hadoop-daemon.sh start datanode

    jps -ml
    kill -9 <pid>

    Shell scripts:

    MySQL: configure permissions:

    CREATE USER 'so_ro'@'%' IDENTIFIED BY '123456';
    SELECT HOST,USER FROM mysql.user;
    DELETE FROM mysql.user WHERE `user` = 'so_ro';

    GRANT SELECT ON bi_result.* TO so_ro@'%' ;
    FLUSH PRIVILEGES;
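    To confirm the read-only grant took effect (standard MySQL statements):

```sql
SHOW GRANTS FOR 'so_ro'@'%';
-- Typically lists GRANT USAGE plus the GRANT SELECT ON `bi_result`.* granted above
```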







  • Original post: https://www.cnblogs.com/ruili07/p/10563386.html