I. Restarting services after a shutdown
1. Start the Hadoop services
sbin/hadoop-daemon.sh start namenode
sbin/hadoop-daemon.sh start datanode
sbin/yarn-daemon.sh start resourcemanager
sbin/yarn-daemon.sh start nodemanager
sbin/mr-jobhistory-daemon.sh start historyserver
sbin/hadoop-daemon.sh start secondarynamenode
sbin/httpfs.sh start
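A quick way to confirm these daemons actually came up is to list the running Java processes with jps (the same command used in section III below); this is a minimal check, not a full health test:

jps
# expected entries after a successful start: NameNode, DataNode, SecondaryNameNode,
# ResourceManager, NodeManager, JobHistoryServer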
2. Start the Hive services
nohup hive --service metastore > ~/hive_metastore.start.log 2>&1 &
nohup hive --service hiveserver2 > ~/hiveserver2.start.log 2>&1 &
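To verify that HiveServer2 is accepting connections, a Beeline smoke test can be run; this sketch assumes the default HiveServer2 port 10000 on this host, so adjust the JDBC URL to your configuration:

# connect to HiveServer2 and run a trivial query
beeline -u jdbc:hive2://beifeng-hadoop-02:10000 -e "show databases;"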
3. Start the ZooKeeper service
bin/zkServer.sh start
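The same script reports whether the node is serving and in which mode:

bin/zkServer.sh status
# prints Mode: standalone / leader / follower when ZooKeeper is up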
4. Start the Kafka brokers
nohup bin/kafka-server-start.sh config/server.properties >~/kafka-start.log 2>&1 &
nohup bin/kafka-server-start.sh config/server1.properties >~/kafka-server1-start.log 2>&1 &
nohup bin/kafka-server-start.sh config/server2.properties >~/kafka-server2-start.log 2>&1 &
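Because three brokers are started here (server.properties, server1.properties, server2.properties), a simple liveness check is to list topics through ZooKeeper; this sketch assumes an older Kafka release that still accepts the --zookeeper flag and a ZooKeeper instance on localhost:2181:

bin/kafka-topics.sh --list --zookeeper localhost:2181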
5. Start the Oozie service
bin/oozied.sh start
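The Oozie client can confirm that the server is up and in NORMAL mode, using the 11000 port listed in section II (hostname assumed):

bin/oozie admin -oozie http://beifeng-hadoop-02:11000/oozie -status
# expected output: System mode: NORMAL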
6. Start the HBase services
bin/start-hbase.sh
bin/hbase-daemon.sh start thrift
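A quick status check can be piped into the HBase shell:

echo "status" | bin/hbase shell
# reports the number of live and dead region servers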
7. Start the Hue service
nohup build/env/bin/supervisor >~/hue-start.log 2>&1 &
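Hue has no dedicated status script; a rough check is to look for the supervisor process or hit the web port (8888, as listed in the UI section below):

ps -ef | grep supervisor
curl -I http://beifeng-hadoop-02:8888/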
8. Start the MongoDB service
nohup bin/mongod --config bin/mongodb.conf >~/mongodb-start.log 2>&1 &
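The mongo shell gives a one-line ping; this assumes the default port 27017 listed in section II:

bin/mongo --port 27017 --eval "db.serverStatus().ok"
# prints 1 when mongod is healthy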
9. Start Storm
nohup bin/storm nimbus >~/storm-nimbus-start.log 2>&1 &
nohup bin/storm supervisor >~/storm-supervisor-start.log 2>&1 &
nohup bin/storm ui >~/storm-ui-start.log 2>&1 &
nohup bin/storm logviewer >~/storm-logviewer-start.log 2>&1 &
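Once nimbus is running, the storm client can list topologies, which doubles as a connectivity check:

bin/storm list
# an empty topology list (rather than a connection error) means nimbus is reachable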
10. Start Spark
sbin/start-master.sh
sbin/start-slaves.sh
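A standalone-mode smoke test is to attach a shell to the master; this sketch assumes the default master port 7077 on this host. While the shell is running, its application UI appears on port 4040 (see the table in section II):

bin/spark-shell --master spark://beifeng-hadoop-02:7077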
99. Big data web UIs
http://beifeng-hadoop-02:50070/dfshealth.html#tab-overview
http://beifeng-hadoop-02:8088/cluster
http://beifeng-hadoop-02:19888/
http://beifeng-hadoop-02:50090/status.html
http://beifeng-hadoop-02:11000/oozie/
http://beifeng-hadoop-02:60010/master-status
http://beifeng-hadoop-02:8888/
http://beifeng-hadoop-02:8081/
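A loop over these addresses is a fast way to see which web UIs are answering after a restart (a minimal sketch using curl):

for url in \
  http://beifeng-hadoop-02:50070/ \
  http://beifeng-hadoop-02:8088/ \
  http://beifeng-hadoop-02:19888/ \
  http://beifeng-hadoop-02:50090/ \
  http://beifeng-hadoop-02:11000/oozie/ \
  http://beifeng-hadoop-02:60010/ \
  http://beifeng-hadoop-02:8888/ \
  http://beifeng-hadoop-02:8081/ ; do
  # print the HTTP status code; 000 means nothing is listening yet
  echo "$url $(curl -s -o /dev/null -w '%{http_code}' "$url")"
done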
II. Big data services and default ports
| Service | Default port |
| --- | --- |
| namenode | 9000 |
| datanode | 50010 / 50075 / 50020 |
| secondarynamenode | 50090 |
| resourcemanager | 8088 |
| nodemanager | |
| historyserver | 19888 |
| zookeeper | 2181 |
| oozie | 11000 |
| mongodb | 27017 |
| mongodb http | 28017 |
| storm ui | 8081 |
| spark ui | 4040 |
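These ports map directly onto the checks in section III; for example, the listed ports can be scanned in one loop (a sketch covering only the ports from the table):

for port in 9000 50010 50075 50020 50090 8088 19888 2181 11000 27017 28017 8081 4040; do
  # -tln: numeric TCP listeners; a match means something is bound to the port
  netstat -tln | grep -q ":$port " && echo "port $port: listening" || echo "port $port: not listening"
done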
III. Environment check commands
jps
ps -ef | grep <process-name>
netstat -tlnup | grep <port>
lsof -i :<port>
netstat -nltp | grep <pid>
pkill -9 java
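Note that pkill -9 java kills every Java process on the machine; to stop a single stuck daemon, the checks above can be combined to find its pid first (example assuming port 50070, the NameNode web UI):

# find the pid listening on port 50070
lsof -i :50070
# or
netstat -nltp | grep 50070
# then kill only that process
kill -9 <pid-from-output>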