1. Machine environment
2. Configuring HA
2.1 Modify hdfs-site.xml
2.2 Configure core-site.xml
3. Configuring manual HA
3.1 Stop YARN and HDFS
3.2 Start HDFS HA
4. Configuring automatic HA
4.1 Stop the cluster
4.2 Modify the configuration files
4.3 Start HA
4.4 Test automatic failover
Preface
The previous post covered the principles of HDFS HA; in this one we put them into practice.
1. Machine environment
| Hostname | IP | Role |
|---|---|---|
| hadoop1 | 172.18.0.11 | NN1 ZK RM |
| hadoop2 | 172.18.0.12 | NN2 ZK RM JOBHISTORY |
| hadoop3 | 172.18.0.13 | DN ZK ND |
| hadoop4 | 172.18.0.14 | DN QJM1 ND |
| hadoop5 | 172.18.0.15 | DN QJM2 ND |
| hadoop6 | 172.18.0.16 | DN QJM3 ND |
HDFS, YARN, and ZooKeeper are already installed on these nodes.
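All six hosts need to resolve each other's names. As a rough sketch (assuming name resolution goes through /etc/hosts rather than DNS or Docker's embedded DNS), the entries matching the table above would be:

# /etc/hosts on every node (sketch; skip if DNS already resolves these names)
172.18.0.11 hadoop1
172.18.0.12 hadoop2
172.18.0.13 hadoop3
172.18.0.14 hadoop4
172.18.0.15 hadoop5
172.18.0.16 hadoop6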
2. Configuring HA
2.1 Modify hdfs-site.xml
<property>
  <name>dfs.nameservices</name>
  <value>dockercluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.dockercluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.dockercluster.nn1</name>
  <value>hadoop1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.dockercluster.nn2</name>
  <value>hadoop2:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.dockercluster.nn1</name>
  <value>hadoop1:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.dockercluster.nn2</name>
  <value>hadoop2:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://hadoop4:8485;hadoop5:8485;hadoop6:8485/dockercluster</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.dockercluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/hdfs/.ssh/id_rsa</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.connect-timeout</name>
  <value>30000</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/opt/hadoop/hadoop-2.7.3/JNSdatadir</value>
</property>
2.2 Configure core-site.xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://dockercluster</value>
</property>
Distribute the updated configuration files to every node.
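How you distribute them is up to you; a minimal sketch using scp from hadoop1 (the config path and passwordless ssh for the hdfs user are assumptions) looks like this:

# Sketch: push the two changed files to the remaining nodes
for host in hadoop2 hadoop3 hadoop4 hadoop5 hadoop6; do
  scp $HADOOP_HOME/etc/hadoop/hdfs-site.xml $HADOOP_HOME/etc/hadoop/core-site.xml ${host}:$HADOOP_HOME/etc/hadoop/
done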
3. Configuring manual HA
3.1 Stop YARN and HDFS
Stop YARN:
On the ResourceManager, run: $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop resourcemanager
On each NodeManager, run: $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop nodemanager
Stop HDFS:
On each NameNode, run: $HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop namenode
On each DataNode, run: $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop datanode
If passwordless SSH is configured, you can instead simply run:
stop-yarn.sh
stop-dfs.sh
3.2 Start HDFS HA
- Start the JournalNodes
On hadoop4, hadoop5, and hadoop6, run: hadoop-daemon.sh start journalnode
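It is worth confirming that all three JournalNodes actually came up before continuing; a quick sketch (assuming passwordless ssh from the current node):

# Sketch: confirm a JournalNode process is running on each journal host
for host in hadoop4 hadoop5 hadoop6; do
  echo "== $host =="
  ssh $host jps | grep JournalNode
done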
- Bootstrap the standby NameNode
[hdfs@hadoop2 hadoop-2.7.3]$ hdfs namenode -bootstrapStandby
.........
17/04/19 18:24:17 INFO ipc.Client: Retrying connect to server: hadoop1/172.18.0.11:8020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/04/19 18:24:17 FATAL ha.BootstrapStandby: Unable to fetch namespace information from active NN at hadoop1/172.18.0.11:8020: Call From hadoop2/172.18.0.12 to hadoop1:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
17/04/19 18:24:17 INFO util.ExitUtil: Exiting with status 2
17/04/19 18:24:17 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop2/172.18.0.12
This command copies the metadata directory from the original NameNode to the corresponding directory on this node.
The command failed: it cannot connect to hadoop1:8020. So the order given in the official docs does not work here; the original NameNode must be running before this step. Let's keep going and come back to it.
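A cheap way to spot this before running the bootstrap is to probe the active NameNode's RPC port; a sketch (nc is an assumption here, any port check will do):

# Sketch: is anything listening on nn1's RPC port yet?
nc -z -w 5 hadoop1 8020 && echo "hadoop1:8020 is reachable" || echo "hadoop1:8020 is not reachable - start nn1 first"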
- Initialize the shared edit log
On the original NameNode, run:
[hdfs@hadoop1 namenodedir]$ hdfs namenode -initializeSharedEdits
.....
17/04/19 18:28:24 INFO namenode.EditLogInputStream: Fast-forwarding stream '/opt/hadoop/hadoop-2.7.3/namenodedir/current/edits_0000000000000000001-0000000000000000256' to transaction ID 1
17/04/19 18:28:24 INFO namenode.FSEditLog: Starting log segment at 1
17/04/19 18:28:24 INFO namenode.FSEditLog: Ending log segment 1
17/04/19 18:28:24 INFO namenode.FSEditLog: Number of transactions: 256 Total time for transactions(ms): 63 Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 11
17/04/19 18:28:24 INFO util.ExitUtil: Exiting with status 0
17/04/19 18:28:24 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop1/172.18.0.11
************************************************************
This command copies the local edit log to the JournalNodes.
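If you want to see the result, look inside the directory configured as dfs.journalnode.edits.dir on one of the journal hosts; the edits should appear under a subdirectory named after the nameservice (a sketch, the exact file names will differ):

# Sketch: on hadoop4/5/6, the shared edits live under <dfs.journalnode.edits.dir>/<nameservice>/current
ls /opt/hadoop/hadoop-2.7.3/JNSdatadir/dockercluster/current/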
- Start the original NameNode
$HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode
- Bootstrap the standby NameNode again
The bootstrap failed earlier, so run hdfs namenode -bootstrapStandby again:
=====================================================
Re-format filesystem in Storage Directory /opt/hadoop/hadoop-2.7.3/namenodedir ? (Y or N) y
17/04/19 20:30:38 INFO common.Storage: Storage directory /opt/hadoop/hadoop-2.7.3/namenodedir has been successfully formatted.
17/04/19 20:30:38 WARN common.Util: Path /opt/hadoop/hadoop-2.7.3/namenodedir should be specified as a URI in configuration files. Please update hdfs configuration.
17/04/19 20:30:38 WARN common.Util: Path /opt/hadoop/hadoop-2.7.3/namenodedir should be specified as a URI in configuration files. Please update hdfs configuration.
17/04/19 20:30:39 INFO namenode.TransferFsImage: Opening connection to http://hadoop1:50070/imagetransfer?getimage=1&txid=256&storageInfo=-63:51947955:0:CID-3adccc69-45b5-4b44-81b6-70ab593cc1ed
17/04/19 20:30:39 INFO namenode.TransferFsImage: Image Transfer timeout configured to 60000 milliseconds
17/04/19 20:30:39 INFO namenode.TransferFsImage: Transfer took 0.00s at 500.00 KB/s
17/04/19 20:30:39 INFO namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000256 size 2780 bytes.
17/04/19 20:30:39 INFO util.ExitUtil: Exiting with status 0
17/04/19 20:30:39 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop2/172.18.0.12
************************************************************/
This time it works! Remember to answer Y to the prompt "Re-format filesystem in Storage Directory /opt/hadoop/hadoop-2.7.3/namenodedir ? (Y or N)".
- Start the standby NameNode
$HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode
This time it starts successfully.
So the correct order is: start the JournalNodes → initialize the shared edit log → start the original NameNode → bootstrap the standby → start the standby.
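Putting the commands from this section together in that order gives roughly the following; this is a sketch rather than a turnkey script, and it reuses the hosts and paths from above:

# 1. On hadoop4, hadoop5, and hadoop6: start the JournalNodes
hadoop-daemon.sh start journalnode
# 2. On hadoop1 (the original NameNode): copy the local edit log to the JournalNodes
hdfs namenode -initializeSharedEdits
# 3. On hadoop1: start the original NameNode
$HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode
# 4. On hadoop2: bootstrap the standby (answer Y to the re-format prompt)
hdfs namenode -bootstrapStandby
# 5. On hadoop2: start the standby NameNode
$HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode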
- Check the HA web UI:
Both NameNodes show up as standby!
Let's do a manual failover and make nn2 (the NameNode on hadoop2) active:
[hdfs@hadoop2 hadoop]$ hdfs haadmin -failover --forceactive nn1 nn2
17/04/19 20:51:59 WARN ha.FailoverController: Service is not ready to become active, but forcing: The NameNode is in safemode. The reported blocks 0 needs additional 16 blocks to reach the threshold 0.9990 of total blocks 16. The number of live datanodes 0 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
Failover from nn1 to nn2 successful
This switches nn2 over to be the active NameNode.
To verify that sshfence works correctly, you can fail over back and forth a few times.
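You don't have to rely on the web UI to watch the role change; hdfs haadmin -getServiceState reports each NameNode's current role. A sketch of a couple of round trips:

# Sketch: flip the active role back and forth and check the reported state each time
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
hdfs haadmin -failover nn2 nn1   # make nn1 active again
hdfs haadmin -getServiceState nn1
hdfs haadmin -failover nn1 nn2   # and back to nn2
hdfs haadmin -getServiceState nn2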
4. Configuring automatic HA
4.1 Stop the cluster
On both NameNodes:
$HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop namenode
On all DataNodes:
$HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop datanode
4.2 Modify the configuration files
1. In hdfs-site.xml:
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
2. In core-site.xml:
<property>
  <name>ha.zookeeper.quorum</name>
  <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
</property>
Distribute these files to every node.
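A quick sanity check after distribution is to ask HDFS what value it actually sees for the new keys on a node:

# Sketch: confirm the automatic-failover settings are picked up
hdfs getconf -confKey dfs.ha.automatic-failover.enabled
hdfs getconf -confKey ha.zookeeper.quorum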
4.3 Start HA
1. Initialize ZKFC
Run on either NameNode:
[hdfs@hadoop1 hadoop]$ hdfs zkfc -formatZK
...........
17/04/19 21:09:09 INFO zookeeper.ClientCnxn: Socket connection established to hadoop2/172.18.0.12:2181, initiating session
17/04/19 21:09:09 INFO zookeeper.ClientCnxn: Session establishment complete on server hadoop2/172.18.0.12:2181, sessionid = 0x25b6d24ae630000, negotiated timeout = 5000
17/04/19 21:09:09 INFO ha.ActiveStandbyElector: Session connected.
17/04/19 21:09:10 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/dockercluster in ZK.
17/04/19 21:09:10 INFO zookeeper.ZooKeeper: Session: 0x25b6d24ae630000 closed
17/04/19 21:09:10 INFO zookeeper.ClientCnxn: EventThread shut down
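If you want to double-check, the znode created by -formatZK can be inspected with the ZooKeeper CLI; a sketch (depending on the ZooKeeper version you may need to type the ls command at the interactive prompt instead):

# Sketch: verify the HA parent znode exists; it should list the nameservice, dockercluster
zkCli.sh -server hadoop1:2181 ls /hadoop-ha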
2. Start ZKFC
Run on each NameNode:
$HADOOP_HOME/sbin/hadoop-daemon.sh --script $HADOOP_HOME/bin/hdfs start zkfc
Note: if passwordless SSH is configured, you can simply start the whole cluster with start-dfs.sh.
3. Start the NameNodes
$HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode
4. Check the status in the web UI
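If you prefer the command line, the same state can be read over each NameNode's JMX endpoint; a sketch (the bean name may vary across Hadoop versions):

# Sketch: read each NameNode's HA state via the /jmx servlet on the configured http-address
curl -s 'http://hadoop1:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus'
curl -s 'http://hadoop2:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus'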
4.4 Test automatic failover
On hadoop1, kill the process directly (in this run it is nn1's DFSZKFailoverController that gets killed):
[hdfs@hadoop1 hadoop]$ jps
5588 NameNode
5501 DFSZKFailoverController
5695 Jps
[hdfs@hadoop1 hadoop]$ kill 5501
Take another look at the web UI:
Automatic failover works correctly!
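To finish the test, bring the killed daemon back so the pair is redundant again; a sketch reusing the start commands from above (which one you need depends on whether you killed the NameNode or the ZKFC):

# Sketch: restart whichever daemon was killed on hadoop1, then confirm it rejoins as standby
$HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode   # if the NameNode was killed
$HADOOP_HOME/sbin/hadoop-daemon.sh --script $HADOOP_HOME/bin/hdfs start zkfc                # if the ZKFC was killed
hdfs haadmin -getServiceState nn1   # should report standby once it is back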