1 HBase Outage Analysis
1.1 Outage Symptom Analysis
- HBase RegionServer processes crash at random (this happens in almost every incident; only the affected RegionServer node differs)
- The HMaster process crashes at random
- The active or standby NameNode crashes at random
- ZooKeeper nodes crash at random
- ZooKeeper connection timeouts
- Excessively long JVM GC pauses
- DataNode write timeouts
1.2 Configuration and Cluster Changes Made to Resolve the Problem
The problem was addressed from the following angles:
1. Tune HBase's ZK connection timeout parameters: the default ZK timeout is too short, so a single full GC can easily push the ZK session past it;
2. Tune HBase's JVM GC parameters: GC tuning shortens individual GC pauses and reduces the frequency of full GCs;
3. Tune the ZK server: the timeout requested by a ZK client (such as HBase) must fall within the range allowed by the ZK server, otherwise the client-side setting has no effect (see the sketch after this list);
4. Tune the HDFS read/write parameters;
5. Adjust the per-node resource parameters in YARN: YARN must allocate resources according to the real node hardware; the previous configuration granted each node far more resources than the virtual machines actually have;
6. Optimize the cluster layout: NameNode, NodeManager, DataNode and RegionServer shared the same nodes, which puts heavy communication and memory pressure on these critical hub nodes and makes them prone to failure under heavy computation. The correct approach is to separate the hub roles (NameNode, ResourceManager, HMaster) from the data + compute nodes.
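To illustrate point 3, ZooKeeper clamps every requested session timeout to a server-side window; the sketch below uses ZooKeeper's shipped defaults purely for illustration and is not part of the actual cluster configuration.
# zoo.cfg (illustrative): the server only honours client session timeouts
# within [minSessionTimeout, maxSessionTimeout]. If these keys are unset they
# default to 2 * tickTime and 20 * tickTime respectively, so with the stock
# tickTime=2000 the ceiling is 40000 ms and an HBase zookeeper.session.timeout
# of 300000 ms would silently be cut down to 40000 ms unless maxSessionTimeout
# is raised as in section 1.2.3.
tickTime=2000
maxSessionTimeout=300000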
1.2.1 HBase hbase-site.xml configuration changes
<property>
<name>zookeeper.session.timeout</name>
<value>300000</value>
<description>ZooKeeper session timeout in milliseconds</description>
</property>
<property>
<name>hbase.zookeeper.property.tickTime</name>
<value>60000</value>
</property>
<property>
<name>hbase.hregion.memstore.mslab.enabled</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.property.maxClientCnxns</name>
<value>10000</value>
</property>
<property>
<name>hbase.client.scanner.timeout.period</name>
<value>240000</value>
</property>
<property>
<name>hbase.rpc.timeout</name>
<value>280000</value>
</property>
<property>
<name>hbase.hregion.max.filesize</name>
<value>107374182400</value>
</property>
<property>
<name>hbase.regionserver.handler.count</name>
<value>100</value>
</property>
<property>
<name>dfs.client.socket-timeout</name>
<value>300000</value>
<description>Raise the DFS client socket timeout from the default 60 seconds to 300 seconds.</description>
</property>
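The values above keep the timeouts ordered: scanner timeout (240 s) < RPC timeout (280 s) < ZK session timeout (300 s) = DFS client socket timeout (300 s). To confirm that a client really picks these values up from hbase-site.xml, the HBaseConfTool utility shipped with HBase can print the effective value of a key (a sketch; $HBASE_HOME is the path configured in section 1.2.2):
$HBASE_HOME/bin/hbase org.apache.hadoop.hbase.util.HBaseConfTool zookeeper.session.timeout
$HBASE_HOME/bin/hbase org.apache.hadoop.hbase.util.HBaseConfTool hbase.rpc.timeout
$HBASE_HOME/bin/hbase org.apache.hadoop.hbase.util.HBaseConfTool hbase.client.scanner.timeout.period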
1.2.2 HBase hbase-env.sh configuration changes
export HBASE_HEAPSIZE=2048M
export HBASE_HOME=/home/hadoop2015/Hbase/hbase-0.98.6
export HBASE_LOG_DIR=${HBASE_HOME}/logs
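# JVM options for the HBase daemons: 1 GB heap (Xms = Xmx), young gen = 1/3 of the heap (NewRatio=2), 128 MB PermGen,
# ParNew + CMS with the CMS cycle triggered at 75% old-gen occupancy, verbose GC logged to logs/hbasegc.log, heap dump to logs/ on OOM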
export HBASE_OPTS="-server -Xms1g -Xmx1g -XX:NewRatio=2 -XX:PermSize=128m -XX:MaxPermSize=128m -verbose:gc -Xloggc:$HBASE_HOME/logs/hbasegc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=$HBASE_HOME/logs"
Note: for these options to take effect, HBASE_HOME must be set, because the settings above reference the HBASE_HOME variable.
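Since HBASE_HEAPSIZE and the explicit -Xms1g/-Xmx1g in HBASE_OPTS both touch the heap, it is worth checking what a running RegionServer actually ended up with; a minimal sketch using the standard JDK tools (<pid_printed_by_jps> is a placeholder for the process id reported by jps):
jps | grep HRegionServer
jinfo -flags <pid_printed_by_jps>
tail -f $HBASE_HOME/logs/hbasegc.log
Once the daemon is under load, the last command should show GC records accumulating in the configured log.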
1.2.3 ZooKeeper zoo.cfg configuration changes
syncLimit=10
maxSessionTimeout=300000
dataDir=/home/fulong/Zookeeper/CDH/zookdata
clientPort=2181
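After restarting ZK with these settings, its health and the current client sessions can be checked with the built-in four-letter-word commands (localhost stands for any ZK node; the port is the clientPort above):
echo ruok | nc localhost 2181
echo stat | nc localhost 2181
The first should answer imok; the second lists the client connections, latency statistics and whether the node is leader or follower.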
1.2.4 ZooKeeper log4j.properties configuration changes
The logging configuration was adjusted to make the ZK logs easier to trace, since ZK's default log output is inconvenient to inspect.
zookeeper.root.logger=INFO,CONSOLE,ROLLINGFILE
zookeeper.console.threshold=INFO
zookeeper.log.dir=/home/fulong/Zookeeper/CDH/zooklogs
zookeeper.log.file=zookeeper.log
zookeeper.log.threshold=DEBUG
zookeeper.tracelog.dir=/home/fulong/Zookeeper/CDH/zooklogs
zookeeper.tracelog.file=zookeeper_trace.log
log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE.Threshold=${zookeeper.log.threshold}
log4j.appender.ROLLINGFILE.File=${zookeeper.log.dir}/${zookeeper.log.file}
# Max log file size of 50MB
log4j.appender.ROLLINGFILE.MaxFileSize=50MB
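One caveat, stated here as an assumption about the stock start scripts rather than something verified on this cluster: zkServer.sh normally passes -Dzookeeper.root.logger and -Dzookeeper.log.dir on the command line, and those system properties override the defaults in log4j.properties, so the ROLLINGFILE appender only becomes active if the start environment is adjusted as well, e.g.:
export ZOO_LOG_DIR=/home/fulong/Zookeeper/CDH/zooklogs
export ZOO_LOG4J_PROP="INFO,CONSOLE,ROLLINGFILE"
(ZOO_LOG_DIR and ZOO_LOG4J_PROP are the environment variables read by ZooKeeper's zkEnv.sh.)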
1.2.5 HDFS hdfs-site.xml configuration changes
<property>
<name>dfs.datanode.socket.write.timeout</name>
<value>600000</value>
</property>
<property>
<name>dfs.client.socket-timeout</name>
<value>300000</value>
</property>
<property>
<name>dfs.datanode.max.xcievers</name>
<value>4096</value>
</property>
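For reference, dfs.datanode.max.xcievers is the historical (misspelled) name of this limit; on Hadoop 2.x the same setting is also exposed under a newer key, so an equivalent entry would look as follows (shown only for comparison, the legacy name above still works):
<property>
<name>dfs.datanode.max.transfer.threads</name>
<value>4096</value>
</property>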
1.2.6 YARN yarn-site.xml configuration changes
<property>
<name>yarn.scheduler.fair.user-as-default-queue</name>
<value>false</value>
</property>
<property>
<name>yarn.resourcemanager.zk-timeout-ms</name>
<value>120000</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>3072</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>128</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>3072</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>1</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-vcores</name>
<value>1</value>
</property>
<property>
<name>yarn.nodemanager.container-monitor.interval-ms</name>
<value>300000</value>
</property>
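A quick sanity check of how these numbers combine on one of the small VMs (simple arithmetic, assuming each node really has about 3 GB of memory and one core to spare for YARN):
- yarn.nodemanager.resource.memory-mb = 3072: each NodeManager offers at most 3 GB to containers;
- yarn.scheduler.maximum-allocation-mb = 3072: a single container may claim the entire node;
- yarn.scheduler.minimum-allocation-mb = 128: at most 3072 / 128 = 24 minimum-size containers fit on a node;
- yarn.nodemanager.resource.cpu-vcores = 1 and yarn.scheduler.maximum-allocation-vcores = 1: each node advertises a single vcore and no container may request more than one.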
1.2.7 Cluster layout changes
The nodes hosting NN Active, NN Standby, RM Active and RM Standby no longer run a DN, NM or RS.
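A purely hypothetical layout that satisfies this rule (the hostnames master1/master2/worker1..workerN are placeholders, not the real cluster):
master1: NameNode (active), ResourceManager (standby), HMaster
master2: NameNode (standby), ResourceManager (active), HMaster (backup)
worker1 ... workerN: DataNode, NodeManager, RegionServer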