2. The truth about the jps command
2.1 Where is it located?
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ which jps
/usr/java/jdk1.8.0_45/bin/jps
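So jps ships with the JDK, not with Hadoop, and its standard JDK flags apply; a quick sketch of the most useful ones:
jps -l    # print the full main-class name or the path of the jar
jps -m    # also print the arguments passed to main()
jps -v    # also print the JVM options each process was started with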
2.2 Where is each process's identifier file? In /tmp/hsperfdata_<username that owns the process>
[hadoop@hadoop002 hsperfdata_hadoop]$ pwd
/tmp/hsperfdata_hadoop
[hadoop@hadoop002 hsperfdata_hadoop]$ ll
total 96
-rw------- 1 hadoop hadoop 32768 Feb 16 20:35 1086
-rw------- 1 hadoop hadoop 32768 Feb 16 20:35 1210
-rw------- 1 hadoop hadoop 32768 Feb 16 20:35 1378
[hadoop@hadoop002 hsperfdata_hadoop]$
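Each file name above is simply the pid of a JVM started by that user, and jps reads these files to build its output. A small cross-check sketch, assuming the same directory as above:
for f in /tmp/hsperfdata_hadoop/*; do
    ps -p "$(basename "$f")" -o pid,user,comm    # every file should map to a live java process
done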
2.3 Which results each user can see
The root user can see the jps results of every user.
An ordinary user can only see its own processes.
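A minimal way to see the difference, assuming the daemons were started by the hadoop user and jps is on that user's PATH:
jps                     # run as root: lists the JVMs of every user
su - hadoop -c "jps"    # run as the hadoop user: only that user's own JVMs appear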
2.4 process information unavailable
Telling real from fake: use ps -ef | grep namenode to determine whether the process is actually alive; the jps output below cannot be trusted on its own.
[root@hadoop002 ~]# jps
1520 Jps
1378 -- process information unavailable
1210 -- process information unavailable
1086 -- process information unavailable
[root@hadoop002 ~]#
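The real liveness check is against the process table rather than jps; a sketch (grep -v grep just filters out the grep command itself):
ps -ef | grep namenode | grep -v grep    # non-empty output means the NameNode process really exists
ps -ef | grep datanode | grep -v grep    # same idea for the DataNode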
In a production environment:
the hdfs component usually runs under a dedicated hdfs user (in this setup, the hadoop user);
use the root user, or a user with sudo privileges, to fetch the jps results of all users.
kill: a process may be killed by hand, or killed automatically when Linux decides it is the process consuming the most memory (the OOM killer).
(After kill -9 <pid>, jps still lists the process in a phantom state, because the JVM never got the chance to remove its perf-data file; with a plain kill <pid> the JVM shuts down cleanly, the file is deleted, and the process no longer shows up in jps.)
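A sketch of why the entries go stale (1378 is just the example pid from the listing above):
kill -9 1378                       # SIGKILL: the JVM cannot run its shutdown cleanup
ls /tmp/hsperfdata_hadoop/1378     # the perf-data file is left behind
jps                                # 1378 still shows up, but as "process information unavailable"
# a plain "kill 1378" (SIGTERM) lets the JVM exit cleanly and delete the file,
# so jps stops listing it
Deleting the leftover hsperfdata directory also clears these phantom entries, as the next transcript shows: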
[root@hadoop002 tmp]# rm -rf hsperfdata_hadoop
[root@hadoop002 tmp]#
[root@hadoop002 tmp]# jps
1906 Jps
[root@hadoop002 tmp]#
3. pid files: the files the cluster start and stop scripts depend on
[root@hadoop001 tmp]# pwd
/tmp
[root@hadoop001 tmp]# ll
-rw-rw-r-- 1 hadoop hadoop 5 Feb 16 20:56 hadoop-hadoop-datanode.pid
-rw-rw-r-- 1 hadoop hadoop 5 Feb 16 20:56 hadoop-hadoop-namenode.pid
-rw-rw-r-- 1 hadoop hadoop 5 Feb 16 20:57 hadoop-hadoop-secondarynamenode.pid
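Each pid file simply stores the process id of the corresponding daemon; a quick cross-check sketch:
cat /tmp/hadoop-hadoop-namenode.pid                # prints the NameNode's pid
ps -p "$(cat /tmp/hadoop-hadoop-namenode.pid)"     # should show that NameNode JVM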
Linux periodically cleans up files and directories under /tmp, on roughly a 30-day cycle.
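On CentOS 6-style systems this cleanup is usually driven by the daily tmpwatch cron job, and on CentOS 7 by systemd-tmpfiles; a hedged way to inspect the retention rules (paths and defaults vary by distro):
cat /etc/cron.daily/tmpwatch        # CentOS 6: hours of inactivity before files under /tmp are removed
cat /usr/lib/tmpfiles.d/tmp.conf    # CentOS 7: age-based cleanup rules for /tmp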
[hadoop@hadoop001 hadoop]$ vi hadoop-env.sh
# The directory where pid files are stored. /tmp by default.
# NOTE: this should be set to a directory that can only be written to by
# the user that will run the hadoop daemons. Otherwise there is the
# potential for a symlink attack.
export HADOOP_PID_DIR=${HADOOP_PID_DIR}
export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}
Change the default so the pid files land outside /tmp:
export HADOOP_PID_DIR=/data/tmp
mkdir /data/tmp
chmod -R 777 /data/tmp
To keep the pid files from being deleted out of /tmp, create a new directory /data/tmp, grant it 777 permissions with chmod -R 777 /data/tmp, and point HADOOP_PID_DIR at it; pid files written there will no longer be removed by the /tmp cleanup.
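Putting the steps together, a sketch of the full change (mkdir/chmod may need root; the restart is run as the hadoop user from $HADOOP_HOME with the stock HDFS scripts):
mkdir -p /data/tmp
chmod -R 777 /data/tmp                     # let the hadoop user write its pid files here
# in etc/hadoop/hadoop-env.sh set: export HADOOP_PID_DIR=/data/tmp
sbin/stop-dfs.sh && sbin/start-dfs.sh      # restart HDFS so the new pid dir takes effect
ls /data/tmp/*.pid                         # the pid files now live outside /tmp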