With the earlier prep work done (installing the JDK and so on), we now set up passwordless SSH login. I am using the PieTTY terminal here, but you can just as well work directly in Linux.
ssh (secure shell): run ssh-keygen -t rsa to generate a key pair, which lands in the ~/.ssh directory.
Press Enter at every prompt to accept the defaults.
Copy the public key into a file named authorized_keys.
Log in to confirm it works, then log out.
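The same key setup can be done non-interactively (a sketch; -N '' supplies the empty passphrase you would otherwise confirm by pressing Enter):

```shell
# Generate an RSA key pair under ~/.ssh; -N '' (empty passphrase) and
# -f (key path) replace the prompts you would otherwise Enter through.
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# Append the public key to authorized_keys to allow passwordless login.
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# Verify: this should log in without a password prompt, then return.
ssh localhost 'echo login ok'
```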
Next, upload the JDK and Hadoop archives. I use WinSCP here (an FTP-like upload tool); some virtual machines also let you drag and drop files straight from the host. I put them in /root/Downloads.
Then copy them into /usr/local: run cd /usr/local, clear the directory with rm -rf * (careful: this deletes everything under /usr/local), and then copy with cp /root/Downloads/* .
To unpack the JDK, first make the installer executable with chmod u+x jdk-6u24-linux-i586.bin, then run ./jdk-6u24-linux-i586.bin. (tar -xvf only applies if you downloaded the JDK as a .tar archive instead; it will not unpack the self-extracting .bin.)
Next edit the environment configuration with vi /etc/profile, then run source /etc/profile to apply it.
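For reference, the lines added to /etc/profile typically look like this (the exact paths are my assumption, matching the install locations used in this walkthrough):

```shell
# Added to /etc/profile -- apply with: source /etc/profile
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH
```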
Unpack Hadoop: tar -zxvf hadoop-1.1.2.tar.gz
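Note that both archives unpack into version-numbered directories, while the configuration below uses the bare paths /usr/local/jdk and /usr/local/hadoop. Renaming them (my assumption, inferred from the JAVA_HOME value set in hadoop-env.sh below) keeps everything consistent:

```shell
cd /usr/local
mv jdk1.6.0_24 jdk        # so JAVA_HOME=/usr/local/jdk resolves
mv hadoop-1.1.2 hadoop    # so hadoop/conf and /usr/local/hadoop/tmp resolve
```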
Modify the configuration files under hadoop/conf:
hadoop-env.sh
# Set Hadoop-specific environment variables here.
# The only required environment variable is JAVA_HOME. All others are
# optional. When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.
# The java implementation to use. Required.
export JAVA_HOME=/usr/local/jdk
In core-site.xml, adjust the values to match your own machine:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://centos:9000</value>
        <description>change your own hostname</description>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
        <description>change your own position</description>
    </property>
</configuration>
In hdfs-site.xml: this is a pseudo-distributed setup, so set the replication factor to 1 (dfs.permissions is disabled here to avoid permission errors while testing):
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>
In mapred-site.xml, change:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>centos:9001</value>
        <description>change your own hostname</description>
    </property>
</configuration>
Format the NameNode: hadoop namenode -format
Start Hadoop by running start-all.sh.
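If startup succeeded, jps (a tool that ships with the JDK) should list the five daemons of a pseudo-distributed Hadoop 1.x cluster. A quick check sketch (the daemon names are the standard Hadoop 1.x ones):

```shell
# Each of the five Hadoop 1.x daemons should appear in jps output.
for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
    if jps | grep -qw "$d"; then
        echo "$d is running"
    else
        echo "$d is MISSING"
    fi
done
```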
A warning pops up during startup; let's trace it. The entry point is start-all.sh, so stop the daemons with stop-all.sh and go read the source.
cd hadoop/bin, then more start-all.sh
The if statement there takes the else branch, so the next stop is hadoop-config.sh.
That is where the warning is printed. To defeat the condition, edit the configuration with vi /etc/profile: the variable it checks only needs to be non-empty.
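Concretely, the check in bin/hadoop-config.sh (Hadoop 1.x) prints the warning only when HADOOP_HOME is set and HADOOP_HOME_WARN_SUPPRESS is empty, so one extra export in /etc/profile silences it; any non-empty value works:

```shell
# Sketch of the relevant test in hadoop-config.sh (1.x):
#   if [ "$HADOOP_HOME_WARN_SUPPRESS" = "" ] && [ "$HADOOP_HOME" != "" ]; then
#     echo "Warning: \$HADOOP_HOME is deprecated." 1>&2
#   fi
# Add to /etc/profile, then apply with: source /etc/profile
export HADOOP_HOME_WARN_SUPPRESS=1
```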