1: vi /mydata/hadoop/etc/hadoop/core-site.xml
<property>
<name>hadoop.tmp.dir</name>
<value>/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://hadoopmaster:9000</value>
</property>
Remember to create the corresponding directory (on all three machines):
mkdir -p /hadoop/tmp
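The same directory is needed on all three machines. If passwordless SSH between the nodes is already set up (start-all.sh needs it later anyway), a loop over the three hostnames used in this guide saves repeating the command by hand; this is just a sketch under that assumption:
for h in hadoopmaster aeolus-vm1 aeolus-vm2; do
ssh $h "mkdir -p /hadoop/tmp"   # assumes passwordless SSH, including to the local host
done
The same trick works for the mkdir steps in the later sections.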
2: vi /mydata/hadoop/etc/hadoop/hadoop-env.sh
Change ${JAVA_HOME} in the line export JAVA_HOME=${JAVA_HOME} to the actual JDK installation path.
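For example, if the JDK is installed under /usr/java/jdk1.8.0 (a hypothetical path; substitute your own), the line becomes:
export JAVA_HOME=/usr/java/jdk1.8.0   # replace with your actual JDK install path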
3: vi /mydata/hadoop/etc/hadoop/hdfs-site.xml
<property>
<name>dfs.name.dir</name>
<value>/hadoop/dfs/name</value>
<description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
</property>
<property>
<name>dfs.data.dir</name>
<value>/hadoop/dfs/data</value>
<description>Comma-separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
<description>Disable HDFS permission checking.</description>
</property>
Remember to create the corresponding directories (on all three machines):
mkdir -p /hadoop/dfs/name
mkdir -p /hadoop/dfs/data
4: cp /mydata/hadoop/etc/hadoop/mapred-site.xml.template /mydata/hadoop/etc/hadoop/mapred-site.xml
vi /mydata/hadoop/etc/hadoop/mapred-site.xml
<property>
<name>mapred.job.tracker</name>
<value>hadoopmaster:9001</value>
</property>
<property>
<name>mapred.local.dir</name>
<value>/hadoop/var</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
Remember to create the corresponding directory (on all three machines):
mkdir -p /hadoop/var
5: vi /mydata/hadoop/etc/hadoop/slaves
Delete the default localhost and add the hostnames of the two slave machines:
aeolus-vm1
aeolus-vm2
6: vi /mydata/hadoop/etc/hadoop/yarn-site.xml
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoopmaster</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
****** Note: in the preceding steps, the slaves only need the directories created ******
7: scp the hadoop directory to each slave (slave below is a placeholder for the slave's hostname)
scp -r /mydata/hadoop slave:/mydata/
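The slaves file above lists aeolus-vm1 and aeolus-vm2, so the copy has to be run once per slave host, e.g.:
scp -r /mydata/hadoop aeolus-vm1:/mydata/
scp -r /mydata/hadoop aeolus-vm2:/mydata/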
8: Format the NameNode on the master, then start!
cd /mydata/hadoop/bin
./hadoop namenode -format
At this point, several new files will appear under /hadoop/dfs/name/current/ (for Hadoop 2.x these include fsimage, seen_txid, and VERSION).
cd /mydata/hadoop/sbin
Finally, run ./start-all.sh
Confirm with jps!
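With the configuration above, jps should show roughly the following daemons on each node (a sketch; the exact placement of the SecondaryNameNode may vary):
# on hadoopmaster
NameNode
SecondaryNameNode
ResourceManager
# on aeolus-vm1 and aeolus-vm2
DataNode
NodeManager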
9: Web UIs
Open the NameNode's address on port 50070 (HDFS management UI).
Open the NameNode's IP on port 8088 (the YARN ResourceManager / MapReduce management UI).
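For example, with the hostname used in this guide (assuming it resolves from the browser's machine; otherwise substitute the NameNode's IP):
http://hadoopmaster:50070
http://hadoopmaster:8088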