1. Download the installation packages (jdk-7u67-linux-x64.tar.gz and hadoop-2.6.1.tar.gz): obtain them however you like.
2. Install the JDK first.
Extract the archive:

    tar zxvf jdk-7u67-linux-x64.tar.gz

Configure the JDK environment variables on master, slave1, and slave2 by editing ~/.bashrc (a sketch of the typical entries follows this step):

    vim ~/.bashrc

Copy the JDK to the slave nodes:

    scp -r /usr/local/src/jdk1.7.0_67 root@slave1:/usr/local/src/jdk1.7.0_67
    scp -r /usr/local/src/jdk1.7.0_67 root@slave2:/usr/local/src/jdk1.7.0_67
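The post does not show the ~/.bashrc entries themselves. A minimal sketch of the typical lines, assuming the JDK was extracted to /usr/local/src/jdk1.7.0_67 as above:

    # JDK environment variables (append to ~/.bashrc on all three nodes)
    export JAVA_HOME=/usr/local/src/jdk1.7.0_67
    export PATH=$PATH:$JAVA_HOME/bin

After saving, reload with source ~/.bashrc and verify with java -version.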
3. Install Hadoop.
Extract the archive to the target directory:

    tar zxvf hadoop-2.6.1.tar.gz

Edit the Hadoop configuration files on master:

    cd hadoop-2.6.1/etc/hadoop

vim hadoop-env.sh:

    export JAVA_HOME=/usr/local/src/jdk1.7.0_67

vim yarn-env.sh:

    export JAVA_HOME=/usr/local/src/jdk1.7.0_67

vim slaves:

    slave1
    slave2

vim core-site.xml:

    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://192.168.142.10:9000</value>
        </property>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>file:/usr/local/src/hadoop-2.6.1/tmp</value>
        </property>
    </configuration>

vim hdfs-site.xml:

    <configuration>
        <property>
            <name>dfs.namenode.secondary.http-address</name>
            <value>master:9001</value>
        </property>
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>file:/usr/local/src/hadoop-2.6.1/dfs/name</value>
        </property>
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>file:/usr/local/src/hadoop-2.6.1/dfs/data</value>
        </property>
        <property>
            <name>dfs.replication</name>
            <value>3</value>
        </property>
    </configuration>

vim mapred-site.xml (a fresh 2.6.1 tarball only ships mapred-site.xml.template, so copy it first: cp mapred-site.xml.template mapred-site.xml):

    <configuration>
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
    </configuration>

vim yarn-site.xml:

    <configuration>
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
        <property>
            <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
            <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <property>
            <name>yarn.resourcemanager.address</name>
            <value>master:8032</value>
        </property>
        <property>
            <name>yarn.resourcemanager.scheduler.address</name>
            <value>master:8030</value>
        </property>
        <property>
            <name>yarn.resourcemanager.resource-tracker.address</name>
            <value>master:8035</value>
        </property>
        <property>
            <name>yarn.resourcemanager.admin.address</name>
            <value>master:8033</value>
        </property>
        <property>
            <name>yarn.resourcemanager.webapp.address</name>
            <value>master:8088</value>
        </property>
    </configuration>

Create the temporary and HDFS data directories:

    mkdir /usr/local/src/hadoop-2.6.1/tmp
    mkdir -p /usr/local/src/hadoop-2.6.1/dfs/name
    mkdir -p /usr/local/src/hadoop-2.6.1/dfs/data

Configure the environment variables on master, slave1, and slave2 (vim ~/.bashrc):

    export HADOOP_HOME=/usr/local/src/hadoop-2.6.1
    export PATH=$PATH:$HADOOP_HOME/bin
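Before copying anything to the slaves, it is worth a quick sanity check. A small sketch (xmllint comes from libxml2 and may need installing separately):

    # reload the environment and confirm the hadoop binary resolves
    source ~/.bashrc
    hadoop version        # should report Hadoop 2.6.1

    # confirm each edited XML file is well-formed
    cd /usr/local/src/hadoop-2.6.1/etc/hadoop
    for f in core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do
        xmllint --noout $f && echo "$f OK"
    done

Two things to note: fs.defaultFS uses the IP 192.168.142.10 while the YARN addresses use the hostname master, so master presumably maps to that IP in /etc/hosts on every node; and dfs.replication is set to 3 even though this cluster has only two DataNodes, so HDFS will report blocks as under-replicated (a value of 2 fits this topology better).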
Copy the installation to the slave nodes:

    scp -r /usr/local/src/hadoop-2.6.1 root@slave1:/usr/local/src/hadoop-2.6.1
    scp -r /usr/local/src/hadoop-2.6.1 root@slave2:/usr/local/src/hadoop-2.6.1

Start the cluster on master. First format the NameNode:

    hadoop namenode -format

This may only be run once. If something went wrong with the installation and you need to format again, first delete the dfs directory on all three nodes:

    rm -rf /usr/local/src/hadoop-2.6.1/dfs

Then start the Hadoop cluster:

    ./sbin/start-all.sh
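Note that start-all.sh is deprecated in Hadoop 2.x (it just chains the HDFS and YARN scripts), and hadoop namenode -format is likewise superseded by the hdfs command. The equivalent non-deprecated invocation, run from the hadoop-2.6.1 directory on master, would be:

    hdfs namenode -format     # replaces: hadoop namenode -format
    ./sbin/start-dfs.sh       # starts NameNode, SecondaryNameNode, DataNodes
    ./sbin/start-yarn.sh      # starts ResourceManager and NodeManagers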
Check the processes on master:

    jps
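With the configuration above, which places the SecondaryNameNode on master, jps on master should list the following daemons (plus Jps itself; PIDs vary):

    NameNode
    SecondaryNameNode
    ResourceManager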
Check the processes on slave1 and slave2:

    jps
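On each slave, jps should show the two worker daemons plus Jps itself:

    DataNode
    NodeManager

As a further check, the YARN web UI configured above is at http://master:8088, and the NameNode web UI defaults to http://master:50070 in Hadoop 2.6. A quick HDFS smoke test (the /tmp/smoke path is just an example):

    hdfs dfs -mkdir -p /tmp/smoke    # create a test directory
    hdfs dfs -ls /                   # should list /tmp without errors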
Stop the cluster:

    ./sbin/stop-all.sh
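Like start-all.sh, stop-all.sh just wraps the per-subsystem scripts; the equivalent non-deprecated shutdown is:

    ./sbin/stop-yarn.sh
    ./sbin/stop-dfs.sh

Running jps again afterwards should show only the Jps process on every node.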
That is the entire Hadoop cluster setup process!