I. Environment preparation (reference link)
1. Synchronize the time
[root@CDHnode1 opt]# date
Sat May 6 09:42:03 EDT 2017

(1) Set the timezone locally
Go into /usr/share/zoneinfo/Asia and copy the zoneinfo file over /etc/localtime:

[root@CDHnode2 Asia]# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
cp: overwrite ‘/etc/localtime’?
[root@CDHnode2 Asia]# date
Sat May 6 09:46:04 EDT 2017
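On a systemd-based system such as the CentOS 7 used here (inferred from the systemctl output later in this guide), the same timezone change can be made without copying files by hand. A hedged alternative:

timedatectl set-timezone Asia/Shanghai
# Confirm the local time and timezone now agree
timedatectl status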
(2) Synchronize the time with NTP (the Network Time Protocol); this must be done on all three nodes.

Install ntp and sync once:

[root@CDHnode1 Asia]# yum -y install ntp
[root@CDHnode1 Asia]# ntpdate pool.ntp.org
[root@CDHnode1 Asia]# date
Sat May 6 09:57:09 EDT 2017
[root@CDHnode2 Asia]# date
Sat May 6 09:57:09 EDT 2017
[root@CDHnode3 Asia]# date
Sat May 6 09:57:09 EDT 2017
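A one-shot ntpdate only corrects the clock at that moment. To keep the three nodes from drifting apart again, you would typically also enable the ntpd service that the yum install above provides; a minimal sketch, to be run on each node:

# Start ntpd now and have it come back after reboots
systemctl start ntpd
systemctl enable ntpd
# Check that the daemon is actually tracking upstream servers
ntpq -p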
2. Disable the firewall

[root@CDHnode1 opt]# systemctl stop firewalld
[root@CDHnode1 opt]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.

II. Configure the Hadoop files
Node configuration
CDH download link: see the appendix.
1. The slaves file
CDHnode2
CDHnode3
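Because the scripts on the NameNode reach the hosts listed in slaves over SSH, it is worth confirming passwordless login before going further. A quick hedged check (assumes the SSH keys were exchanged during environment preparation):

for h in CDHnode2 CDHnode3; do ssh root@$h hostname; done
# Each hostname should print without a password prompt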
2. hadoop-env.sh

export JAVA_HOME=/opt/jdk1.8
export HADOOP_LOG_DIR=/home/hadoopcdh/logs
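Both paths above are assumptions about the local layout, so it is worth verifying them once per node; a hedged sketch:

# JAVA_HOME must point at a real JDK
/opt/jdk1.8/bin/java -version
# Creating the log directory up front avoids ownership surprises later
mkdir -p /home/hadoopcdh/logs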
3. hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>shursulei</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.shursulei</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.shursulei.nn1</name>
    <value>CDHnode1:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.shursulei.nn1</name>
    <value>CDHnode1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.shursulei.nn2</name>
    <value>CDHnode2:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.shursulei.nn2</name>
    <value>CDHnode2:50070</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://CDHnode1:8485;CDHnode2:8485;CDHnode3:8485/shursulei</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.shursulei</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/hadoopcdh/journaldata/jn</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>shell(/bin/true)</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>10000</value>
  </property>
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>100</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
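Two notes on the HA block above. First, shell(/bin/true) makes fencing a no-op, which is generally acceptable with quorum-journal shared edits because the JournalNode quorum itself prevents two NameNodes from writing at once. Second, dfs.journalnode.edits.dir is a local path on every JournalNode host, so it helps to create it before the first start; a hedged sketch:

# Run on CDHnode1, CDHnode2 and CDHnode3 (the qjournal quorum above)
mkdir -p /home/hadoopcdh/journaldata/jn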
4. yarn-site.xml

<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>CDHnode1</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
  </property>
</configuration>
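With yarn.log-aggregation-enable set to true, the container logs of finished jobs are collected into HDFS (kept for 604800 seconds, i.e. seven days) and can be read from any node with the yarn CLI. The application ID below is a placeholder:

yarn logs -applicationId application_1493000000000_0001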
5. mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>CDHnode1:9001</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

Note: mapred.job.tracker is an MRv1 (JobTracker) property and is ignored under YARN; mapreduce.framework.name=yarn is the setting that matters here.
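To confirm which value wins after all configuration files are merged, the getconf tool reads the effective client configuration; a hedged check that needs no daemons running:

./bin/hdfs getconf -confKey mapreduce.framework.name
# Expected output: yarn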
6. core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://CDHnode1:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoopcdh/tmp</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hduser.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hduser.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>CDHnode1:2181,CDHnode2:2181,CDHnode3:2181</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hue.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hue.groups</name>
    <value>*</value>
  </property>
</configuration>
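Also worth noting: fs.defaultFS points at a single NameNode (hdfs://CDHnode1:9000) even though hdfs-site.xml defines the HA nameservice shursulei; for clients to fail over transparently, fs.defaultFS is normally set to the nameservice URI hdfs://shursulei instead. Separately, hadoop.tmp.dir must exist and be writable; a hedged sketch:

# Run on every node
mkdir -p /home/hadoopcdh/tmp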
III. Installing zookeeper-3.4.5-cdh5.4.5.tar.gz and configuring its environment variables
Note: keep hadoop-2.6.0-cdh5.4.5.tar.gz and zookeeper-3.4.5-cdh5.4.5.tar.gz at the same CDH release. (The transcripts below were captured with zookeeper-3.4.5-cdh5.3.2, so the paths show that version.)
1. Unpack the archive.
2. Go into the conf directory and rename the sample file to zoo.cfg:

[root@CDHnode1 zookeeper-3.4.5-cdh5.3.2]# cd conf
[root@CDHnode1 conf]# ll
total 12
-rw-rw-r--. 1 root root  535 Feb 24  2015 configuration.xsl
-rw-rw-r--. 1 root root 2693 Feb 24  2015 log4j.properties
-rw-rw-r--. 1 root root  808 Feb 24  2015 zoo_sample.cfg
[root@CDHnode1 conf]# mv zoo_sample.cfg zoo.cfg
[root@CDHnode1 conf]# ll
total 12
-rw-rw-r--. 1 root root  535 Feb 24  2015 configuration.xsl
-rw-rw-r--. 1 root root 2693 Feb 24  2015 log4j.properties
-rw-rw-r--. 1 root root  808 Feb 24  2015 zoo.cfg
3. Edit zoo.cfg
The lines that need to change:

dataDir=/home/hadoopcdh/zookpeer
server.1=CDHnode1:2892:3892
server.2=CDHnode2:2892:3892
server.3=CDHnode3:2892:3892

Note: the dataDir directory must be created by hand.
Analysis: in server.N=host:port1:port2, N is the server ID (it must match the myid file created in the next step), port1 (2892) carries follower-to-leader traffic, and port2 (3892) is used for leader election.
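For reference, after these edits a complete zoo.cfg would look roughly like this (the first four values are the zoo_sample.cfg defaults; clientPort 2181 matches ha.zookeeper.quorum in core-site.xml):

tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/home/hadoopcdh/zookpeer
server.1=CDHnode1:2892:3892
server.2=CDHnode2:2892:3892
server.3=CDHnode3:2892:3892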
4. Create the myid file in the zookpeer data directory

[root@CDHnode1 zookpeer]# vi myid
[root@CDHnode1 zookpeer]# cat myid
1
[root@CDHnode2 zookpeer]# vi myid
[root@CDHnode2 zookpeer]# cat myid
2
[root@CDHnode3 zookpeer]# vi myid
[root@CDHnode3 zookpeer]# cat myid
3
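Instead of vi, the same files can be written non-interactively, which is less error-prone when repeating this on each node; the value must match the server.N ID of that host:

echo 1 > /home/hadoopcdh/zookpeer/myid   # on CDHnode1
echo 2 > /home/hadoopcdh/zookpeer/myid   # on CDHnode2
echo 3 > /home/hadoopcdh/zookpeer/myid   # on CDHnode3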
5. Copy the configured files to the other machines

[root@CDHnode1 soft]# scp zookeeper-3.4.5-cdh5.3.2.tar.gz root@CDHnode2:/home/hadoopcdh/soft
[root@CDHnode1 hadoop]# scp ./* root@CDHnode3:/home/hadoopcdh/soft/hadoop-2.6.0-cdh5.4.5/etc/hadoop
[root@CDHnode1 conf]# scp ./* root@CDHnode2:/home/hadoopcdh/soft/zookeeper-3.4.5-cdh5.3.2/conf
configuration.xsl    100%  535   0.5KB/s   00:00
log4j.properties     100% 2693   2.6KB/s   00:00
zoo.cfg              100%  902   0.9KB/s   00:00
[root@CDHnode1 conf]# scp ./* root@CDHnode3:/home/hadoopcdh/soft/zookeeper-3.4.5-cdh5.3.2/conf
configuration.xsl    100%  535   0.5KB/s   00:00
log4j.properties     100% 2693   2.6KB/s   00:00
zoo.cfg              100%  902   0.9KB/s   00:00

6. Start ZooKeeper on every machine where it is installed (all three nodes):

./bin/zkServer.sh start

Configuring /etc/profile as follows makes ZooKeeper easier to start from any directory:
# zookeeper
export ZOOKEEPER_HOME=/home/hadoopcdh/soft/zookeeper-3.4.5-cdh5.3.2
export PATH=$PATH:$ZOOKEEPER_HOME/bin
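After editing /etc/profile, reload it in the current shell and confirm the script resolves; do this on each node:

source /etc/profile
which zkServer.sh
# Should print the path under $ZOOKEEPER_HOME/bin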
IV. Initialize HDFS
1. Start ZooKeeper

[root@CDHnode3 ~]# zkServer.sh start
JMX enabled by default
Using config: /home/hadoopcdh/soft/zookeeper-3.4.5-cdh5.3.2/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
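Besides the STARTED banner, jps should show a QuorumPeerMain process on every node; a quick check:

jps | grep QuorumPeerMain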
Check the ZooKeeper status on each node (two followers, one leader):

[root@CDHnode1 ~]# zkServer.sh status
JMX enabled by default
Using config: /home/hadoopcdh/soft/zookeeper-3.4.5-cdh5.3.2/bin/../conf/zoo.cfg
Mode: follower
[root@CDHnode2 ~]# zkServer.sh status
JMX enabled by default
Using config: /home/hadoopcdh/soft/zookeeper-3.4.5-cdh5.3.2/bin/../conf/zoo.cfg
Mode: follower
[root@CDHnode3 ~]# zkServer.sh status
JMX enabled by default
Using config: /home/hadoopcdh/soft/zookeeper-3.4.5-cdh5.3.2/bin/../conf/zoo.cfg
Mode: leader

2. Start the JournalNodes (every JournalNode host must be started)

./sbin/hadoop-daemon.sh start journalnode

Do this on CDHnode1, CDHnode2 and CDHnode3.

3. Run on the primary node (CDHnode1):

./bin/hdfs namenode -format
./bin/hdfs zkfc -formatZK
./bin/hdfs namenode

Problem 1: this happens when HA has not been configured in hdfs-site.xml.
Problem 2: the problem above remained unresolved.
Problem 3: the buildSupportsSnappy()Z error; solution link: see the appendix.
Key point: CDH does not ship the native library by default, so you have to compile it yourself: build from hadoop-2.6.0-cdh5.4.5.src.tar.gz, install snappy, and generate the Hadoop native library. Download link: see the appendix.
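The walkthrough stops after formatting. In a standard Hadoop 2.6 HA bring-up, the usual remaining steps (hedged; adapt the paths to this layout) are to copy the formatted metadata to the second NameNode and then start everything:

# On CDHnode2: pull the freshly formatted namespace from nn1
./bin/hdfs namenode -bootstrapStandby

# On CDHnode1: start NameNodes, DataNodes, JournalNodes and ZKFCs
./sbin/start-dfs.sh

# Verify the active/standby split
./bin/hdfs haadmin -getServiceState nn1
./bin/hdfs haadmin -getServiceState nn2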
If the material in this article is not detailed enough, see the references in the appendix.