groupadd hadoop
Add the ubuntu user to the hadoop group: usermod -g hadoop ubuntu
Check which group the current user belongs to: log out (exit), log back in as ubuntu, and run groups — it should print hadoop.
Edit the hosts file: sudo nano /etc/hosts and change it to the following:
127.0.0.1    localhost
127.0.1.1    ubuntu
# adjust the following to your machines' actual IPs
192.168.0.7    master
192.168.0.10   master2
192.168.0.4    slave1
192.168.0.9    slave2
192.168.0.3    slave3
192.168.0.8    rm

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

3. Passwordless SSH login
(1) master must be able to log in to every machine without a password.
(2) master2 must be able to log in to master without a password.
(3) rm must be able to log in to slave1, slave2 and slave3 without a password.
(4) How to set it up, using master logging in to master2 as the example.
On master, if ssh is not installed yet, run:
sudo apt-get install openssh-server
Once that succeeds, generate a key pair (just press Enter at every prompt):
ssh-keygen -t rsa
cd ~/.ssh
ls
authorized_keys id_rsa id_rsa.pub
id_rsa is the private key and stays on this machine untouched; id_rsa.pub is the public key that has to be handed to master2.
Make a named copy of id_rsa.pub:
cp id_rsa.pub id_rsa_m.pub
First authorize it for this machine itself:
cat id_rsa_m.pub >> authorized_keys
Then send it to master2:
scp id_rsa_m.pub ubuntu@master2:~/.ssh
Switch to master2 and run:
cd ~/.ssh
cat id_rsa_m.pub >> authorized_keys
Switch back to master and run ssh master2
You should now be logged in to master2 successfully; exit returns to master:
ubuntu@master2: exit
ubuntu@master:
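The same key exchange has to be repeated for every pair listed in (1)–(3). If the openssh-client tool ssh-copy-id is available, a loop like the following sketch (run on master; host names taken from /etc/hosts above) replaces the manual cp/scp/cat steps by appending the local public key to each remote authorized_keys. On master2 and rm, change the host list to the machines they need to reach.

# convenience sketch, assuming ssh-keygen has already been run on this machine
for host in master2 rm slave1 slave2 slave3; do
    ssh-copy-id ubuntu@"$host"    # asks for the password once, then appends id_rsa.pub to the remote authorized_keys
done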
4. Install the JDK
JDK version: jdk-8u91-linux-x64.tar.gz (117.118 MB)
Copy jdk-8u91-linux-x64.tar.gz to /tmp on master and extract it into /home/ubuntu/solf; all of the Hadoop-related software will be installed into this directory.
cd /tmp
tar -zxvf jdk-8u91-linux-x64.tar.gz -C /home/ubuntu/solf
cd ~/solf
ls
jdk1.8.0_91
Add the environment variables by appending the following lines to the end of /etc/profile:
export JAVA_HOME=/home/ubuntu/solf/jdk1.8.0_91
export PATH=$PATH:$JAVA_HOME
export PATH=$PATH:$JAVA_HOME/bin
export PATH=$PATH:$JAVA_HOME/jre
Then reload it:
source /etc/profile
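If you prefer appending these lines from the shell instead of editing the file in nano, a minimal sketch (same path as above; strictly speaking only the bin directory needs to be on PATH):

cat <<'EOF' | sudo tee -a /etc/profile
export JAVA_HOME=/home/ubuntu/solf/jdk1.8.0_91
export PATH=$PATH:$JAVA_HOME/bin
EOF
source /etc/profile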
ubuntu@master:~/solf$ java -version
java version "1.8.0_91"
Java(TM) SE Runtime Environment (build 1.8.0_91-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.91-b14, mixed mode)
This shows the JDK is installed correctly.

5. Install Hadoop
Hadoop version: hadoop-2.7.2.tar.gz (207.077 MB)
Extract hadoop-2.7.2.tar.gz into /home/ubuntu/solf:
tar -zxvf hadoop-2.7.2.tar.gz -C /home/ubuntu/solf
ls
hadoop-2.7.2 jdk1.8.0_91
Add the environment variables by appending the following lines to the end of /etc/profile:
export HADOOP_INSTALL=/home/ubuntu/solf/hadoop-2.7.2
export PATH=$PATH:$HADOOP_INSTALL
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
source /etc/profile
ubuntu@master:~/solf$ hadoop version
Hadoop 2.7.2
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41
Compiled by jenkins on 2016-01-26T00:08Z
Compiled with protoc 2.5.0
From source with checksum d0fda26633fa762bff87ec759ebe689c
This command was run using /home/ubuntu/solf/hadoop-2.7.2/share/hadoop/common/hadoop-common-2.7.2.jar
This shows Hadoop is installed correctly.

6. Configure the Hadoop environment
Hadoop can run in three modes: standalone, pseudo-distributed, and fully distributed (cluster). To make switching between modes easy, set the configuration directories up through a symlink.
cd ~/solf
For convenience when cd-ing later, link hadoop to hadoop-2.7.2:
ln -s hadoop-2.7.2 hadoop
cd ~/solf/hadoop/etc/
ls
hadoop
At this point there is only the single hadoop directory, which holds Hadoop's default configuration files. One directory is not enough if we want to switch modes, so make one copy per mode:
cp -r hadoop hadoop-full      # cluster mode
cp -r hadoop hadoop-local     # standalone mode
cp -r hadoop hadoop-presudo   # pseudo-distributed mode
rm -r hadoop                  # remove the original directory
To use cluster mode (the other modes work the same way):
ln -s hadoop-full hadoop
ls -l
lrwxrwxrwx 1 ubuntu ubuntu  11 Aug 3 07:21 hadoop -> hadoop-full
drwxr-xr-x 2 ubuntu hadoop 4096 Aug 1 09:55 hadoop-full
drwxr-xr-x 2 ubuntu hadoop 4096 Aug 1 09:55 hadoop-local
drwxr-xr-x 2 ubuntu hadoop 4096 Aug 1 09:55 hadoop-presudo
The default configuration directory now points to hadoop-full.
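Switching to another mode later is then just a matter of re-pointing that symlink; a small helper sketch (the function name switch_hadoop_conf is made up, the directory names are the ones created above):

# hypothetical helper: point etc/hadoop at one of the prepared configuration sets
switch_hadoop_conf() {
    cd ~/solf/hadoop/etc || return 1
    rm -f hadoop                  # removes only the symlink, not the config directories
    ln -s "hadoop-$1" hadoop
}
# usage: switch_hadoop_conf full    (or: local / presudo)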
Now configure the files under hadoop-full in detail:
(1) hadoop-env.sh
Only one line needs to change: set JAVA_HOME to the same path used in /etc/profile, preferably as an absolute path.
export JAVA_HOME=/home/ubuntu/solf/jdk1.8.0_91
(2) core-site.xml
<!-- Set the HDFS nameservice to ns1; ns1 contains two NameNodes so that if one goes down the other can take over -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://ns1</value>
</property>
<!-- Directory for temporary files; the default is /tmp, which is wiped on reboot -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/ubuntu/solf/hadoop/tmp</value>
</property>
<!-- ZooKeeper quorum; ZooKeeper will be installed on slave1, slave2 and slave3. The number of ZooKeeper nodes must be an odd number >= 3 -->
<property>
  <name>ha.zookeeper.quorum</name>
  <value>slave1:2181,slave2:2181,slave3:2181</value>
</property>
(3) hdfs-site.xml
<!-- The HDFS nameservice ns1; must match core-site.xml -->
<property>
  <name>dfs.nameservices</name>
  <value>ns1</value>
</property>
<!-- ns1 has two NameNodes: nn1 and nn2 -->
<property>
  <name>dfs.ha.namenodes.ns1</name>
  <value>nn1,nn2</value>
</property>
<!-- RPC address of nn1 -->
<property>
  <name>dfs.namenode.rpc-address.ns1.nn1</name>
  <value>master:9000</value>
</property>
<!-- HTTP address of nn1 -->
<property>
  <name>dfs.namenode.http-address.ns1.nn1</name>
  <value>master:50070</value>
</property>
<!-- RPC address of nn2 -->
<property>
  <name>dfs.namenode.rpc-address.ns1.nn2</name>
  <value>master2:9000</value>
</property>
<!-- HTTP address of nn2 -->
<property>
  <name>dfs.namenode.http-address.ns1.nn2</name>
  <value>master2:50070</value>
</property>
<!-- Where the NameNode edits are stored on the JournalNodes -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://slave1:8485;slave2:8485;slave3:8485/ns1</value>
</property>
<!-- Where each JournalNode keeps its data on local disk -->
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/home/ubuntu/solf/hadoop-2.7.2/journal</value>
</property>
<!-- Enable automatic NameNode failover -->
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<!-- Failover proxy provider used by clients -->
<property>
  <name>dfs.client.failover.proxy.provider.ns1</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing methods; multiple methods are separated by newlines, one per line -->
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>
    sshfence
    shell(/bin/true)
  </value>
</property>
<!-- sshfence relies on passwordless SSH, so point it at the ubuntu user's private key -->
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/ubuntu/.ssh/id_rsa</value>
</property>
<!-- Timeout for the sshfence method -->
<property>
  <name>dfs.ha.fencing.ssh.connect-timeout</name>
  <value>30000</value>
</property>
<!-- Avoid permission errors when reading/writing HDFS remotely -->
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
<!-- DataNodes send a heartbeat every 3 seconds -->
<property>
  <name>dfs.heartbeat.interval</name>
  <value>3</value>
</property>
<!-- Interval of the heartbeat recheck -->
<property>
  <name>dfs.namenode.heartbeat.recheck-interval</name>
  <value>35000</value>
</property>
(4) mapred-site.xml
The directory does not contain mapred-site.xml by default, so copy it first:
cp mapred-site.xml.template mapred-site.xml
Then edit it as follows:
<!-- Run MapReduce on the YARN framework -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
(5) yarn-site.xml
<!-- ResourceManager host -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>rm</value>
</property>
<!-- Auxiliary service the NodeManagers load at startup: the shuffle server -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
(6) slaves
Lists the hostnames of the DataNodes:
slave1
slave2
slave3

7. Send the JDK and Hadoop environment to all of the machines
cd ~
scp -r solf ubuntu@master2:/home/ubuntu
scp -r solf ubuntu@rm:/home/ubuntu
scp -r solf ubuntu@slave1:/home/ubuntu
scp -r solf ubuntu@slave2:/home/ubuntu
scp -r solf ubuntu@slave3:/home/ubuntu
cd /etc
su (switch to the root user), then as root@master:/etc run:
scp profile root@master2:/etc
scp profile root@rm:/etc
scp profile root@slave1:/etc
scp profile root@slave2:/etc
scp profile root@slave3:/etc
Then log in to each machine and run:
source /etc/profile
java -version
hadoop version
If both print the expected versions, the JDK and Hadoop environments on every machine are ready.
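Instead of logging in to each machine by hand, this check can be scripted from master; a convenience sketch that relies on the passwordless SSH set up in step 3:

for host in master2 rm slave1 slave2 slave3; do
    echo "== $host =="
    ssh ubuntu@"$host" 'source /etc/profile && java -version && hadoop version | head -n 1'
done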
8. Install and configure ZooKeeper
ZooKeeper version: zookeeper-3.4.5.tar.gz (16.018 MB)
Extract zookeeper-3.4.5.tar.gz into /home/ubuntu/solf:
tar -zxvf zookeeper-3.4.5.tar.gz -C /home/ubuntu/solf
ls
hadoop-2.7.2 jdk1.8.0_91 zookeeper-3.4.5
Go to slave1.
(1) Edit the configuration
cd ~/solf/zookeeper-3.4.5/conf
cp zoo_sample.cfg zoo.cfg
nano zoo.cfg
Change:
dataDir=/home/ubuntu/solf/zookeeper-3.4.5/tmp
and add at the end of the file:
server.1=slave1:2888:3888
server.2=slave2:2888:3888
server.3=slave3:2888:3888
(2) Create the tmp directory and the myid file
cd ~/solf/zookeeper-3.4.5
mkdir tmp
cd tmp
touch myid
echo 1 > myid
Check it:
cat myid
1
(3) Send the configured zookeeper-3.4.5 to slave2 and slave3
cd ~/solf
scp -r zookeeper-3.4.5 ubuntu@slave2:~/solf
scp -r zookeeper-3.4.5 ubuntu@slave3:~/solf
Then adjust myid:
on slave2: echo 2 > myid
on slave3: echo 3 > myid
At this point the six-machine Hadoop cluster is fully configured.

Start the cluster (follow this startup order strictly):
(1) Start ZooKeeper
On slave1, slave2 and slave3 in turn:
cd ~/solf/zookeeper-3.4.5/bin
./zkServer.sh start
Check the status:
./zkServer.sh status
JMX enabled by default
Using config: /home/ubuntu/solf/zookeeper-3.4.5/bin/../conf/zoo.cfg
Mode: follower
Check the process:
jps
51858 Jps
51791 QuorumPeerMain
(2) Start the JournalNodes
On master, run:
hadoop-daemons.sh start journalnode
slave2: starting journalnode, logging to /home/ubuntu/solf/hadoop-2.7.2/logs/hadoop-ubuntu-journalnode-slave2.out
slave1: starting journalnode, logging to /home/ubuntu/solf/hadoop-2.7.2/logs/hadoop-ubuntu-journalnode-slave1.out
slave3: starting journalnode, logging to /home/ubuntu/solf/hadoop-2.7.2/logs/hadoop-ubuntu-journalnode-slave3.out
Check the processes on slave1, slave2 and slave3:
51958 JournalNode
52007 Jps
51791 QuorumPeerMain
(3) Format HDFS
If HDFS has never been formatted, run on master:
hdfs namenode -format
Then send the tmp directory under hadoop to the hadoop directory on master2:
cd ~/solf/hadoop
scp -r tmp ubuntu@master2:~/solf/hadoop
(4) Format the ZooKeeper HA state
If ZooKeeper has never been formatted for HA, run on master:
hdfs zkfc -formatZK
(5) Start HDFS
On master, run:
start-dfs.sh
master2: starting namenode, logging to /home/ubuntu/solf/hadoop-2.7.2/logs/hadoop-ubuntu-namenode-master2.out
master: starting namenode, logging to /home/ubuntu/solf/hadoop-2.7.2/logs/hadoop-ubuntu-namenode-master.out
slave2: starting datanode, logging to /home/ubuntu/solf/hadoop-2.7.2/logs/hadoop-ubuntu-datanode-slave2.out
slave1: starting datanode, logging to /home/ubuntu/solf/hadoop-2.7.2/logs/hadoop-ubuntu-datanode-slave1.out
slave3: starting datanode, logging to /home/ubuntu/solf/hadoop-2.7.2/logs/hadoop-ubuntu-datanode-slave3.out
Starting journal nodes [slave1 slave2 slave3]
slave2: journalnode running as process 2299. Stop it first.
slave1: journalnode running as process 2459. Stop it first.
slave3: journalnode running as process 51958. Stop it first.
Starting ZK Failover Controllers on NN hosts [master master2]
master: starting zkfc, logging to /home/ubuntu/solf/hadoop-2.7.2/logs/hadoop-ubuntu-zkfc-master.out
master2: starting zkfc, logging to /home/ubuntu/solf/hadoop-2.7.2/logs/hadoop-ubuntu-zkfc-master2.out
Check the local processes:
jps
14481 NameNode
14885 Jps
14780 DFSZKFailoverController
12764 FsShell
Check the Java processes on slave1/2/3:
jps
51958 JournalNode
52214 Jps
52078 DataNode
51791 QuorumPeerMain
(6) Start YARN
On rm, run:
start-yarn.sh
jps
6965 ResourceManager
7036 Jps
Check the Java processes on slave1/2/3:
jps
52290 NodeManager
51958 JournalNode
52410 Jps
52078 DataNode
51791 QuorumPeerMain
If all of the processes look like the above, the Hadoop cluster is really up and running. Open 192.168.0.7:50070 in a browser to inspect the cluster.

Test the cluster:
1. Test NameNode failover
(1) Open 192.168.0.7:50070; it shows 'master:9000' (active). Open 192.168.0.10:50070; it shows 'master2:9000' (standby). So the active NameNode is currently master, with master2 on standby.
(2) On master, kill the NameNode process:
kill -9 14481
jps
14944 Jps
14780 DFSZKFailoverController
12764 FsShell
Start the NameNode again:
hadoop-daemon.sh start namenode
Repeating the checks from (1) now shows:
'master:9000' (standby)
'master2:9000' (active)
This proves that when the active NameNode goes down, ZooKeeper switches over to the standby NameNode as expected.
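The active/standby roles can also be read from the command line instead of the web UI, using Hadoop's HA admin tool (nn1 and nn2 are the NameNode IDs defined in hdfs-site.xml above):

hdfs haadmin -getServiceState nn1    # expected: active before the kill, standby afterwards
hdfs haadmin -getServiceState nn2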
2. Verify HDFS file storage
Look at the HDFS directory tree:
hadoop fs -ls -R /
Create a solf directory:
hadoop fs -mkdir /solf
Upload a file: cd to the directory containing the file, then:
hadoop fs -put spark-2.0.0-bin-without-hadoop.tgz /solf
Look at the directory tree again:
hadoop fs -ls -R /
drwxr-xr-x - ubuntu supergroup 0 2017-08-04 07:16 /solf
-rw-r--r-- 3 ubuntu supergroup 114274242 2017-08-04 07:16 /solf/spark-2.0.0-bin-without-hadoop.tgz
The upload succeeded.
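As an optional extra check, hdfs fsck shows how the uploaded file was split into blocks and which DataNodes hold the replicas (replication factor 3, as the -ls output above indicates):

hdfs fsck /solf/spark-2.0.0-bin-without-hadoop.tgz -files -blocks -locations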
3. Run the WordCount program
Build wordcount.jar with Eclipse, or use the example that ships with Hadoop.
Upload the input files to HDFS: cd to the directory containing the files, then:
hadoop fs -put file*.txt /input
cd to the directory containing wordcount.jar, then:
hadoop jar wordcount.jar com.will.hadoop.WordCount /input /wcout
17/08/04 07:42:01 INFO client.RMProxy: Connecting to ResourceManager at rm/192.168.0.8:8032
17/08/04 07:42:02 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
17/08/04 07:42:02 INFO input.FileInputFormat: Total input paths to process : 3
17/08/04 07:42:03 INFO mapreduce.JobSubmitter: number of splits:3
17/08/04 07:42:03 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1501854022188_0002
17/08/04 07:42:10 INFO impl.YarnClientImpl: Submitted application application_1501854022188_0002
17/08/04 07:42:11 INFO mapreduce.Job: The url to track the job: http://rm:8088/proxy/application_1501854022188_0002/
17/08/04 07:42:11 INFO mapreduce.Job: Running job: job_1501854022188_0002
17/08/04 07:42:26 INFO mapreduce.Job: Job job_1501854022188_0002 running in uber mode : false
17/08/04 07:42:26 INFO mapreduce.Job: map 0% reduce 0%
17/08/04 07:42:48 INFO mapreduce.Job: map 100% reduce 0%
17/08/04 07:43:10 INFO mapreduce.Job: map 100% reduce 100%
17/08/04 07:43:11 INFO mapreduce.Job: Job job_1501854022188_0002 completed successfully
Open /wcout/part-r-00000:
I 2
apple 4
car 4
cat 4
exit 4
feel 2
first 3
good 4
good,so 2
gula
hadoop 3
happy 4
happy! 2
hello 1
is 3
my 3
pande 4
peer 4
quit 4
test 1
testxx 2
this 3
The test passed, which shows the Hadoop cluster has been built successfully!
Next, HBase and Hive will be brought into this cluster, one after another......
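As a closing aside: if you would rather not build wordcount.jar in Eclipse, the same test can be run with the examples jar bundled with Hadoop 2.7.2; a sketch, assuming the install path used throughout and writing to a fresh output directory because /wcout already exists:

hadoop jar ~/solf/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /input /wcout2
hadoop fs -cat /wcout2/part-r-00000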