HDFS High Availability (HA) Environment Setup

xiaoxiao  2021-02-28

1. Download the appropriate ZooKeeper release

2. Configure ZooKeeper: zookeeper-3.4.5/conf/zoo.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/cdh5.14.2/zookeeper-3.4.5/data/zkData
# the port at which the clients will connect
clientPort=2181
server.1=master.cdh.com:2888:3888
server.2=slave1.cdh.com:2888:3888
server.3=slave2.cdh.com:2888:3888
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
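Each server in the ensemble also needs a `myid` file under `dataDir` whose contents match its `server.N` line in zoo.cfg; the config above does not create it. A minimal sketch, run once per host with the matching id (the `./zkData` default below is only for illustration; point `ZK_DATA_DIR` at the `dataDir` from your zoo.cfg):

```shell
#!/bin/sh
# Create the ZooKeeper data directory and the myid file for this host.
# MYID must match this machine's server.N entry in zoo.cfg:
#   master.cdh.com -> 1, slave1.cdh.com -> 2, slave2.cdh.com -> 3
# Default path here is a local dir for illustration, not the real dataDir.
ZK_DATA_DIR="${ZK_DATA_DIR:-./zkData}"
MYID="${MYID:-1}"

mkdir -p "$ZK_DATA_DIR"
echo "$MYID" > "$ZK_DATA_DIR/myid"
echo "wrote myid=$(cat "$ZK_DATA_DIR/myid") to $ZK_DATA_DIR/myid"
```

If the `myid` files are missing or do not match zoo.cfg, the ensemble fails to form a quorum and `zkServer.sh status` reports the node as not running.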

3. Configure hdfs-site.xml under Hadoop

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn1</name>
        <value>master.cdh.com:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn2</name>
        <value>slave1.cdh.com:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns1.nn1</name>
        <value>master.cdh.com:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns1.nn2</name>
        <value>slave1.cdh.com:50070</value>
    </property>
    <!-- URLs of the JournalNodes that host the shared edit log -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://master.cdh.com:8485;slave1.cdh.com:8485;slave2.cdh.com:8485/ns1</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/cdh5.14.2/hadoop-2.6.0/data/dfs/jn</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <!-- whether to enable automatic failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
</configuration>
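When editing a file this size it is easy to typo a property name, so a quick way to sanity-check the values is to pull them back out with a small shell helper. This is only a sketch, not part of Hadoop: the `get_prop` function below is a hypothetical helper that assumes the file is formatted one tag per line, as the hdfs-site.xml above is.

```shell
#!/bin/sh
# get_prop FILE NAME -> print the <value> that follows the matching <name>.
# Hypothetical helper for one-tag-per-line Hadoop *-site.xml files.
get_prop() {
    awk -v name="$2" '
        $0 ~ "<name>" name "</name>" { found = 1; next }
        found && /<value>/ {
            sub(/.*<value>/, ""); sub(/<\/value>.*/, "")
            print; exit
        }' "$1"
}

# A minimal sample fragment standing in for the real hdfs-site.xml:
cat > /tmp/hdfs-site-sample.xml <<'EOF'
<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
    </property>
</configuration>
EOF

get_prop /tmp/hdfs-site-sample.xml dfs.nameservices      # -> ns1
get_prop /tmp/hdfs-site-sample.xml dfs.ha.namenodes.ns1  # -> nn1,nn2
```

Point the helper at the real `etc/hadoop/hdfs-site.xml` to confirm, for example, that every `dfs.ha.namenodes.ns1` entry has matching `dfs.namenode.rpc-address.ns1.*` properties.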

4. Configure core-site.xml under Hadoop

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <!-- specify the NameNode address: the logical nameservice, not a single host -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <!-- directory where files generated by Hadoop are stored -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/cdh5.14.2/hadoop-2.6.0/data/tmp</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>master.cdh.com:2181,slave1.cdh.com:2181,slave2.cdh.com:2181</value>
    </property>
</configuration>
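One easy mistake is letting `ha.zookeeper.quorum` drift out of sync with the ZooKeeper ensemble configured earlier. A small sketch of a consistency check (the quorum string and `clientPort` are inlined below as assumptions copied from the configs in this article; substitute your own values):

```shell
#!/bin/sh
# Verify every host:port entry in ha.zookeeper.quorum uses the
# clientPort from zoo.cfg (2181 in this article's setup).
QUORUM="master.cdh.com:2181,slave1.cdh.com:2181,slave2.cdh.com:2181"
CLIENT_PORT=2181

status=ok
# Split the comma-separated quorum and inspect each entry's port.
for entry in $(printf '%s' "$QUORUM" | tr ',' ' '); do
    port="${entry##*:}"   # strip everything up to the last colon
    if [ "$port" != "$CLIENT_PORT" ]; then
        echo "mismatch: $entry does not use clientPort $CLIENT_PORT"
        status=bad
    fi
done
echo "quorum check: $status"
```

The same idea extends to checking that the quorum hosts match the `server.N` lines in zoo.cfg.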

5. Start the cluster:

    a. On each of the three machines, start:

        sbin/hadoop-daemon.sh start journalnode

        sbin/hadoop-daemon.sh start datanode

    b. On first startup only:

        bin/hdfs zkfc -formatZK (requires the ZooKeeper ensemble from step c to already be running)

        bin/hdfs namenode -format (on the first NameNode host)

        bin/hdfs namenode -bootstrapStandby (on the second NameNode host, after the first NameNode has been formatted and started)

    c. Start ZooKeeper on all three machines, then the ZKFC on the two NameNode hosts:

        bin/zkServer.sh start

        sbin/hadoop-daemon.sh start zkfc

    d. On each host that runs a NameNode, start it:

        sbin/hadoop-daemon.sh start namenode
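The steps above can be consolidated into one first-time bootstrap script. This is only a sketch: it prints the commands in dependency order instead of executing them, the `HADOOP_HOME`/`ZK_HOME` defaults are assumptions, and each command must still be run on the host indicated in the comments.

```shell
#!/bin/sh
# First-time HA bootstrap, consolidated from steps a-d above.
# Prints each command instead of running it, so the order is easy to review.
HADOOP_HOME="${HADOOP_HOME:-/opt/cdh5.14.2/hadoop-2.6.0}"
ZK_HOME="${ZK_HOME:-/opt/cdh5.14.2/zookeeper-3.4.5}"

run() { echo "+ $*"; }   # replace the echo with "$@" to actually execute

# On all three machines: ZooKeeper, JournalNode, DataNode
run "$ZK_HOME/bin/zkServer.sh" start
run "$HADOOP_HOME/sbin/hadoop-daemon.sh" start journalnode
run "$HADOOP_HOME/sbin/hadoop-daemon.sh" start datanode

# First time only, on the first NameNode host (master.cdh.com)
run "$HADOOP_HOME/bin/hdfs" zkfc -formatZK
run "$HADOOP_HOME/bin/hdfs" namenode -format
run "$HADOOP_HOME/sbin/hadoop-daemon.sh" start namenode

# First time only, on the second NameNode host (slave1.cdh.com),
# after the first NameNode is up
run "$HADOOP_HOME/bin/hdfs" namenode -bootstrapStandby
run "$HADOOP_HOME/sbin/hadoop-daemon.sh" start namenode

# On both NameNode hosts: the ZooKeeper failover controller
run "$HADOOP_HOME/sbin/hadoop-daemon.sh" start zkfc
```

Note the ordering: ZooKeeper and the JournalNodes must be up before `-formatZK` and `namenode -format`, and `-bootstrapStandby` copies its metadata from the already-running first NameNode.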


6. Result: both NameNode web UIs (master.cdh.com:50070 and slave1.cdh.com:50070) should now be reachable, one reporting active and the other standby.

Please credit the original source when reposting: https://www.6miu.com/read-2150313.html
