Installing and Configuring HBase

xiaoxiao · 2021-02-28

1. Upload and extract the HBase package

tar -zxvf hbase-0.94.6.tar.gz

2. Set environment variables

vi /etc/profile

# Append the following lines:
export HBASE_HOME=/home/hadoop/apps/hbase
export PATH=$PATH:$HBASE_HOME/bin

# Reload the profile:
source /etc/profile
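The effect of the profile change can be checked right away. A minimal sketch, applying the same two exports directly in the current shell and verifying the result (the install path is the one used throughout this post):

```shell
# Same exports as added to /etc/profile above.
export HBASE_HOME=/home/hadoop/apps/hbase
export PATH=$PATH:$HBASE_HOME/bin

# Quick sanity check: the variable is set and bin/ is on PATH.
echo "HBASE_HOME=$HBASE_HOME"
case ":$PATH:" in
  *":$HBASE_HOME/bin:"*) echo "PATH OK" ;;
  *)                     echo "PATH missing $HBASE_HOME/bin" ;;
esac
```

Once the profile has been sourced, `hbase version` should also resolve from any directory.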

3. Edit the configuration files

3.1 Edit hbase-env.sh

# The java implementation to use. Java 1.7+ required.
export JAVA_HOME=/home/hadoop/apps/jdk1.7.0_45/

# Extra Java CLASSPATH elements. Optional.
# export HBASE_CLASSPATH=
export JAVA_CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

# The maximum amount of heap to use, in MB. Default is 1000.
# export HBASE_HEAPSIZE=1000

# Uncomment below if you intend to use off heap cache.
# export HBASE_OFFHEAPSIZE=1000
# For example, to allocate 8G of offheap, set the value to 8G:
# export HBASE_OFFHEAPSIZE=8G

# See http://wiki.apache.org/hadoop/PerformanceTuning
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"

# Tell HBase whether it should manage its own instance of ZooKeeper or not.
export HBASE_MANAGES_ZK=false

3.2 Edit hbase-site.xml

<configuration>
  <property>
    <name>hbase.master</name>
    <value>mini1:60010</value>
  </property>
  <property>
    <name>hbase.master.maxclockskew</name>
    <value>180000</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://bi/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>mini5:2181,mini6:2181,mini7:2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop/apps/hbase/tmp/zookeeper</value>
  </property>
  <!-- Web UI port -->
  <property>
    <name>hbase.master.info.port</name>
    <value>60010</value>
  </property>
</configuration>

3.3 Configure regionservers

mini5
mini6
mini7
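The regionservers file expects exactly one hostname per line. A small sketch of generating it; the real target would be $HBASE_HOME/conf/regionservers, but here it is written to a temp directory purely for illustration:

```shell
# Work in a throwaway directory so nothing real is touched.
cd "$(mktemp -d)"

# Write the regionservers file, one host per line.
cat > regionservers <<'EOF'
mini5
mini6
mini7
EOF

cat regionservers
```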

3.4 Add Hadoop's core-site.xml and hdfs-site.xml

These files tell HBase the physical addresses behind the Hadoop nameservice.

core-site.xml

<!-- Put site-specific property overrides in this file. -->
<configuration>
  <!-- Set the HDFS nameservice to bi -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://bi/</value>
  </property>
  <!-- Hadoop temporary directory -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/apps/hdpdata/</value>
  </property>
  <!-- ZooKeeper quorum addresses -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>mini5:2181,mini6:2181,mini7:2181</value>
  </property>
</configuration>

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <!-- The HDFS nameservice is bi; this must match core-site.xml -->
  <property>
    <name>dfs.nameservices</name>
    <value>bi</value>
  </property>
  <!-- Nameservice bi has two NameNodes, nn1 and nn2 -->
  <property>
    <name>dfs.ha.namenodes.bi</name>
    <value>nn1,nn2</value>
  </property>
  <!-- RPC address of nn1 -->
  <property>
    <name>dfs.namenode.rpc-address.bi.nn1</name>
    <value>mini1:9000</value>
  </property>
  <!-- HTTP address of nn1 -->
  <property>
    <name>dfs.namenode.http-address.bi.nn1</name>
    <value>mini1:50070</value>
  </property>
  <!-- RPC address of nn2 -->
  <property>
    <name>dfs.namenode.rpc-address.bi.nn2</name>
    <value>mini2:9000</value>
  </property>
  <!-- HTTP address of nn2 -->
  <property>
    <name>dfs.namenode.http-address.bi.nn2</name>
    <value>mini2:50070</value>
  </property>
  <!-- Where the NameNode edits metadata is stored on the JournalNodes -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://mini5:8485;mini6:8485;mini7:8485/bi</value>
  </property>
  <!-- Where each JournalNode stores its data on local disk -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/hadoop/apps/journaldata</value>
  </property>
  <!-- Enable automatic NameNode failover -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <!-- Failover proxy provider used by clients -->
  <property>
    <name>dfs.client.failover.proxy.provider.bi</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- Fencing methods; multiple methods are separated by newlines, one per line -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>
      sshfence
      shell(/bin/true)
    </value>
  </property>
  <!-- sshfence requires passwordless SSH -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>
  <!-- sshfence connect timeout, in milliseconds -->
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>
</configuration>
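The comment in hdfs-site.xml stresses that dfs.nameservices must match the authority in fs.defaultFS from core-site.xml. A minimal sketch of checking that automatically; it works on trimmed stand-ins written to a temp directory, not the real cluster files:

```shell
# Trimmed stand-ins for the real core-site.xml and hdfs-site.xml,
# written to a temp dir so the check is self-contained.
dir=$(mktemp -d)
cat > "$dir/core-site.xml" <<'EOF'
<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://bi/</value></property>
</configuration>
EOF
cat > "$dir/hdfs-site.xml" <<'EOF'
<configuration>
  <property><name>dfs.nameservices</name><value>bi</value></property>
</configuration>
EOF

# Extract the nameservice from each file and compare.
ns_core=$(sed -n 's|.*<value>hdfs://\([^/<]*\).*|\1|p' "$dir/core-site.xml")
ns_hdfs=$(sed -n 's|.*<name>dfs.nameservices</name><value>\([^<]*\)</value>.*|\1|p' "$dir/hdfs-site.xml")

if [ "$ns_core" = "$ns_hdfs" ]; then
  echo "nameservice OK: $ns_core"
else
  echo "MISMATCH: core-site=$ns_core hdfs-site=$ns_hdfs"
fi
```

The sed patterns assume one property per line, as in the stand-ins above; on real files a proper XML tool would be safer.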

4. Distribute the configured HBase to mini5, mini6, and mini7

# Run from /home/hadoop/apps so $PWD points at the target directory:
scp -r /home/hadoop/apps/hbase mini5:$PWD
scp -r /home/hadoop/apps/hbase mini6:$PWD
scp -r /home/hadoop/apps/hbase mini7:$PWD
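The three copies can also be driven by a loop. This sketch only prints each command (a dry run, copying to the same absolute path on every host); remove the echo to actually run them:

```shell
HBASE_DIR=/home/hadoop/apps/hbase
HOSTS="mini5 mini6 mini7"

CMDS=""
for host in $HOSTS; do
  # Dry run: collect and print each command instead of executing it.
  cmd="scp -r $HBASE_DIR $host:$HBASE_DIR"
  CMDS="$CMDS$cmd
"
  echo "$cmd"
done
```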

5. Start HBase and verify

start-hbase.sh

Open the web UI at http://mini1:60010 to confirm the cluster is up (the original post includes a screenshot here).

When reposting, please cite the original URL: https://www.6miu.com/read-2619885.html
