HBase 1.2.6 Fully Distributed Installation

xiaoxiao 2021-02-28


1. Installation Environment

Software environment: Ubuntu 14.04, Hadoop 2.7.4, HBase 1.2.6, ZooKeeper 3.4.10. Hadoop cluster layout: one host named "hadoop-master" acts as the master; three further hosts named "hadoop-slave1", "hadoop-slave2" and "hadoop-slave3" act as slaves.
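Every node must be able to resolve the other nodes' hostnames. A sketch of the /etc/hosts entries assumed on each machine (172.17.0.2 for hadoop-master matches the logs later in this article; the slave IPs are placeholders to adapt to your network):

```
172.17.0.2  hadoop-master
172.17.0.3  hadoop-slave1    # placeholder IP
172.17.0.4  hadoop-slave2    # placeholder IP
172.17.0.5  hadoop-slave3    # placeholder IP
```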

2. Fully Distributed Installation

2.1 Preparation

Download HBase from the official HBase site and ZooKeeper from the official ZooKeeper site, extract both into /usr/local/ on the hadoop-master host, and rename the directories:

```shell
ssh hadoop@hadoop-master
cd /usr/local/
sudo tar zxvf hbase-1.2.6-bin.tar.gz -C ./
sudo mv hbase-1.2.6 hbase
sudo chmod -R a+w hbase
sudo tar zxvf zookeeper-3.4.10.tar.gz -C ./
sudo mv zookeeper-3.4.10 zookeeper
sudo chmod -R a+w zookeeper
```

2.2 Configure HBase on hadoop-master

```shell
# Configure HBase in fully distributed mode
ssh hadoop@hadoop-master
cd /usr/local/hbase

# 1. Configure the Java environment
echo "export JAVA_HOME=/usr/local/jdk1.8.0_144" >> conf/hbase-env.sh

# 2. Configure the region servers
# First delete the default "localhost" entry in conf/regionservers, then run:
echo "hadoop-slave1" >> conf/regionservers
echo "hadoop-slave2" >> conf/regionservers
echo "hadoop-slave3" >> conf/regionservers

# 3. Configure a backup master: create the file conf/backup-masters
sudo vim conf/backup-masters   # add "hadoop-slave2" to the backup-masters file
```

4. Configure HBase: edit conf/hbase-site.xml and add the following:

```xml
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://hadoop-master:9000/hbase</value>
</property>
```

5. Configure ZooKeeper: edit conf/hbase-site.xml and add the following:

```xml
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>hadoop-master,hadoop-slave1,hadoop-slave2,hadoop-slave3</value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/usr/local/zookeeper</value>
</property>
```
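The three `echo` appends in step 2 can also be written as a single command. A minimal sketch, writing to a temporary path for illustration (on the cluster the target file would be /usr/local/hbase/conf/regionservers):

```shell
# Write the three slave hostnames, one per line, in the format
# conf/regionservers expects. /tmp/regionservers is used here only
# so the sketch can run anywhere.
printf '%s\n' hadoop-slave1 hadoop-slave2 hadoop-slave3 > /tmp/regionservers
cat /tmp/regionservers
```

This avoids the risk of the default "localhost" line surviving among the appended entries, since the `>` redirection replaces the file.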

2.3 Configure HBase on hadoop-slave1, 2 and 3

```shell
# On hadoop-master: package hbase and zookeeper for distribution to the slaves
sudo tar zcvf hbase.tar.gz ./hbase
sudo tar zcvf zookeeper.tar.gz ./zookeeper

# Taking hadoop-slave1 as an example
ssh hadoop-slave1
su root
cd /usr/local/
scp hadoop@hadoop-master:/usr/local/hbase.tar.gz ./
sudo tar zxvf hbase.tar.gz -C ./
scp hadoop@hadoop-master:/usr/local/zookeeper.tar.gz ./
sudo tar zxvf zookeeper.tar.gz -C ./
rm *.tar.gz
```
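Instead of repeating the commands on each slave by hand, the per-host steps can be driven from a loop. A sketch that only prints the commands (hostnames are the ones used throughout this guide; on a real cluster you would execute the printed lines, or replace `echo` with the actual `ssh` invocation):

```shell
# Generate one fetch-and-extract command line per slave host.
cmds=$(for host in hadoop-slave1 hadoop-slave2 hadoop-slave3; do
  echo "ssh $host \"scp hadoop@hadoop-master:/usr/local/hbase.tar.gz /usr/local/\""
done)
printf '%s\n' "$cmds"
```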

3. Start the HBase Service

```shell
ssh hadoop@hadoop-master
cd /usr/local/hbase
./bin/start-hbase.sh
```
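After startup, `jps` on each node should show the expected daemons. A sketch of the layout implied by the configuration above (an assumption based on this guide's setup; Hadoop daemons such as NameNode/DataNode will also appear if HDFS runs on the same nodes):

```shell
# Daemons jps should report per role in this setup; check with e.g.
#   ssh hadoop-slave1 jps
master_daemons="HMaster HQuorumPeer"
slave_daemons="HRegionServer HQuorumPeer"
echo "hadoop-master : $master_daemons"
echo "hadoop-slave2 : $slave_daemons + backup HMaster"
echo "hadoop-slave1, hadoop-slave3 : $slave_daemons"
```

The master web UI should also come up at http://hadoop-master:16010 (the default port for HBase 1.x).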

4. Verify the Installation

```shell
# Check whether the hbase directory on HDFS has content
hadoop fs -ls /hbase
# or
hadoop fs -ls hdfs://hadoop-master:9000/hbase

# Enter the HBase shell
bin/hbase shell
```

Inside the HBase shell:

```
# Create a table "test" with a column family "cf"
create 'test', 'cf'
# List tables
list 'test'
# Insert some rows
put 'test', 'row1', 'cf:a', 'value1'
put 'test', 'row2', 'cf:b', 'value2'
put 'test', 'row3', 'cf:c', 'value3'
# Scan all rows of "test"
scan 'test'
# Delete the table (it must be disabled before it can be dropped)
disable 'test'
drop 'test'
```

5. Problems Encountered and Solutions

5.1 Problem 1

Problem: java.net.ConnectException: Connection refused

```
java.net.ConnectException: Call From hadoop-master/172.17.0.2 to localhost:50090 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
	at org.apache.hadoop.ipc.Client.call(Client.java:1415)
	at org.apache.hadoop.ipc.Client.call(Client.java:1364)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:602)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
	at com.sun.proxy.$Proxy16.setSafeMode(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2264)
	at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:986)
	at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:970)
	at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:525)
	at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:971)
	at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:429)
	at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:153)
	at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:128)
	at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:693)
	at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:189)
	at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1803)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:606)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:700)
	at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1463)
	at org.apache.hadoop.ipc.Client.call(Client.java:1382)
	... 29 more
```

Solution

In the HBase root directory, edit conf/hbase-site.xml. Before the change:

```xml
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://localhost:50090/hbase</value>
</property>
```

After the change:

```xml
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://hadoop-master:50090/hbase</value>
</property>
```

5.2 Problem 2

Problem: com.google.protobuf.InvalidProtocolBufferException: Protocol message end-group tag did not match expected tag.

```
java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message end-group tag did not match expected tag.; Host Details : local host is: "hadoop-master/172.17.0.2"; destination host is: "hadoop-master":50090;
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
	at org.apache.hadoop.ipc.Client.call(Client.java:1415)
	at org.apache.hadoop.ipc.Client.call(Client.java:1364)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:602)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
	at com.sun.proxy.$Proxy16.setSafeMode(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2264)
	at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:986)
	at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:970)
	at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:525)
	at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:971)
	at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:429)
	at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:153)
	at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:128)
	at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:693)
	at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:189)
	at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1803)
	at java.lang.Thread.run(Thread.java:748)
Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol message end-group tag did not match expected tag.
	at com.google.protobuf.InvalidProtocolBufferException.invalidEndTag(InvalidProtocolBufferException.java:94)
	at com.google.protobuf.CodedInputStream.checkLastTagWas(CodedInputStream.java:124)
	at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:202)
	at com.google.protobuf.AbstractParser.parsePartialDelimitedFrom(AbstractParser.java:241)
	at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:253)
	at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:259)
	at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:49)
	at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.parseDelimitedFrom(RpcHeaderProtos.java:2364)
	at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1056)
	at org.apache.hadoop.ipc.Client$Connection.run(Client.java:950)
```

Solution

This is caused by an incorrect HDFS path in hbase.rootdir. First query the actual HDFS address with the command below (fs.default.name is the deprecated alias of fs.defaultFS in Hadoop 2.x); in my case the result was "hdfs://hadoop-master:9000":

```shell
hdfs getconf -confKey fs.default.name
```

In the HBase root directory, fix the HDFS path in conf/hbase-site.xml. Before the change:

```xml
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://hadoop-master:50090/hbase</value>
</property>
```

After the change:

```xml
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://hadoop-master:9000/hbase</value>
</property>
```
