HBase HA Installation
-1. Goal:

10.156.50.35  HMaster
10.156.50.36  HMaster (backup)
10.156.50.37  HRegionServer
0. Prerequisites: Hadoop servers

10.156.50.35  yanfabu2-35.base.app.dev.yf  zk1  hadoop1  master1  master
10.156.50.36  yanfabu2-36.base.app.dev.yf  zk2  hadoop2  master2
10.156.50.37  yanfabu2-37.base.app.dev.yf  zk3  hadoop3  slaver1
1. Prerequisites: NTP server

yum install ntp -y
chkconfig ntpd on
vi /etc/ntp.conf

Server configuration:

# allow hosts on this subnet to sync time against the NTP server
restrict 172.23.27.120 mask 255.255.255.0 nomodify notrap
# local clock source
server 172.23.27.120
# fall back to the local clock when external clocks are unavailable
fudge 172.23.27.120 stratum 10

Client configuration:

# upstream clock source: the NTP server address
server 172.23.27.120
# allow time sync with the upstream server
restrict 172.23.27.120 nomodify notrap noquery
# local clock
server 172.23.27.115
# fall back to the local clock when the upstream source is unavailable
fudge 172.23.27.115 stratum 10

Run on the server:

service ntpd start
service ntpd stop
ntpstat

Run on the clients:

ntpdate -u 172.23.27.120
service ntpd start
ntpstat

Watch sync status:

watch ntpq -p
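To confirm the clients actually converged, the peer offsets reported by `ntpq -p` can be checked programmatically. A minimal sketch, assuming the standard `ntpq -p` column layout; the `check_offset` helper and the sample output line are illustrative, not captured from this cluster:

```shell
# check_offset LIMIT: read `ntpq -p` output on stdin and print any peer
# whose absolute offset (ms, column 9) exceeds LIMIT.
check_offset() {
  awk -v limit="$1" 'NR > 2 {
    off = $9; if (off < 0) off = -off
    if (off > limit) print $1, off
  }'
}

# Sample output: a 120.5 ms offset is flagged against a 100 ms limit.
printf '%s\n' \
  '     remote           refid      st t when poll reach   delay   offset  jitter' \
  '==============================================================================' \
  '*172.23.27.120   LOCAL(0)        10 u   32   64  377    0.210  120.500   0.050' |
check_offset 100
```

On a live client this would be `ntpq -p | check_offset 100`, run from cron or a monitoring agent.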
2. Install HBase
2.0 Edit ~/.bash_profile

vim ~/.bash_profile

export HBASE_HOME=/home/zkkafka/hbase
export PATH=$HBASE_HOME/bin:$PATH

source ~/.bash_profile
2.1 Edit hbase-env.sh

# enable the JAVA_HOME setting
export JAVA_HOME=/home/zkkafka/jdk1.8.0_151/
# disable HBase's bundled ZooKeeper and use the external ZooKeeper cluster
export HBASE_MANAGES_ZK=false
2.2 Edit hbase-site.xml

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <!-- the ZooKeeper quorum; hostnames only, no port -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master1,master2,slaver1</value>
  </property>
</configuration>
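A typo in one of these property names silently falls back to the default, so it can help to read values back out of hbase-site.xml before starting the cluster. A quick sketch; the `get_prop` helper is hypothetical and assumes one <name>/<value> pair per <property> block:

```shell
# get_prop FILE KEY: print the <value> that follows <name>KEY</name>
# in a Hadoop/HBase-style XML configuration file.
get_prop() {
  awk -v key="$2" '
    { pat = "<name>" key "</name>" }
    $0 ~ pat { found = 1; next }
    found && /<value>/ {
      sub(/.*<value>/, ""); sub(/<\/value>.*/, ""); print; exit
    }' "$1"
}

# Demo against a scratch copy of the configuration shown above.
cat > /tmp/hbase-site-test.xml <<'EOF'
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master/hbase</value>
  </property>
</configuration>
EOF
get_prop /tmp/hbase-site-test.xml hbase.rootdir
```

Note that `hbase.rootdir` uses the HDFS HA nameservice name (`master`) with no port, matching the nameservice defined in hdfs-site.xml.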
2.3 Edit the regionservers file
slaver1
2.4 Edit backup-masters
master2
2.5 Copy Hadoop's hdfs-site.xml into HBase's conf directory
cp /home/zkkafka/hadoop/etc/hadoop/hdfs-site.xml ./
2.6 Copy the configuration files to the other nodes
scp /home/zkkafka/hbase/conf/* zkkafka@10.156.50.36:/home/zkkafka/hbase/conf/
scp /home/zkkafka/hbase/conf/* zkkafka@10.156.50.37:/home/zkkafka/hbase/conf/
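With more nodes, repeating the scp by hand gets error-prone; a loop with a dry-run mode avoids typos in the host list. A sketch under this cluster's paths; `sync_conf` is a made-up helper, not part of HBase:

```shell
# sync_conf MODE HOST...: print the scp command for each peer,
# or execute it when MODE is "run".
sync_conf() {
  local mode="$1"; shift
  local host cmd
  for host in "$@"; do
    cmd="scp /home/zkkafka/hbase/conf/* zkkafka@$host:/home/zkkafka/hbase/conf/"
    if [ "$mode" = run ]; then
      eval "$cmd"          # expands the conf/* glob on execution
    else
      echo "$cmd"          # dry run: show what would be copied
    fi
  done
}

sync_conf dry 10.156.50.36 10.156.50.37
```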
2.7 Start HBase
sh /home/zkkafka/hbase/bin/start-hbase.sh

[zkkafka@yanfabu2-35 bin]$ ./start-hbase.sh
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/zkkafka/hbase/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/zkkafka/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
running master, logging to /home/zkkafka/hbase/bin/../logs/hbase-zkkafka-master-yanfabu2-35.base.app.dev.yf.out
slaver1: running regionserver, logging to /home/zkkafka/hbase/bin/../logs/hbase-zkkafka-regionserver-yanfabu2-37.base.app.dev.yf.out
master2: running master, logging to /home/zkkafka/hbase/bin/../logs/hbase-zkkafka-master-yanfabu2-36.base.app.dev.yf.out
2.8 Check the HBase processes
[zkkafka@yanfabu2-35 bin]$ jps
59330 QuorumPeerMain
79763 Jps
56377 Kafka
86680 ResourceManager
86570 DFSZKFailoverController
79514 HMaster √
86044 JournalNode
87356 NameNode

[zkkafka@yanfabu2-36 ~]$ jps
37365 QuorumPeerMain
99335 Jps
56489 DFSZKFailoverController
99224 HMaster √
34571 Kafka
56606 NameNode
56319 JournalNode

[zkkafka@yanfabu2-37 ~]$ jps
61619 JournalNode
61829 NodeManager
42955 QuorumPeerMain
73002 HRegionServer √
40189 Kafka
61693 DataNode
73182 Jps
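Eyeballing three jps listings is easy to get wrong, so the check can be scripted: feed each node's jps output to a function that verifies the daemons expected there. A sketch; `expect_procs` is a hypothetical helper (bash, uses a here-string) and the sample input below is abbreviated:

```shell
# expect_procs NAME...: read jps output on stdin and report any named
# daemon that is absent; returns non-zero if anything is missing.
expect_procs() {
  local out missing=0 p
  out=$(cat)
  for p in "$@"; do
    if ! grep -qw "$p" <<<"$out"; then
      echo "MISSING $p"
      missing=1
    fi
  done
  return $missing
}

# Demo with abbreviated jps output from the master node.
printf '79514 HMaster\n59330 QuorumPeerMain\n87356 NameNode\n' |
  expect_procs HMaster QuorumPeerMain NameNode && echo "all daemons present"
```

On the cluster this would be driven over ssh, e.g. `ssh zkkafka@10.156.50.37 jps | expect_procs HRegionServer`.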
2.9 Check the web UI
http://10.156.50.35:16010/master-status
http://10.156.50.36:16010/master-status
3. Shell commands
# the old jline-0.9.94 under Hadoop's YARN lib conflicts with the HBase shell,
# so back it up and upload a newer jline (rz receives the file via lrzsz)
cd /home/zkkafka/hadoop/share/hadoop/yarn/lib/
mv jline-0.9.94.jar jline-0.9.94.jar.bak
rz jline-2.12.jar
[zkkafka@yanfabu2-35 ~]$ hbase version
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/zkkafka/hbase/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/zkkafka/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase 2.0.5
Source code repository git://dd7c519a402b/opt/hbase-rm/output/hbase revision=76458dd074df17520ad451ded198cd832138e929
Compiled by hbase-rm on Mon Mar 18 00:41:49 UTC 2019
From source with checksum fd9cba949d65fd3bca4df155254ac28c

[zkkafka@yanfabu2-35 lib]$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/zkkafka/hbase/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/zkkafka/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
For Reference, please visit: http://hbase.apache.org/2.0/book.html#shell
Version 2.0.5, r76458dd074df17520ad451ded198cd832138e929, Mon Mar 18 00:41:49 UTC 2019
Took 0.0048 seconds
4. Database operations
Create the table with four column families (note a table can only be created once, so pick the final schema up front):

create 'data_analysis', {NAME => 'inaccount', VERSIONS => 1}, {NAME => 'inamount', VERSIONS => 1}, {NAME => 'outaccount', VERSIONS => 1}, {NAME => 'outamount', VERSIONS => 1}

Insert data (a bare family name like 'inaccount' writes to the empty qualifier 'inaccount:'):

put 'data_analysis', '2019-05-19 00:00:00', 'inaccount', '10000'
put 'data_analysis', '2019-05-19 00:00:00', 'inamount', '100'
put 'data_analysis', '2019-05-19 00:00:00', 'outaccount', '10100'
put 'data_analysis', '2019-05-19 00:00:00', 'outamount', '101'
put 'data_analysis', '2019-05-19 00:00:00', 'inaccount:xianxishoudanaccount', '5000'
put 'data_analysis', '2019-05-19 00:00:00', 'inaccount:xianshangshoudanaccount', '5000'
put 'data_analysis', '2019-05-19 00:00:00', 'inamount:xianxishoudanamount', '50'
put 'data_analysis', '2019-05-19 00:00:00', 'inamount:xianshangshoudanamount', '50'

Read it back:

get 'data_analysis', '2019-05-19 00:00:00', 'inaccount'
get 'data_analysis', '2019-05-19 00:00:00', 'inaccount:xianxishoudanaccount'
get 'data_analysis', '2019-05-19 00:00:00', 'inaccount:xianshangshoudanaccount'

scan 'data_analysis'
ROW                    COLUMN+CELL
 2019-05-19 00:00:00   column=inaccount:, timestamp=1558080234354, value=10000
 2019-05-19 00:00:00   column=inaccount:xianshangshoudanaccount, timestamp=1558080601831, value=5000
 2019-05-19 00:00:00   column=inaccount:xianxishoudanaccount, timestamp=1558080601812, value=5000
 2019-05-19 00:00:00   column=inamount:, timestamp=1558080234393, value=100
 2019-05-19 00:00:00   column=inamount:xianshangshoudanamount, timestamp=1558080601856, value=50
 2019-05-19 00:00:00   column=inamount:xianxishoudanamount, timestamp=1558080601844, value=50
 2019-05-19 00:00:00   column=outaccount:, timestamp=1558080234406, value=10100
 2019-05-19 00:00:00   column=outamount:, timestamp=1558080234417, value=101

Flush the memstore to HDFS:

flush 'data_analysis'
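Typing these commands interactively does not scale past a demo; the HBase shell can also execute a script file non-interactively (`hbase shell -n <file>` in HBase 2.x). A sketch that only generates and inspects such a script; the row key and values are illustrative:

```shell
# Write a batch of HBase shell commands to a script file.
cat > /tmp/data_analysis.hbase <<'EOF'
put 'data_analysis', '2019-05-20 00:00:00', 'inaccount:xianxishoudanaccount', '6000'
get 'data_analysis', '2019-05-20 00:00:00', 'inaccount:xianxishoudanaccount'
EOF

# Inspect the generated script (2 commands).
wc -l < /tmp/data_analysis.hbase

# On the cluster, it would be executed with:
#   hbase shell -n /tmp/data_analysis.hbase
```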
[zkkafka@yanfabu2-35 bin]$ hdfs dfs -lsr /hbase/data/default/data_analysis
lsr: DEPRECATED: Please use 'ls -R' instead.
drwxr-xr-x   - zkkafka supergroup          0 2019-05-17 16:03 /hbase/data/default/data_analysis/.tabledesc
-rw-r--r--   2 zkkafka supergroup       1808 2019-05-17 16:03 /hbase/data/default/data_analysis/.tabledesc/.tableinfo.0000000001
drwxr-xr-x   - zkkafka supergroup          0 2019-05-17 16:03 /hbase/data/default/data_analysis/.tmp
drwxr-xr-x   - zkkafka supergroup          0 2019-05-17 16:11 /hbase/data/default/data_analysis/ed3abfb268f14d203f95dd0a45f80b8a
-rw-r--r--   2 zkkafka supergroup         48 2019-05-17 16:03 /hbase/data/default/data_analysis/ed3abfb268f14d203f95dd0a45f80b8a/.regioninfo
drwxr-xr-x   - zkkafka supergroup          0 2019-05-17 16:11 /hbase/data/default/data_analysis/ed3abfb268f14d203f95dd0a45f80b8a/.tmp
drwxr-xr-x   - zkkafka supergroup          0 2019-05-17 16:11 /hbase/data/default/data_analysis/ed3abfb268f14d203f95dd0a45f80b8a/.tmp/inaccount
drwxr-xr-x   - zkkafka supergroup          0 2019-05-17 16:11 /hbase/data/default/data_analysis/ed3abfb268f14d203f95dd0a45f80b8a/.tmp/inamount
drwxr-xr-x   - zkkafka supergroup          0 2019-05-17 16:11 /hbase/data/default/data_analysis/ed3abfb268f14d203f95dd0a45f80b8a/.tmp/outaccount
drwxr-xr-x   - zkkafka supergroup          0 2019-05-17 16:11 /hbase/data/default/data_analysis/ed3abfb268f14d203f95dd0a45f80b8a/.tmp/outamount
drwxr-xr-x   - zkkafka supergroup          0 2019-05-17 16:11 /hbase/data/default/data_analysis/ed3abfb268f14d203f95dd0a45f80b8a/inaccount
-rw-r--r--   2 zkkafka supergroup       5097 2019-05-17 16:11 /hbase/data/default/data_analysis/ed3abfb268f14d203f95dd0a45f80b8a/inaccount/5243c1f49c7b4b0fa91d8df3a936e7a2
drwxr-xr-x   - zkkafka supergroup          0 2019-05-17 16:11 /hbase/data/default/data_analysis/ed3abfb268f14d203f95dd0a45f80b8a/inamount
-rw-r--r--   2 zkkafka supergroup       5083 2019-05-17 16:11 /hbase/data/default/data_analysis/ed3abfb268f14d203f95dd0a45f80b8a/inamount/9e7bc1d2a1e64987b90c3254e53c57cb
drwxr-xr-x   - zkkafka supergroup          0 2019-05-17 16:11 /hbase/data/default/data_analysis/ed3abfb268f14d203f95dd0a45f80b8a/outaccount
-rw-r--r--   2 zkkafka supergroup       4931 2019-05-17 16:11 /hbase/data/default/data_analysis/ed3abfb268f14d203f95dd0a45f80b8a/outaccount/c3217f1ea5a24f3daf1d984f55c78a6b
drwxr-xr-x   - zkkafka supergroup          0 2019-05-17 16:11 /hbase/data/default/data_analysis/ed3abfb268f14d203f95dd0a45f80b8a/outamount
-rw-r--r--   2 zkkafka supergroup       4926 2019-05-17 16:11 /hbase/data/default/data_analysis/ed3abfb268f14d203f95dd0a45f80b8a/outamount/4061fca2d54e471a86da5290d9a67020
[zkkafka@yanfabu2-35 bin]$
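The listing above shows one HFile per column family after the flush. The per-family on-disk size can be totalled from the same `hdfs dfs -ls -R` output with a little awk (field 5 is the size, and the family is the second-to-last path component). A sketch fed with abbreviated sample lines mimicking the listing:

```shell
# Sum file bytes per column family from an `hdfs dfs -ls -R` listing.
printf '%s\n' \
  '-rw-r--r-- 2 zkkafka supergroup 5097 2019-05-17 16:11 /hbase/data/default/data_analysis/ed3a/inaccount/5243c1f4' \
  '-rw-r--r-- 2 zkkafka supergroup 5083 2019-05-17 16:11 /hbase/data/default/data_analysis/ed3a/inamount/9e7bc1d2' |
awk '$1 ~ /^-/ {                    # files only, skip directories
       n = split($NF, a, "/")       # family = second-to-last path segment
       sz[a[n-1]] += $5
     }
     END { for (f in sz) print f, sz[f] }' | sort
```

On the cluster the input would come from `hdfs dfs -ls -R /hbase/data/default/data_analysis` instead of the sample `printf`.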