Linux Hadoop-HA installation on three machines

 

0. Conventions

 

IP                   hostname                 role
10.156.50.35         master1                  master node (active NameNode)
10.156.50.36         master2                  standby master node
10.156.50.37         slaver1                  slave node

 

 

1. Create the zkkafka user

 

useradd zkkafka
passwd zkkafka

(enter the password when prompted; "zkkafka" is used as the password here)

 

 

2. Modify sudoers

 

chmod -R 777 /etc/sudoers

## Allow root to run any commands anywhere 
root	ALL=(ALL) 	ALL
zkkafka ALL=(ALL)       ALL
## Allows members of the 'sys' group to run networking, software, 
## service management apps and more.
# %sys ALL = NETWORKING, SOFTWARE, SERVICES, STORAGE, DELEGATING, PROCESSES, LOCATE, DRIVERS

## Allows people in group wheel to run all commands
%wheel	ALL=(ALL)	ALL
%wheel  ALL=(ALL)       NOPASSWD: ALL
## Same thing without a password


chmod -R 440 /etc/sudoers

sudo scp /etc/sudoers root@10.156.50.36:/etc/
sudo scp /etc/sudoers root@10.156.50.37:/etc/
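
Optional check (not part of the original steps): after copying, visudo can validate the sudoers syntax on each machine; using visudo for the edit in the first place also avoids the chmod back-and-forth.

sudo visudo -c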

 

 

3. Modify /etc/hosts

 

[zkkafka@yanfabu2-35 ~]$ sudo cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.156.50.35 yanfabu2-35.base.app.dev.yf zk1  hadoop1 master1  master
10.156.50.36 yanfabu2-36.base.app.dev.yf zk2  hadoop2 master2
10.156.50.37 yanfabu2-37.base.app.dev.yf zk3  hadoop3 slaver1


sudo scp /etc/hosts root@10.156.50.36:/etc/
sudo scp /etc/hosts root@10.156.50.37:/etc/
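
Optional check (commands assumed, not in the original): confirm that every alias resolves correctly on each node before continuing.

getent hosts master1 master2 slaver1
getent hosts hadoop1 hadoop2 hadoop3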

 

 

4. Set vm.swappiness and disable SELinux on the cluster nodes

 

vim /etc/sysctl.conf
    vm.swappiness = 0

setenforce 0
vi /etc/selinux/config
    SELINUX=disabled

sudo scp /etc/sysctl.conf root@10.156.50.36:/etc/
sudo scp /etc/sysctl.conf root@10.156.50.37:/etc/
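
Editing /etc/sysctl.conf alone does not change the running value; a reload (or reboot) is needed. A quick way to apply and verify on each node (these exact commands are an assumption, not from the original):

sudo sysctl -p                 # reload /etc/sysctl.conf
cat /proc/sys/vm/swappiness    # should print 0
getenforce                     # Permissive now, Disabled after a reboot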

 

 

5. Disable the firewall

 

service iptables status  
service iptables stop 
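
The logs later in this guide show the nodes run CentOS 7 (os.version=3.10.0-862.el7), where firewalld usually replaces the iptables service. If the commands above report that the service does not exist, a firewalld equivalent would be:

sudo systemctl stop firewalld
sudo systemctl disable firewalld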

 

 

6. Set up passwordless SSH login (the first ssh to each host still asks for a password)

 

on master and slaves:
    ssh-keygen -t rsa        (press Enter at every prompt)
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

on each slave (master2 and slaver1):
    ssh-copy-id -i hadoop1

on master:
    chmod 600 ~/.ssh/authorized_keys
    scp ~/.ssh/authorized_keys zkkafka@10.156.50.36:~/.ssh/
    scp ~/.ssh/authorized_keys zkkafka@10.156.50.37:~/.ssh/
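
A quick way to confirm that passwordless login works from master1 (optional, not in the original steps):

for h in hadoop1 hadoop2 hadoop3; do ssh zkkafka@$h hostname; done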

 

 

7. Install Hadoop

 

7.1 Extract: tar xf hadoop-2.6.5.tar.gz

7.2 Rename: mv hadoop-2.6.5 hadoop

7.3 Create the working directories (inside /home/zkkafka/hadoop): mkdir -p tmp hdfs hdfs/datanode hdfs/namenode hdfs/logs hdfs/journal

 

/home/zkkafka/hadoop/tmp
/home/zkkafka/hadoop/hdfs
/home/zkkafka/hadoop/hdfs/datanode
/home/zkkafka/hadoop/hdfs/namenode
/home/zkkafka/hadoop/hdfs/logs
/home/zkkafka/hadoop/hdfs/journal

 

 

7.4 Update the environment variables

 

vi ~/.bash_profile

export JAVA_HOME=/home/zkkafka/jdk1.8.0_151   # if not already set by the JDK installation
export HADOOP_HOME=/home/zkkafka/hadoop
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH

source ~/.bash_profile
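
To confirm that the variables were picked up (optional check):

echo $HADOOP_HOME
hadoop version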

 

 

 

7.5 Files that need to be edited

 

/home/zkkafka/hadoop/etc/hadoop/hadoop-env.sh
/home/zkkafka/hadoop/etc/hadoop/yarn-env.sh
/home/zkkafka/hadoop/etc/hadoop/core-site.xml
/home/zkkafka/hadoop/etc/hadoop/hdfs-site.xml
/home/zkkafka/hadoop/etc/hadoop/mapred-site.xml
/home/zkkafka/hadoop/etc/hadoop/yarn-site.xml

 

 

7.5.1 Edit hadoop-env.sh

 

export JAVA_HOME=/home/zkkafka/jdk1.8.0_151
export HADOOP_LOG_DIR=/home/zkkafka/hadoop/hdfs/logs

 

 

7.5.2 Edit yarn-env.sh

 

JAVA_HOME=/home/zkkafka/jdk1.8.0_151
JAVA=/home/zkkafka/jdk1.8.0_151/bin/java

export HADOOP_LOG_DIR=/home/zkkafka/hadoop/hdfs/logs

 

 

7.5.3 Edit slaves

 

hadoop3

 

 

 

7.5.4 Edit core-site.xml

 

<configuration>
  <!-- Default filesystem URI; the nameservice name (here: master) must match dfs.nameservices in hdfs-site.xml -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master/</value>
  </property>
	
  <!-- Base directory for Hadoop's temporary files -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/zkkafka/hadoop/tmp</value>
  </property>

  <!-- How long (in minutes) deleted HDFS files are kept in the trash before permanent removal; the default 0 disables the trash -->
  <property>
    <name>fs.trash.interval</name>
    <value>1440</value>
  </property>
  
  <!-- ZooKeeper quorum, required for HA (one entry per ZooKeeper node) -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
  </property>
  </property>
</configuration>
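
Once the file is in place, the effective values can be read back with hdfs getconf (optional check, run from the hadoop directory):

bin/hdfs getconf -confKey fs.defaultFS
bin/hdfs getconf -confKey ha.zookeeper.quorum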

 

 

7.5.5 Edit hdfs-site.xml

 

<configuration>
<property>    
    <name>dfs.replication</name>    
    <value>2</value>    
  </property>    
  <property>    
    <name>dfs.namenode.name.dir</name>    
    <value>/home/zkkafka/hadoop/hdfs/namenode</value>    
  </property>    
  <property>    
    <name>dfs.datanode.data.dir</name>    
    <value>/home/zkkafka/hadoop/hdfs/datanode</value>    
  </property>    
  <property>  
    <name>dfs.webhdfs.enabled</name>  
    <value>true</value>  
    <!-- Enable WebHDFS (REST API) on the NameNodes and DataNodes; optional -->
  </property>  
  
  <!-- The following settings are required for HA -->
  <property>  
    <name>dfs.nameservices</name>  
    <value>master</value>  
    <!-- Logical name of the HDFS nameservice; it must match fs.defaultFS in core-site.xml and is referenced by the properties below -->
  </property>  
  
  <property>  
    <name>dfs.ha.namenodes.master</name>  
    <value>nn1,nn2</value>  
    <!-- The nameservice "master" has two NameNodes, nn1 and nn2; these are logical ids, any unique names will do -->
  </property>  
  
  <property>  
    <name>dfs.namenode.rpc-address.master.nn1</name>  
    <value>master1:9000</value>  
    <!-- RPC address of nn1 -->
  </property>  
  
  <property>  
    <name>dfs.namenode.rpc-address.master.nn2</name>  
    <value>master2:9000</value>  
    <!-- RPC address of nn2 -->
  </property>  
  
  <property>  
    <name>dfs.namenode.http-address.master.nn1</name>  
    <value>master1:50070</value>  
    <!-- HTTP address of nn1 -->
  </property>  
  <property>  
    <name>dfs.namenode.http-address.master.nn2</name>  
    <value>master2:50070</value>  
    <!-- HTTP address of nn2 -->
  </property>  
  
  <property>  
    <name>dfs.namenode.servicerpc-address.master.nn1</name>  
    <value>master1:53310</value>  
  </property>  
  
  <property>  
    <name>dfs.namenode.servicerpc-address.master.nn2</name>  
    <value>master2:53310</value>  
  </property>  
  
  <property>  
    <name>dfs.namenode.shared.edits.dir</name>  
    <value>qjournal://master1:8485;master2:8485;slaver1:8485/master</value>  
    <!-- Where the NameNode edits are stored on the JournalNodes -->
  </property>   
  
  <property>  
    <name>dfs.journalnode.edits.dir</name>  
    <value>/home/zkkafka/hadoop/hdfs/journal</value>  
    <!-- Local directory where each JournalNode stores its data -->
  </property>  
  
  <property>  
    <name>dfs.ha.automatic-failover.enabled</name>    
    <value>true</value>  
    <!-- Enable automatic failover between the NameNodes -->
  </property>  
  
  <property>  
    <name>dfs.client.failover.proxy.provider.master</name>  
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>  
    <!-- Class that implements the client-side failover logic -->
  </property>  
  
  <property>  
    <name>dfs.ha.fencing.methods</name>  
    <value>  
      shell(/bin/true)  
    </value>  
    <!-- Fencing methods; multiple methods are separated by newlines, one per line -->
  </property>  
  
  <property>  
    <name>dfs.ha.fencing.ssh.private-key-files</name>  
    <value>/home/zkkafka/.ssh/id_rsa</value>  
    <!-- Private key used by the sshfence method; passwordless ssh is required (the cluster runs as zkkafka, so its key is used here) -->  
  </property>  
  
  <property>  
    <name>dfs.ha.fencing.ssh.connect-timeout</name>  
    <value>3000</value>  
    <!-- Timeout (ms) for the sshfence method -->
  </property>

</configuration>
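
After editing, the NameNode hosts derived from this configuration can be listed as a sanity check (optional):

bin/hdfs getconf -namenodes    # should print master1 master2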

 

 

7.5.6 Edit mapred-site.xml

 

<configuration>
<property>  
    <name>mapreduce.framework.name</name>  
    <value>yarn</value>  
  </property>  
</configuration>

 

 

7.5.7 Edit yarn-site.xml

 

<configuration>
<!-- Site specific YARN configuration properties -->  
  
  <property>  
    <name>yarn.resourcemanager.ha.enabled</name>  
    <value>true</value>  
    <!-- Enable ResourceManager HA -->
  </property>  
    
  <property>  
    <!-- Enable automatic failover (default: false) -->
    <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>  
    <value>true</value>  
  </property>  
  
  <property>  
    <!-- Use the embedded elector for failover, together with ZKRMStateStore -->
    <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>  
    <value>true</value>  
  </property>  
   
  <property>  
    <name>yarn.resourcemanager.cluster-id</name>  
    <value>yrc</value>  
    <!-- Cluster id of the ResourceManagers -->
  </property>  
  
  <property>  
    <name>yarn.resourcemanager.ha.rm-ids</name>  
    <value>rm1,rm2</value>  
    <!-- Logical ids of the ResourceManagers -->
  </property>  
   
  <property>  
    <name>yarn.resourcemanager.hostname.rm1</name>  
    <value>master1</value>  
    <!-- Host of rm1 -->
  </property>  
    
  <property>  
    <name>yarn.resourcemanager.hostname.rm2</name>  
    <value>master2</value>  
    <!-- Host of rm2 -->
  </property>  
  
  <property>  
    <name>yarn.resourcemanager.ha.id</name>  
    <value>rm1</value>       
    <!-- On the primary master this must be rm1; on the standby master it must be rm2, otherwise RM HA will break -->
    <description>If we want to launch more than one RM in single node, we need this configuration</description>  
  </property>   
  
  <property>    
    <name>yarn.resourcemanager.recovery.enabled</name>    
    <value>true</value>    
  </property>    
  
  <property>    
    <name>yarn.resourcemanager.store.class</name>    
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>    
  </property>      
  
  <property>  
    <name>yarn.resourcemanager.zk-address</name>  
    <value>master1:2181,master2:2181,slaver1:2181</value>  
    <!-- ZooKeeper quorum address -->
  </property>  
  
  <property>  
    <name>yarn.nodemanager.aux-services</name>  
    <value>mapreduce_shuffle</value>  
  </property>

</configuration>

 

 

7.5.8 Copy the configuration to the other machines

 

scp etc/hadoop/* zkkafka@10.156.50.36:/home/zkkafka/hadoop/etc/hadoop/
scp etc/hadoop/* zkkafka@10.156.50.37:/home/zkkafka/hadoop/etc/hadoop/
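
Remember that yarn.resourcemanager.ha.id (see 7.5.7) must stay rm1 on master1 but be changed to rm2 on master2 after the copy. One possible way to patch it remotely (a sketch; editing the file by hand works just as well):

ssh master2 "sed -i 's#<value>rm1</value>#<value>rm2</value>#' /home/zkkafka/hadoop/etc/hadoop/yarn-site.xml"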

 

 

 

7.5.9 Commands

 

cd hadoop 
rm -rf hdfs
mkdir -p tmp hdfs hdfs/datanode hdfs/namenode hdfs/logs hdfs/journal

 

 

 

7.5.9.1 Start ZooKeeper on every machine: bin/zkServer.sh start
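
Each node's role can be checked afterwards (optional); one node should report "leader" and the other two "follower":

bin/zkServer.sh status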

 

7.5.9.2 Format the HA state in ZooKeeper (run on either master node): bin/hdfs zkfc -formatZK

 

19/05/07 16:14:22 INFO tools.DFSZKFailoverController: Failover controller configured for NameNode NameNode at hadoop1/10.156.50.35:8020
19/05/07 16:14:22 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
19/05/07 16:14:22 INFO zookeeper.ZooKeeper: Client environment:host.name=yanfabu2-35.base.app.dev.yf
19/05/07 16:14:22 INFO zookeeper.ZooKeeper: Client environment:java.version=1.8.0_151
19/05/07 16:14:22 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
19/05/07 16:14:22 INFO zookeeper.ZooKeeper: Client environment:java.home=/home/zkkafka/jdk1.8.0_151/jre
19/05/07 16:14:22 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/home/zkkafka/hadoop/etc/hadoop:/home/zkkafka/hadoop/share/hadoop/common/lib/activation-1.1.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/xz-1.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/htrace-core-3.0.4.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/asm-3.2.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/hadoop-annotations-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/curator-framework-2.6.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/curator-recipes-2.6.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/junit-4.11.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/hadoop-auth-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/home/zkkafka/hadoop/share/hadoop/common/li
b/commons-codec-1.4.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/curator-client-2.6.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/zkkafka/hadoop/share/hadoop/common/hadoop-common-2.6.5-tests.jar:/home/zkkafka/hadoop/share/hadoop/common/hadoop-nfs-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/common/hadoop-common-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/hdfs:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/htrace-core-3.0.4.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.6.5-tests.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar:/home/zkkafka/hadoop/share/h
adoop/yarn/lib/javax.inject-1.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/junit
-4.11.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.5-tests.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.6.5.jar:/home/zkkafka/hadoop/contrib/capacity-scheduler/*.jar
19/05/07 16:14:22 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/zkkafka/hadoop/lib/native
19/05/07 16:14:22 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
19/05/07 16:14:22 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
19/05/07 16:14:22 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
19/05/07 16:14:22 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
19/05/07 16:14:22 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-862.el7.x86_64
19/05/07 16:14:22 INFO zookeeper.ZooKeeper: Client environment:user.name=zkkafka
19/05/07 16:14:22 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/zkkafka
19/05/07 16:14:22 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/zkkafka/hadoop
19/05/07 16:14:22 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop1:2181,hadoop1:2181,hadoop1:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@9597028
19/05/07 16:14:22 INFO zookeeper.ClientCnxn: Opening socket connection to server yanfabu2-35.base.app.dev.yf/10.156.50.35:2181. Will not attempt to authenticate using SASL (unknown error)
19/05/07 16:14:22 INFO zookeeper.ClientCnxn: Socket connection established to yanfabu2-35.base.app.dev.yf/10.156.50.35:2181, initiating session
19/05/07 16:14:22 INFO zookeeper.ClientCnxn: Session establishment complete on server yanfabu2-35.base.app.dev.yf/10.156.50.35:2181, sessionid = 0x16a53c49c900013, negotiated timeout = 5000
===============================================
The configured parent znode /hadoop-ha/mycluster already exists.
Are you sure you want to clear all failover information from
ZooKeeper?
WARNING: Before proceeding, ensure that all HDFS services and
failover controllers are stopped!
===============================================
19/05/07 16:14:22 INFO ha.ActiveStandbyElector: Session connected.
Proceed formatting /hadoop-ha/mycluster? (Y or N) Y
19/05/07 16:14:25 INFO ha.ActiveStandbyElector: Recursively deleting /hadoop-ha/mycluster from ZK...
19/05/07 16:14:25 INFO ha.ActiveStandbyElector: Successfully deleted /hadoop-ha/mycluster from ZK.
19/05/07 16:14:25 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.
19/05/07 16:14:25 INFO zookeeper.ZooKeeper: Session: 0x16a53c49c900013 closed
19/05/07 16:14:25 INFO zookeeper.ClientCnxn: EventThread shut down

 

 

7.5.9.3 Start the JournalNode on every machine: sbin/hadoop-daemon.sh start journalnode

(If the JournalNodes are not started here, the HDFS format in the next step fails. They only need to be started manually for the format; the later service startup brings them up automatically.)

 

[zkkafka@yanfabu2-35 hadoop]$ sbin/hadoop-daemon.sh start journalnode 
starting journalnode, logging to /home/zkkafka/hadoop/hdfs/logs/hadoop-zkkafka-journalnode-yanfabu2-35.base.app.dev.yf.out

 

 

7.5.9.4 Format the HDFS cluster (run on master1): bin/hadoop namenode -format

 

[zkkafka@yanfabu2-35 hadoop]$ bin/hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

19/05/07 16:16:30 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = yanfabu2-35.base.app.dev.yf/10.156.50.35
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.5
STARTUP_MSG:   classpath = /home/zkkafka/hadoop/etc/hadoop:/home/zkkafka/hadoop/share/hadoop/common/lib/activation-1.1.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/xz-1.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/htrace-core-3.0.4.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/asm-3.2.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/hadoop-annotations-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/curator-framework-2.6.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/curator-recipes-2.6.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/junit-4.11.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/hadoop-auth-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/home/zkkafka/hadoop/share/h
adoop/common/lib/jetty-6.1.26.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/curator-client-2.6.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/zkkafka/hadoop/share/hadoop/common/hadoop-common-2.6.5-tests.jar:/home/zkkafka/hadoop/share/hadoop/common/hadoop-nfs-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/common/hadoop-common-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/hdfs:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/htrace-core-3.0.4.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.6.5-tests.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/home/zkkafka/hado
op/share/hadoop/yarn/lib/xz-1.0.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/home/zkkafka/hadoop/share/hadoop/mapreduc
e/lib/paranamer-2.3.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.5-tests.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.6.5.jar:/home/zkkafka/hadoop/contrib/capacity-scheduler/*.jar:/home/zkkafka/hadoop/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://github.com/apache/hadoop.git -r e8c9fe0b4c252caf2ebf1464220599650f119997; compiled by 'sjlee' on 2016-10-02T23:43Z
STARTUP_MSG:   java = 1.8.0_151
************************************************************/
19/05/07 16:16:30 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
19/05/07 16:16:31 INFO namenode.NameNode: createNameNode [-format]
19/05/07 16:16:32 WARN common.Util: Path /home/zkkafka/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
19/05/07 16:16:32 WARN common.Util: Path /home/zkkafka/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-7b379703-6b41-405a-95ed-0ac5a4aa3748
19/05/07 16:16:32 INFO namenode.FSNamesystem: No KeyProvider found.
19/05/07 16:16:32 INFO namenode.FSNamesystem: fsLock is fair:true
19/05/07 16:16:32 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
19/05/07 16:16:32 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
19/05/07 16:16:32 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
19/05/07 16:16:32 INFO blockmanagement.BlockManager: The block deletion will start around 2019 五月 07 16:16:32
19/05/07 16:16:32 INFO util.GSet: Computing capacity for map BlocksMap
19/05/07 16:16:32 INFO util.GSet: VM type       = 64-bit
19/05/07 16:16:32 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
19/05/07 16:16:32 INFO util.GSet: capacity      = 2^21 = 2097152 entries
19/05/07 16:16:32 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
19/05/07 16:16:32 INFO blockmanagement.BlockManager: defaultReplication         = 2
19/05/07 16:16:32 INFO blockmanagement.BlockManager: maxReplication             = 512
19/05/07 16:16:32 INFO blockmanagement.BlockManager: minReplication             = 1
19/05/07 16:16:32 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
19/05/07 16:16:32 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
19/05/07 16:16:32 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
19/05/07 16:16:32 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
19/05/07 16:16:32 INFO namenode.FSNamesystem: fsOwner             = zkkafka (auth:SIMPLE)
19/05/07 16:16:32 INFO namenode.FSNamesystem: supergroup          = supergroup
19/05/07 16:16:32 INFO namenode.FSNamesystem: isPermissionEnabled = false
19/05/07 16:16:32 INFO namenode.FSNamesystem: Determined nameservice ID: mycluster
19/05/07 16:16:32 INFO namenode.FSNamesystem: HA Enabled: true
19/05/07 16:16:32 INFO namenode.FSNamesystem: Append Enabled: true
19/05/07 16:16:32 INFO util.GSet: Computing capacity for map INodeMap
19/05/07 16:16:32 INFO util.GSet: VM type       = 64-bit
19/05/07 16:16:32 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
19/05/07 16:16:32 INFO util.GSet: capacity      = 2^20 = 1048576 entries
19/05/07 16:16:32 INFO namenode.NameNode: Caching file names occuring more than 10 times
19/05/07 16:16:32 INFO util.GSet: Computing capacity for map cachedBlocks
19/05/07 16:16:32 INFO util.GSet: VM type       = 64-bit
19/05/07 16:16:32 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
19/05/07 16:16:32 INFO util.GSet: capacity      = 2^18 = 262144 entries
19/05/07 16:16:32 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
19/05/07 16:16:32 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
19/05/07 16:16:32 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
19/05/07 16:16:32 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
19/05/07 16:16:32 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
19/05/07 16:16:32 INFO util.GSet: Computing capacity for map NameNodeRetryCache
19/05/07 16:16:32 INFO util.GSet: VM type       = 64-bit
19/05/07 16:16:32 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
19/05/07 16:16:32 INFO util.GSet: capacity      = 2^15 = 32768 entries
19/05/07 16:16:32 INFO namenode.NNConf: ACLs enabled? false
19/05/07 16:16:32 INFO namenode.NNConf: XAttrs enabled? true
19/05/07 16:16:32 INFO namenode.NNConf: Maximum size of an xattr: 16384
Re-format filesystem in Storage Directory /home/zkkafka/hadoop/hdfs/namenode ? (Y or N) Y
Re-format filesystem in QJM to [10.156.50.35:8485, 10.156.50.36:8485, 10.156.50.37:8485] ? (Y or N) Y
19/05/07 16:16:43 INFO namenode.FSImage: Allocated new BlockPoolId: BP-307190116-10.156.50.35-1557217003315
19/05/07 16:16:43 INFO common.Storage: Storage directory /home/zkkafka/hadoop/hdfs/namenode has been successfully formatted.
19/05/07 16:16:43 INFO namenode.FSImageFormatProtobuf: Saving image file /home/zkkafka/hadoop/hdfs/namenode/current/fsimage.ckpt_0000000000000000000 using no compression
19/05/07 16:16:43 INFO namenode.FSImageFormatProtobuf: Image file /home/zkkafka/hadoop/hdfs/namenode/current/fsimage.ckpt_0000000000000000000 of size 324 bytes saved in 0 seconds.
19/05/07 16:16:44 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
19/05/07 16:16:44 INFO util.ExitUtil: Exiting with status 0
19/05/07 16:16:44 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at yanfabu2-35.base.app.dev.yf/10.156.50.35
************************************************************/

 

 

 

7.5.9.5 Start the services on master1: sbin/start-dfs.sh and sbin/start-yarn.sh

 

[zkkafka@yanfabu2-35 hadoop]$ sbin/start-dfs.sh
Starting namenodes on [master1 master2]
master2: starting namenode, logging to /home/zkkafka/hadoop/hdfs/logs/hadoop-zkkafka-namenode-yanfabu2-36.base.app.dev.yf.out
master1: starting namenode, logging to /home/zkkafka/hadoop/hdfs/logs/hadoop-zkkafka-namenode-yanfabu2-35.base.app.dev.yf.out
hadoop3: starting datanode, logging to /home/zkkafka/hadoop/hdfs/logs/hadoop-zkkafka-datanode-yanfabu2-37.base.app.dev.yf.out
Starting journal nodes [master1 master2 slaver1]
master1: journalnode running as process 83576. Stop it first.
slaver1: journalnode running as process 60920. Stop it first.
master2: journalnode running as process 54881. Stop it first.
Starting ZK Failover Controllers on NN hosts [master1 master2]
master2: starting zkfc, logging to /home/zkkafka/hadoop/hdfs/logs/hadoop-zkkafka-zkfc-yanfabu2-36.base.app.dev.yf.out
master1: starting zkfc, logging to /home/zkkafka/hadoop/hdfs/logs/hadoop-zkkafka-zkfc-yanfabu2-35.base.app.dev.yf.out

 

 

 

[zkkafka@yanfabu2-35 hadoop]$  sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/zkkafka/hadoop/logs/yarn-zkkafka-resourcemanager-yanfabu2-35.base.app.dev.yf.out
hadoop3: starting nodemanager, logging to /home/zkkafka/hadoop/logs/yarn-zkkafka-nodemanager-yanfabu2-37.base.app.dev.yf.out

 

 

 

[zkkafka@yanfabu2-35 hadoop]$ jps
59330 QuorumPeerMain
84210 ResourceManager
84101 DFSZKFailoverController
83780 NameNode
56377 Kafka
83576 JournalNode
84282 Jps
[zkkafka@yanfabu2-35 hadoop]$ 


[zkkafka@yanfabu2-36 hadoop]$ jps
54881 JournalNode
37365 QuorumPeerMain
55096 Jps
34571 Kafka
55051 DFSZKFailoverController
[zkkafka@yanfabu2-36 hadoop]$ 

[zkkafka@yanfabu2-37 hadoop]$ jps
60994 DataNode
61252 Jps
60920 JournalNode
61130 NodeManager
42955 QuorumPeerMain
40189 Kafka
[zkkafka@yanfabu2-37 hadoop]$ 
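
At this point the active NameNode state can already be queried (optional check; nn2 will only respond after step 7.5.9.6):

bin/hdfs haadmin -getServiceState nn1    # expected: active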

 

 

 

7.5.9.6 Sync the standby NameNode with the active NameNode's metadata (run on master2): bin/hdfs namenode -bootstrapStandby

 

[zkkafka@yanfabu2-36 hadoop]$ bin/hdfs namenode -bootstrapStandby
19/05/08 13:25:58 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = yanfabu2-36.base.app.dev.yf/10.156.50.36
STARTUP_MSG:   args = [-bootstrapStandby]
STARTUP_MSG:   version = 2.6.5
STARTUP_MSG:   classpath = /home/zkkafka/hadoop/etc/hadoop:/home/zkkafka/hadoop/share/hadoop/common/lib/activation-1.1.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/xz-1.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/htrace-core-3.0.4.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/asm-3.2.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/hadoop-annotations-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/curator-framework-2.6.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/curator-recipes-2.6.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/junit-4.11.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/hadoop-auth-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/home/zkkafka/hadoop/share/h
adoop/common/lib/jetty-6.1.26.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/curator-client-2.6.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/zkkafka/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/zkkafka/hadoop/share/hadoop/common/hadoop-common-2.6.5-tests.jar:/home/zkkafka/hadoop/share/hadoop/common/hadoop-nfs-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/common/hadoop-common-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/hdfs:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/htrace-core-3.0.4.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.6.5-tests.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/home/zkkafka/hado
op/share/hadoop/yarn/lib/xz-1.0.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/zkkafka/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/home/zkkafka/hadoop/share/hadoop/mapreduc
e/lib/paranamer-2.3.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.5-tests.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.6.5.jar:/home/zkkafka/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.6.5.jar:/home/zkkafka/hadoop/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://github.com/apache/hadoop.git -r e8c9fe0b4c252caf2ebf1464220599650f119997; compiled by 'sjlee' on 2016-10-02T23:43Z
STARTUP_MSG:   java = 1.8.0_151
************************************************************/
19/05/08 13:25:58 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
19/05/08 13:25:58 INFO namenode.NameNode: createNameNode [-bootstrapStandby]
19/05/08 13:25:59 WARN common.Util: Path /home/zkkafka/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
19/05/08 13:25:59 WARN common.Util: Path /home/zkkafka/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
=====================================================
About to bootstrap Standby ID nn2 from:
           Nameservice ID: master
        Other Namenode ID: nn1
  Other NN's HTTP address: http://master1:50070
  Other NN's IPC  address: master1/10.156.50.35:53310
             Namespace ID: 383293971
            Block pool ID: BP-145210633-10.156.50.35-1557287718153
               Cluster ID: CID-a2af7cbb-ba2c-4043-8e54-a9cd6c457ab6
           Layout version: -60
       isUpgradeFinalized: true
=====================================================
19/05/08 13:25:59 INFO common.Storage: Storage directory /home/zkkafka/hadoop/hdfs/namenode has been successfully formatted.
19/05/08 13:25:59 WARN common.Util: Path /home/zkkafka/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
19/05/08 13:25:59 WARN common.Util: Path /home/zkkafka/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
19/05/08 13:26:00 INFO namenode.TransferFsImage: Opening connection to http://master1:50070/imagetransfer?getimage=1&txid=0&storageInfo=-60:383293971:0:CID-a2af7cbb-ba2c-4043-8e54-a9cd6c457ab6
19/05/08 13:26:00 INFO namenode.TransferFsImage: Image Transfer timeout configured to 60000 milliseconds
19/05/08 13:26:00 INFO namenode.TransferFsImage: Transfer took 0.00s at 0.00 KB/s
19/05/08 13:26:00 INFO namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000000 size 324 bytes.
19/05/08 13:26:00 INFO util.ExitUtil: Exiting with status 0
19/05/08 13:26:00 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at yanfabu2-36.base.app.dev.yf/10.156.50.36
************************************************************/

 

 

 

7.5.9.7 Start the standby NameNode (run on master2): sbin/hadoop-daemon.sh start namenode

 

[zkkafka@yanfabu2-36 hadoop]$ sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /home/zkkafka/hadoop/hdfs/logs/hadoop-zkkafka-namenode-yanfabu2-36.base.app.dev.yf.out
[zkkafka@yanfabu2-36 hadoop]$ 
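
Once the standby NameNode is running, both NameNodes should report the expected HA state. A minimal check from the shell, assuming the NameNode IDs nn1 (master1) and nn2 (master2) configured in hdfs-site.xml (the same IDs shown in the bootstrap log above):

hdfs haadmin -getServiceState nn1     # expected: active   (master1)
hdfs haadmin -getServiceState nn2     # expected: standby  (master2)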

 

 

7.5.9.8 Check the web UI

http://10.156.50.35:8088/cluster/nodes
http://10.156.50.35:50070/
http://10.156.50.36:50070/
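
If a browser is not handy, the same state can be read over HTTP. A sketch using the NameNode JMX endpoint on port 50070 (the NameNodeStatus bean name is what this Hadoop 2.6.x build is expected to expose; verify against your deployment):

curl 'http://master1:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus'
curl 'http://master2:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus'
# each JSON response should contain a "State" field: "active" or "standby"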

7.5.9.9 Failover test

Kill the NameNode process on master1 and you will see the NameNode on master2 automatically switch from standby to active.
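
A sketch of this test from the shell (the process id will differ on your machines; <namenode-pid> below is a placeholder for whatever jps prints):

# on master1: locate and kill the active NameNode
jps | grep NameNode
kill -9 <namenode-pid>

# on any node: confirm that failover happened
hdfs haadmin -getServiceState nn2     # should now report: active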

 

7.5.9.10 Test Hadoop MapReduce

[zkkafka@yanfabu2-35 hadoop]$  hdfs dfs -ls /  
[zkkafka@yanfabu2-35 hadoop]$ ls
bin  etc  hdfs  include  lib  libexec  LICENSE.txt  logs  NOTICE.txt  README.txt  sbin  share  tmp
[zkkafka@yanfabu2-35 hadoop]$ vi a.txt 
[zkkafka@yanfabu2-35 hadoop]$ hdfs dfs -put a.txt / 
[zkkafka@yanfabu2-35 hadoop]$  hdfs dfs -ls /  
Found 1 items
-rw-r--r--   2 zkkafka supergroup         67 2019-05-08 15:11 /a.txt
[zkkafka@yanfabu2-35 hadoop]$ cd /usr/local/hadoop/share/hadoop/mapreduce
-bash: cd: /usr/local/hadoop/share/hadoop/mapreduce: No such file or directory
[zkkafka@yanfabu2-35 hadoop]$ ls
a.txt  bin  etc  hdfs  include  lib  libexec  LICENSE.txt  logs  NOTICE.txt  README.txt  sbin  share  tmp
[zkkafka@yanfabu2-35 hadoop]$ cd share/hadoop/mapreduce/
[zkkafka@yanfabu2-35 mapreduce]$ ls
hadoop-mapreduce-client-app-2.6.5.jar     hadoop-mapreduce-client-hs-2.6.5.jar          hadoop-mapreduce-client-jobclient-2.6.5-tests.jar  lib
hadoop-mapreduce-client-common-2.6.5.jar  hadoop-mapreduce-client-hs-plugins-2.6.5.jar  hadoop-mapreduce-client-shuffle-2.6.5.jar          lib-examples
hadoop-mapreduce-client-core-2.6.5.jar    hadoop-mapreduce-client-jobclient-2.6.5.jar   hadoop-mapreduce-examples-2.6.5.jar                sources
[zkkafka@yanfabu2-35 mapreduce]$ hadoop jar  hadoop-mapreduce-examples-2.6.5.jar  wordcount /a.txt /out
19/05/08 15:12:50 INFO input.FileInputFormat: Total input paths to process : 1
19/05/08 15:12:51 INFO mapreduce.JobSubmitter: number of splits:1
19/05/08 15:12:52 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1557299349575_0001
19/05/08 15:12:53 INFO impl.YarnClientImpl: Submitted application application_1557299349575_0001
19/05/08 15:12:53 INFO mapreduce.Job: The url to track the job: http://master1:8088/proxy/application_1557299349575_0001/
19/05/08 15:12:53 INFO mapreduce.Job: Running job: job_1557299349575_0001
19/05/08 15:13:03 INFO mapreduce.Job: Job job_1557299349575_0001 running in uber mode : false
19/05/08 15:13:03 INFO mapreduce.Job:  map 0% reduce 0%
19/05/08 15:13:10 INFO mapreduce.Job:  map 100% reduce 0%
19/05/08 15:13:18 INFO mapreduce.Job:  map 100% reduce 100%
19/05/08 15:13:19 INFO mapreduce.Job: Job job_1557299349575_0001 completed successfully
19/05/08 15:13:19 INFO mapreduce.Job: Counters: 49
	File System Counters
		FILE: Number of bytes read=79
		FILE: Number of bytes written=220503
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=151
		HDFS: Number of bytes written=45
		HDFS: Number of read operations=6
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters 
		Launched map tasks=1
		Launched reduce tasks=1
		Data-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=3409
		Total time spent by all reduces in occupied slots (ms)=5527
		Total time spent by all map tasks (ms)=3409
		Total time spent by all reduce tasks (ms)=5527
		Total vcore-milliseconds taken by all map tasks=3409
		Total vcore-milliseconds taken by all reduce tasks=5527
		Total megabyte-milliseconds taken by all map tasks=3490816
		Total megabyte-milliseconds taken by all reduce tasks=5659648
	Map-Reduce Framework
		Map input records=9
		Map output records=15
		Map output bytes=126
		Map output materialized bytes=79
		Input split bytes=84
		Combine input records=15
		Combine output records=7
		Reduce input groups=7
		Reduce shuffle bytes=79
		Reduce input records=7
		Reduce output records=7
		Spilled Records=14
		Shuffled Maps =1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=180
		CPU time spent (ms)=2130
		Physical memory (bytes) snapshot=401084416
		Virtual memory (bytes) snapshot=4201586688
		Total committed heap usage (bytes)=289931264
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters 
		Bytes Read=67
	File Output Format Counters 
		Bytes Written=45
[zkkafka@yanfabu2-35 mapreduce]$  hdfs dfs -text /out/part-r-00000 
a	1
aaaa	1
bao	4
baoyou	2
byou	1
ccc	1
you	5
[zkkafka@yanfabu2-35 mapreduce]$ 
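
Note that a MapReduce job fails if its output directory already exists, which is why the second run in the next step writes to /out2. To rerun the example with the same output path, remove the old directory first:

hdfs dfs -rm -r /out
hadoop jar hadoop-mapreduce-examples-2.6.5.jar wordcount /a.txt /out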

 

 

7.5.9.11 Test Hadoop MapReduce after killing the active NameNode

 

[zkkafka@yanfabu2-35 mapreduce]$ jps
59330 QuorumPeerMain
56377 Kafka
86248 NameNode
86680 ResourceManager
86570 DFSZKFailoverController
86044 JournalNode
87180 Jps
[zkkafka@yanfabu2-35 mapreduce]$ 
[zkkafka@yanfabu2-35 mapreduce]$ 
[zkkafka@yanfabu2-35 mapreduce]$ 
[zkkafka@yanfabu2-35 mapreduce]$ 
[zkkafka@yanfabu2-35 mapreduce]$ kill -9 86248
[zkkafka@yanfabu2-35 mapreduce]$ 
[zkkafka@yanfabu2-35 mapreduce]$ 
[zkkafka@yanfabu2-35 mapreduce]$ jps
59330 QuorumPeerMain
87193 Jps
56377 Kafka
86680 ResourceManager
86570 DFSZKFailoverController
86044 JournalNode
[zkkafka@yanfabu2-35 mapreduce]$ 
[zkkafka@yanfabu2-35 mapreduce]$ 
[zkkafka@yanfabu2-35 mapreduce]$ 
[zkkafka@yanfabu2-35 mapreduce]$ 
[zkkafka@yanfabu2-35 mapreduce]$ 
[zkkafka@yanfabu2-35 mapreduce]$ hadoop jar  hadoop-mapreduce-examples-2.6.5.jar  wordcount /a.txt /out2
19/05/08 15:18:00 INFO input.FileInputFormat: Total input paths to process : 1
19/05/08 15:18:00 INFO mapreduce.JobSubmitter: number of splits:1
19/05/08 15:18:00 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1557299349575_0002
19/05/08 15:18:01 INFO impl.YarnClientImpl: Submitted application application_1557299349575_0002
19/05/08 15:18:01 INFO mapreduce.Job: The url to track the job: http://master1:8088/proxy/application_1557299349575_0002/
19/05/08 15:18:01 INFO mapreduce.Job: Running job: job_1557299349575_0002
19/05/08 15:18:09 INFO mapreduce.Job: Job job_1557299349575_0002 running in uber mode : false
19/05/08 15:18:09 INFO mapreduce.Job:  map 0% reduce 0%
19/05/08 15:18:15 INFO mapreduce.Job:  map 100% reduce 0%
19/05/08 15:18:21 INFO mapreduce.Job:  map 100% reduce 100%
19/05/08 15:18:22 INFO mapreduce.Job: Job job_1557299349575_0002 completed successfully
19/05/08 15:18:22 INFO mapreduce.Job: Counters: 49
	File System Counters
		FILE: Number of bytes read=79
		FILE: Number of bytes written=220505
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=151
		HDFS: Number of bytes written=45
		HDFS: Number of read operations=6
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters 
		Launched map tasks=1
		Launched reduce tasks=1
		Data-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=3567
		Total time spent by all reduces in occupied slots (ms)=3580
		Total time spent by all map tasks (ms)=3567
		Total time spent by all reduce tasks (ms)=3580
		Total vcore-milliseconds taken by all map tasks=3567
		Total vcore-milliseconds taken by all reduce tasks=3580
		Total megabyte-milliseconds taken by all map tasks=3652608
		Total megabyte-milliseconds taken by all reduce tasks=3665920
	Map-Reduce Framework
		Map input records=9
		Map output records=15
		Map output bytes=126
		Map output materialized bytes=79
		Input split bytes=84
		Combine input records=15
		Combine output records=7
		Reduce input groups=7
		Reduce shuffle bytes=79
		Reduce input records=7
		Reduce output records=7
		Spilled Records=14
		Shuffled Maps =1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=162
		CPU time spent (ms)=2100
		Physical memory (bytes) snapshot=410075136
		Virtual memory (bytes) snapshot=4200316928
		Total committed heap usage (bytes)=299892736
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters 
		Bytes Read=67
	File Output Format Counters 
		Bytes Written=45
[zkkafka@yanfabu2-35 mapreduce]$  hdfs dfs -text /out2/part-r-00000 
a	1
aaaa	1
bao	4
baoyou	2
byou	1
ccc	1
you	5
[zkkafka@yanfabu2-35 mapreduce]$ 

 

 

After the active NameNode on master1 is killed, the standby NameNode on master2 takes over as active and MapReduce jobs still run successfully.
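
To bring master1 back into the cluster, start its NameNode again; it should come up as the new standby and resynchronize from the JournalNodes (a sketch, run on master1 from the hadoop directory):

sbin/hadoop-daemon.sh start namenode
hdfs haadmin -getServiceState nn1     # expected: standby
hdfs haadmin -getServiceState nn2     # remains active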