Hadoop Cluster: HDFS and YARN Start and Stop Commands

Suppose we have only three Linux virtual machines, with hostnames hadoop01, hadoop02, and hadoop03. The Hadoop cluster is deployed on them as follows:

hadoop01: 1 namenode, 1 datanode, 1 journalnode, 1 zkfc, 1 resourcemanager, 1 nodemanager

hadoop02: 1 namenode, 1 datanode, 1 journalnode, 1 zkfc, 1 resourcemanager, 1 nodemanager

hadoop03: 1 datanode, 1 journalnode, 1 nodemanager
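The batch start/stop scripts decide which hosts to reach by reading Hadoop's worker list (the etc/hadoop/slaves file in Hadoop 2.x). For the layout above it would plausibly contain one hostname per line:

```
hadoop01
hadoop02
hadoop03
```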

 

Below we walk through the commands for starting and stopping HDFS and YARN.

 

1. Start the HDFS cluster (using Hadoop's batch start script)

/root/apps/hadoop/sbin/start-dfs.sh
[root@hadoop01 ~]# /root/apps/hadoop/sbin/start-dfs.sh 
Starting namenodes on [hadoop01 hadoop02]
hadoop01: starting namenode, logging to /root/apps/hadoop/logs/hadoop-root-namenode-hadoop01.out
hadoop02: starting namenode, logging to /root/apps/hadoop/logs/hadoop-root-namenode-hadoop02.out
hadoop03: starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop03.out
hadoop02: starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop02.out
hadoop01: starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop01.out
Starting journal nodes [hadoop01 hadoop02 hadoop03]
hadoop03: starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop03.out
hadoop02: starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop02.out
hadoop01: starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop01.out
Starting ZK Failover Controllers on NN hosts [hadoop01 hadoop02]
hadoop01: starting zkfc, logging to /root/apps/hadoop/logs/hadoop-root-zkfc-hadoop01.out
hadoop02: starting zkfc, logging to /root/apps/hadoop/logs/hadoop-root-zkfc-hadoop02.out
[root@hadoop01 ~]# 

As the startup log shows, the start-dfs.sh script uses ssh to batch-start the namenode, datanode, journalnode, and zkfc processes across multiple nodes.
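The fan-out behavior can be sketched as a simple ssh loop. This is only an illustration of the pattern, not the actual start-dfs.sh code; the host list, the path, and the DRY_RUN guard are assumptions for this sketch:

```shell
#!/bin/sh
# Sketch of how a batch script fans a daemon command out over ssh.
# With DRY_RUN=1 (the default here) it only prints the commands.
HADOOP_HOME=${HADOOP_HOME:-/root/apps/hadoop}
HOSTS="hadoop01 hadoop02 hadoop03"

start_datanodes() {
  for host in $HOSTS; do
    cmd="$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode"
    if [ "${DRY_RUN:-1}" = "1" ]; then
      echo "ssh $host $cmd"      # show what would run
    else
      ssh "$host" "$cmd"         # actually start the daemon remotely
    fi
  done
}

start_datanodes
```

The real script reads the host list from the slaves file and handles logging and pid files, but the ssh loop is the essential mechanism.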

 

2. Stop the HDFS cluster (using Hadoop's batch stop script)

/root/apps/hadoop/sbin/stop-dfs.sh
[root@hadoop01 ~]# /root/apps/hadoop/sbin/stop-dfs.sh 
Stopping namenodes on [hadoop01 hadoop02]
hadoop02: stopping namenode
hadoop01: stopping namenode
hadoop02: stopping datanode
hadoop03: stopping datanode
hadoop01: stopping datanode
Stopping journal nodes [hadoop01 hadoop02 hadoop03]
hadoop03: stopping journalnode
hadoop02: stopping journalnode
hadoop01: stopping journalnode
Stopping ZK Failover Controllers on NN hosts [hadoop01 hadoop02]
hadoop01: stopping zkfc
hadoop02: stopping zkfc
[root@hadoop01 ~]# 

3. Start individual processes

[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /root/apps/hadoop/logs/hadoop-root-namenode-hadoop01.out
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /root/apps/hadoop/logs/hadoop-root-namenode-hadoop02.out
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop01.out
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop02.out
[root@hadoop03 apps]# /root/apps/hadoop/sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop03.out
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop01.out
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop02.out
[root@hadoop03 apps]# /root/apps/hadoop/sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop03.out
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start zkfc
starting zkfc, logging to /root/apps/hadoop/logs/hadoop-root-zkfc-hadoop01.out
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start zkfc
starting zkfc, logging to /root/apps/hadoop/logs/hadoop-root-zkfc-hadoop02.out

Check the processes running on each of the three virtual machines after startup:
[root@hadoop01 ~]# jps
6695 DataNode
2002 QuorumPeerMain
6879 DFSZKFailoverController
7035 Jps
6800 JournalNode
6580 NameNode
[root@hadoop01 ~]# 

 

[root@hadoop02 ~]# jps
6360 JournalNode
6436 DFSZKFailoverController
2130 QuorumPeerMain
6541 Jps
6255 DataNode
6155 NameNode
[root@hadoop02 ~]# 

 

[root@hadoop03 apps]# jps
5331 Jps
5103 DataNode
5204 JournalNode
2258 QuorumPeerMain
[root@hadoop03 apps]# 
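Rather than eyeballing three jps listings, you can diff jps output against the daemons each node is supposed to run. The helper below is a hypothetical convenience script, not part of Hadoop:

```shell
#!/bin/sh
# check_daemons JPS_OUTPUT NAME... prints a MISSING line for every
# expected daemon name that does not appear in the given jps output.
check_daemons() {
  jps_output=$1
  shift
  for want in "$@"; do
    if ! printf '%s\n' "$jps_output" | grep -qw "$want"; then
      echo "MISSING: $want"
    fi
  done
}

# Example: verify hadoop03 against its expected daemon set.
sample="5103 DataNode
5204 JournalNode
2258 QuorumPeerMain"
check_daemons "$sample" DataNode JournalNode QuorumPeerMain  # prints nothing when all are up
```

On a live node you would feed it real output, e.g. `check_daemons "$(jps)" NameNode DataNode JournalNode DFSZKFailoverController`.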

 

4. Stop individual processes
[root@hadoop01 ~]# jps
6695 DataNode
2002 QuorumPeerMain
8486 Jps
6879 DFSZKFailoverController
6800 JournalNode
6580 NameNode
[root@hadoop01 ~]# 
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop zkfc
stopping zkfc
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop journalnode
stopping journalnode
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop datanode
stopping datanode
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop namenode
stopping namenode
[root@hadoop01 ~]# jps
2002 QuorumPeerMain
8572 Jps
[root@hadoop01 ~]# 

 

[root@hadoop02 ~]# jps
6360 JournalNode
6436 DFSZKFailoverController
2130 QuorumPeerMain
7378 Jps
6255 DataNode
6155 NameNode
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop zkfc
stopping zkfc
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop journalnode
stopping journalnode
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop datanode
stopping datanode
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop namenode
stopping namenode
[root@hadoop02 ~]# jps
7455 Jps
2130 QuorumPeerMain
[root@hadoop02 ~]# 

 

[root@hadoop03 apps]# jps
5103 DataNode
5204 JournalNode
5774 Jps
2258 QuorumPeerMain
[root@hadoop03 apps]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop journalnode
stopping journalnode
[root@hadoop03 apps]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop datanode
stopping datanode
[root@hadoop03 apps]# jps
5818 Jps
2258 QuorumPeerMain
[root@hadoop03 apps]# 

 

 

5. Start the YARN cluster (using Hadoop's batch start script)

/root/apps/hadoop/sbin/start-yarn.sh
[root@hadoop01 ~]# /root/apps/hadoop/sbin/start-yarn.sh 
starting yarn daemons
starting resourcemanager, logging to /root/apps/hadoop/logs/yarn-root-resourcemanager-hadoop01.out
hadoop03: starting nodemanager, logging to /root/apps/hadoop/logs/yarn-root-nodemanager-hadoop03.out
hadoop02: starting nodemanager, logging to /root/apps/hadoop/logs/yarn-root-nodemanager-hadoop02.out
hadoop01: starting nodemanager, logging to /root/apps/hadoop/logs/yarn-root-nodemanager-hadoop01.out
[root@hadoop01 ~]# 

 

As the startup log shows, start-yarn.sh starts only a single ResourceManager process locally, while the NodeManagers on all three machines are started over ssh. The ResourceManager on hadoop02 therefore has to be started by hand.
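Once both ResourceManagers are up, you can confirm which one is active with Hadoop's `yarn rmadmin -getServiceState <rm-id>` command. The sketch below wraps it in a function; the RM IDs rm1/rm2 are an assumption and must match whatever yarn.resourcemanager.ha.rm-ids is set to in your yarn-site.xml:

```shell
#!/bin/sh
# active_rm prints the ID of the ResourceManager currently in the
# "active" state, querying each configured RM ID in turn.
RM_IDS=${RM_IDS:-"rm1 rm2"}

active_rm() {
  for id in $RM_IDS; do
    state=$(yarn rmadmin -getServiceState "$id" 2>/dev/null)
    if [ "$state" = "active" ]; then
      echo "$id"
      return 0
    fi
  done
  return 1   # no active ResourceManager found
}

# Usage on the cluster: active_rm
```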

6. Start the ResourceManager process on hadoop02

/root/apps/hadoop/sbin/yarn-daemon.sh start resourcemanager

 

 

7. Stop YARN (using Hadoop's batch stop script)

/root/apps/hadoop/sbin/stop-yarn.sh
[root@hadoop01 ~]# /root/apps/hadoop/sbin/stop-yarn.sh 
stopping yarn daemons
stopping resourcemanager
hadoop01: stopping nodemanager
hadoop03: stopping nodemanager
hadoop02: stopping nodemanager
no proxyserver to stop
[root@hadoop01 ~]# 

 

As the stop log shows, the stop-yarn.sh script stops only the local ResourceManager process, so the ResourceManager on hadoop02 has to be stopped separately.

 

8. Stop the ResourceManager on hadoop02

/root/apps/hadoop/sbin/yarn-daemon.sh stop resourcemanager

 

 

Note: individual HDFS-related processes are started and stopped with the "hadoop-daemon.sh" script, while individual YARN processes are started and stopped with the "yarn-daemon.sh" script.
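That distinction can be captured in a tiny wrapper that routes each daemon name to the right control script. This is a hypothetical helper, not something shipped with Hadoop; the daemon lists and the default path are assumptions:

```shell
#!/bin/sh
# daemon_script NAME prints which sbin script controls that daemon:
# HDFS daemons -> hadoop-daemon.sh, YARN daemons -> yarn-daemon.sh.
HADOOP_HOME=${HADOOP_HOME:-/root/apps/hadoop}

daemon_script() {
  case $1 in
    namenode|datanode|journalnode|zkfc)
      echo "$HADOOP_HOME/sbin/hadoop-daemon.sh" ;;
    resourcemanager|nodemanager)
      echo "$HADOOP_HOME/sbin/yarn-daemon.sh" ;;
    *)
      echo "unknown daemon: $1" >&2; return 1 ;;
  esac
}

# ctl ACTION DAEMON, e.g. "ctl stop resourcemanager". The echo is left
# in so this sketch prints the command instead of executing it.
ctl() {
  script=$(daemon_script "$2") || return 1
  echo "$script" "$1" "$2"
}
```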

 

Original post: http://www.cnblogs.com/jun1019/p/6266615.html
