1 fs.default.name
To run HDFS, you need to designate one machine as a namenode. In this case, the
property fs.default.name is an HDFS filesystem URI, whose host is the namenode's
hostname or IP address, and whose port is the port that the namenode will listen on for RPCs.
If no port is specified, the default of 8020 is used.
The fs.default.name property also doubles as specifying the default filesystem. The
default filesystem is used to resolve relative paths, which are handy to use since they
save typing (and avoid hardcoding knowledge of a particular namenode's address). For
example, with the default filesystem defined in Example 9-1, the relative URI /a/b is
resolved to hdfs://namenode/a/b.
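As a concrete sketch, fs.default.name is set in core-site.xml; the hostname namenode below is a placeholder for your own namenode's hostname or IP address:

```xml
<?xml version="1.0"?>
<!-- core-site.xml -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- No port is given, so the namenode listens on the default RPC port, 8020 -->
    <value>hdfs://namenode/</value>
  </property>
</configuration>
```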
2 dfs.name.dir
There are a few other configuration properties you should set for HDFS: those that set
the storage directories for the namenode and for datanodes. The property
dfs.name.dir specifies a list of directories where the namenode stores persistent
filesystem metadata (the edit log and the filesystem image). A copy of each of the metadata
files is stored in each directory for redundancy (that is, every directory in the list holds identical data).
It's common to configure dfs.name.dir so that the namenode metadata is written to one or two local disks, and
a remote disk, such as an NFS-mounted directory. Such a setup guards against failure
of a local disk, and against failure of the entire namenode, since in both cases the files can be
recovered and used to start a new namenode. (The secondary namenode takes only
periodic checkpoints of the namenode, so it does not provide an up-to-date backup of
the namenode.)
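A minimal sketch of such a setup in hdfs-site.xml; the paths are illustrative assumptions, where /remote/hdfs/name stands for an NFS-mounted directory:

```xml
<?xml version="1.0"?>
<!-- hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <!-- An identical copy of the metadata is written to every directory
         listed; /remote/hdfs/name here represents an NFS mount -->
    <value>/disk1/hdfs/name,/remote/hdfs/name</value>
  </property>
</configuration>
```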
3 dfs.data.dir
You should also set the dfs.data.dir property, which specifies a list of directories for
a datanode to store its blocks. Unlike the namenode, which uses multiple directories
for redundancy, a datanode round-robins writes between its storage directories (so the
directories hold different blocks, not copies of the same data), and for
performance you should therefore specify a storage directory for each local disk. Read
performance also benefits from having multiple disks for storage, because blocks will be spread
across them, and concurrent reads for distinct blocks will be correspondingly spread
across disks.
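A sketch with one storage directory per local disk; the mount points /disk1 and /disk2 are assumptions for a machine with two data disks:

```xml
<?xml version="1.0"?>
<!-- hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.data.dir</name>
    <!-- Block writes are round-robined across these directories,
         so each should sit on a different physical disk -->
    <value>/disk1/hdfs/data,/disk2/hdfs/data</value>
  </property>
</configuration>
```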
4 fs.checkpoint.dir
Finally, you should configure where the secondary namenode stores its checkpoints of
the filesystem. The fs.checkpoint.dir property specifies a list of directories where the
checkpoints are kept. Like the storage directories for the namenode, which keep
redundant copies of the namenode metadata, the checkpointed filesystem image is stored
in each checkpoint directory for redundancy.
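Sketching this in hdfs-site.xml, again with an illustrative path:

```xml
<?xml version="1.0"?>
<!-- hdfs-site.xml -->
<configuration>
  <property>
    <name>fs.checkpoint.dir</name>
    <!-- The checkpointed filesystem image is stored in full in each
         directory listed, for redundancy -->
    <value>/disk1/hdfs/namesecondary</value>
  </property>
</configuration>
```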
Note that the storage directories for HDFS are under Hadoop's temporary
directory by default (the hadoop.tmp.dir property, whose default
is /tmp/hadoop-${user.name}). Therefore it is critical that these properties
are set, so that data is not lost when the system clears out temporary
directories.