I set up a pseudo-distributed Hadoop environment locally. When putting a file from the local filesystem to HDFS, an exception occurred:
hadoop fs -put hello.log /hello/201803201140/
The exception message:
There are 0 datanode(s) running and no node(s) are excluded in this operation.
The DataNode's log file contained the following exception:
2018-11-21 14:49:15,524 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/Users/kpx/Datas/hadoop/hdfs/tmp/dfs/data
java.io.IOException: Incompatible clusterIDs in /Users/kpx/Datas/hadoop/hdfs/tmp/dfs/data: namenode clusterID = CID-8d444e87-7d47-497d-b92a-83a15c2f025d; datanode clusterID = CID-206e5c4d-31bf-40e7-ad76-4ecf4bb2fa5c
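The two clusterIDs can be compared directly, since HDFS records them in the `VERSION` files of its storage directories (`${hadoop.tmp.dir}/dfs/name/current/VERSION` for the NameNode and `${hadoop.tmp.dir}/dfs/data/current/VERSION` for the DataNode). A quick check, sketched here against sample files that reproduce the mismatch from the log above:

```shell
# Recreate the two VERSION files using the clusterIDs from this post's log
# (sample data; on a real node, grep the files under ${hadoop.tmp.dir}/dfs instead).
tmp=$(mktemp -d)
mkdir -p "$tmp/dfs/name/current" "$tmp/dfs/data/current"
echo "clusterID=CID-8d444e87-7d47-497d-b92a-83a15c2f025d" > "$tmp/dfs/name/current/VERSION"
echo "clusterID=CID-206e5c4d-31bf-40e7-ad76-4ecf4bb2fa5c" > "$tmp/dfs/data/current/VERSION"

# Extract and compare the two clusterIDs
nn_cid=$(grep '^clusterID=' "$tmp/dfs/name/current/VERSION" | cut -d= -f2)
dn_cid=$(grep '^clusterID=' "$tmp/dfs/data/current/VERSION" | cut -d= -f2)
if [ "$nn_cid" = "$dn_cid" ]; then
  echo "clusterIDs match"
else
  echo "clusterID mismatch: namenode=$nn_cid datanode=$dn_cid"
fi
```

If the two IDs differ, the DataNode will refuse to register with the NameNode, which is exactly the `Incompatible clusterIDs` error above.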
This indicated something was wrong with the DataNode. Running `jps` to check the Java processes showed that no DataNode had started:
Process list:
58400 ResourceManager
58499 NodeManager
58206 SecondaryNameNode
57967 NameNode
It then dawned on me that this was probably caused by my earlier messy setup attempts: I had interrupted the installation several times and re-formatted the NameNode more than once. Each `hdfs namenode -format` generates a new clusterID for the NameNode, while the DataNode's storage directory keeps the old one, which produces exactly the mismatch in the log above.
Check the `hadoop.tmp.dir` property (`<name>hadoop.tmp.dir</name>`) in the `core-site.xml` configuration file, then go into the `dfs` directory under that path, which contains:
data
name
namesecondary
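For reference, the property in question in `core-site.xml` looks like the following. The value shown is inferred from the log paths above (`/Users/kpx/Datas/hadoop/hdfs/tmp/dfs/data` is `${hadoop.tmp.dir}/dfs/data`); adjust it to your own setup:

```xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/Users/kpx/Datas/hadoop/hdfs/tmp</value>
  </property>
</configuration>
```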
Delete everything inside these three directories (`data`, `name`, `namesecondary`), then re-run the NameNode format:
hdfs namenode -format
Then re-run the `put` command to upload the file. Success!
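One caveat: wiping the storage directories destroys all blocks already stored in HDFS, which is fine on a fresh pseudo-distributed setup but not in general. A less destructive alternative is to copy the NameNode's clusterID into the DataNode's `VERSION` file (with the DataNode stopped) and restart it. A sketch of that edit, demonstrated on sample files; on a real node, `dfs` would be the directory under `hadoop.tmp.dir` shown above:

```shell
# Sample layout standing in for ${hadoop.tmp.dir}/dfs; stop the DataNode
# before editing the real file, and restart it afterwards.
dfs=$(mktemp -d)
mkdir -p "$dfs/name/current" "$dfs/data/current"
echo "clusterID=CID-8d444e87-7d47-497d-b92a-83a15c2f025d" > "$dfs/name/current/VERSION"
echo "clusterID=CID-206e5c4d-31bf-40e7-ad76-4ecf4bb2fa5c" > "$dfs/data/current/VERSION"

# Copy the NameNode's clusterID into the DataNode's VERSION file
nn_cid=$(grep '^clusterID=' "$dfs/name/current/VERSION" | cut -d= -f2)
sed -i.bak "s/^clusterID=.*/clusterID=$nn_cid/" "$dfs/data/current/VERSION"

grep '^clusterID=' "$dfs/data/current/VERSION"
```

`sed -i.bak` keeps a backup of the original `VERSION` file and works on both GNU and BSD/macOS sed, which matters here since the paths in this post are on a Mac.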