
Hadoop exception 5


===========jobstarttime:2014-09-25 15:13:13
14/09/25 15:13:17 INFO client.RMProxy: Connecting to ResourceManager at ddp-nn-002/10.5.25.3:8032
14/09/25 15:13:17 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 688459 for tag_bonc on ha-hdfs:ns1
14/09/25 15:13:17 INFO security.TokenCache: Got dt for hdfs://ns1; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:ns1, Ident: (HDFS_DELEGATION_TOKEN token 688459 for tag_bonc)
14/09/25 15:13:18 INFO input.FileInputFormat: Total input paths to process : 1429
14/09/25 15:13:18 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
14/09/25 15:13:18 INFO lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev 8e266e052e423af592871e2dfe09d54c03f6a0e8]
14/09/25 15:13:20 INFO mapreduce.JobSubmitter: number of splits:7244
14/09/25 15:13:20 INFO Configuration.deprecation: mapred.job.queue.name is deprecated. Instead, use mapreduce.job.queuename
14/09/25 15:13:20 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1409126717340_61477
14/09/25 15:13:20 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:ns1, Ident: (HDFS_DELEGATION_TOKEN token 688459 for tag_bonc)
14/09/25 15:13:21 INFO impl.YarnClientImpl: Submitted application application_1409126717340_61477
14/09/25 15:13:21 INFO mapreduce.Job: The url to track the job: http://DDP-NN-002:23188/proxy/application_1409126717340_61477/
14/09/25 15:13:21 INFO mapreduce.Job: Running job: job_1409126717340_61477
14/09/25 15:13:30 INFO mapreduce.Job: Job job_1409126717340_61477 running in uber mode : false
14/09/25 15:13:30 INFO mapreduce.Job:  map 0% reduce 0%
14/09/25 15:14:29 INFO mapreduce.Job: Task Id : attempt_1409126717340_61477_m_000111_0, Status : FAILED
Error: java.io.IOException: Spill failed
	at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.checkSpillException(MapTask.java:1535)
	at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1444)
	at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:700)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:770)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: org.apache.hadoop.fs.FileAlreadyExistsException: failed to create file /user/tag_bonc/private/gdpi/tag/20140923/_temporary/1/_temporary/attempt_1409126717340_61477_m_000111_0/match/ordinal-m-00111.gz on client 10.5.25.86 because the file exists
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2270)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2198)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2151)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:505)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:354)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1986)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1982)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1980)

	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1603)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1465)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1390)
	at org.apache.hadoop.hdfs.DistributedFileSystem$9.doCall(DistributedFileSystem.java:631)
	at org.apache.hadoop.hdfs.DistributedFileSystem$9.doCall(DistributedFileSystem.java:627)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:627)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:431)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:784)
	at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.getRecordWriter(TextOutputFormat.java:135)
	at org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat$LazyRecordWriter.write(LazyOutputFormat.java:113)
	at org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.write(MultipleOutputs.java:457)
	at com.bonc.mapred.tool.BrandurlTool$DataCleanReducer.reduce(BrandurlTool.java:998)
	at com.bonc.mapred.tool.BrandurlTool$DataCleanReducer.reduce(BrandurlTool.java:1)
	at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
	at org.apache.hadoop.mapred.Task$NewCombinerRunner.combine(Task.java:1645)
	at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1611)
	at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.access$900(MapTask.java:853)
	at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$SpillThread.run(MapTask.java:1505)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.FileAlreadyExistsException): failed to create file /user/tag_bonc/private/gdpi/tag/20140923/_temporary/1/_temporary/attempt_1409126717340_61477_m_000111_0/match/ordinal-m-00111.gz on client 10.5.25.86 because the file exists
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2270)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2198)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2151)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:505)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:354)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1986)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1982)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1980)

	at org.apache.hadoop.ipc.Client.call(Client.java:1409)
	at org.apache.hadoop.ipc.Client.call(Client.java:1362)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at com.sun.proxy.$Proxy10.create(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:258)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy11.create(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1599)
	... 20 more

..... the error above repeats over and over ......
14/09/25 15:17:49 INFO mapreduce.Job:  map 100% reduce 100%
14/09/25 15:18:12 INFO mapreduce.Job: Job job_1409126717340_61477 failed with state FAILED due to: Task failed task_1409126717340_61477_m_000213
Job failed as tasks failed. failedMaps:1 failedReduces:0

14/09/25 15:18:12 INFO mapreduce.Job: Counters: 14
	Job Counters 
		Failed map tasks=1171
		Killed map tasks=480
		Launched map tasks=1651
		Other local map tasks=1107
		Data-local map tasks=644
		Rack-local map tasks=3
		Total time spent by all maps in occupied slots (ms)=470559560
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=117639890
		Total vcore-seconds taken by all map tasks=117639890
		Total megabyte-seconds taken by all map tasks=481852989440
	Map-Reduce Framework
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0
===========jobendtime:2014-09-25 15:18:12
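
What the stack trace seems to say: the failing FileSystem.create is reached through Task$NewCombinerRunner.combine -> Reducer.run -> DataCleanReducer.reduce -> MultipleOutputs.write, so the DataCleanReducer class is apparently also registered as the job's combiner. A combiner can run several times inside one map task (once per spill), and every run re-creates the same side-output file (.../attempt_1409126717340_61477_m_000111_0/match/ordinal-m-00111.gz), so the second spill fails with FileAlreadyExistsException, which in turn surfaces as "Spill failed". Below is a minimal sketch of that pattern; only the reducer class name, the LazyOutputFormat/TextOutputFormat chain, and the match/ordinal base path are taken from the trace, and everything else (mapper, key/value types, driver) is an assumption:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class SideOutputCombinerDemo {

    // Hypothetical stand-in for BrandurlTool.DataCleanReducer: a reducer
    // that writes a side output through MultipleOutputs.
    public static class DataCleanReducer extends Reducer<Text, Text, Text, Text> {
        private MultipleOutputs<Text, Text> mos;

        @Override
        protected void setup(Context ctx) {
            mos = new MultipleOutputs<>(ctx);
        }

        @Override
        protected void reduce(Text key, Iterable<Text> values, Context ctx)
                throws IOException, InterruptedException {
            for (Text v : values) {
                // baseOutputPath "match/ordinal" yields files such as
                // match/ordinal-m-00111 (.gz when output compression is on),
                // the exact path in the exception above.
                mos.write(key, v, "match/ordinal");
            }
        }

        @Override
        protected void cleanup(Context ctx) throws IOException, InterruptedException {
            mos.close();
        }
    }

    // Trivial mapper so the sketch type-checks end to end (assumed).
    public static class PassThroughMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            ctx.write(new Text("k"), value);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "side-output-combiner-demo");
        job.setJarByClass(SideOutputCombinerDemo.class);
        job.setMapperClass(PassThroughMapper.class);
        job.setReducerClass(DataCleanReducer.class);

        // The suspect line: a side-effecting reducer used as a combiner runs
        // once per spill inside each map task, so the second spill repeats
        // FileSystem.create on the same path and throws
        // FileAlreadyExistsException.
        job.setCombinerClass(DataCleanReducer.class);

        // The trace goes through LazyOutputFormat wrapping TextOutputFormat.
        LazyOutputFormat.setOutputFormatClass(job, TextOutputFormat.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

If the real driver looks like this, dropping the setCombinerClass call, or replacing it with a combiner that only aggregates and never writes files, should avoid the repeated create: Hadoop only guarantees that a combiner may run zero or more times, so it must be free of side effects.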

 

 


Follow-up: How can this problem be solved? Urgently looking for help.
September 25, 2014, 18:21
No answers yet.
