
Hadoop: Error reading task outputhttp

 
java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)
11/03/15 12:54:09 WARN mapred.JobClient: Error reading task outputhttp:.....

This problem is caused by Hadoop's logs directory no longer being able to accept new files. After clearing the logs folders on both the master and the slaves and resubmitting the job, everything ran normally.
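The cleanup itself is just a recursive delete on every node. Below is a minimal sketch, assuming Hadoop lives under ~/hadoop on each machine and the per-task logs sit in logs/userlogs (the exact location depends on hadoop.log.dir); the slave hostnames slave1 and slave2 are placeholders:

  $ # On the master: clear the accumulated per-task log directories
  $ rm -rf ~/hadoop/logs/userlogs/*
  $ # Repeat on each slave (hostnames here are placeholders)
  $ for h in slave1 slave2; do ssh "$h" 'rm -rf ~/hadoop/logs/userlogs/*'; done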

Here is the explanation from an English-language post:
Apparently, it's an OS limit on the number of sub-directories that can be created in another directory.  In this case, we had 31998 sub-directories under hadoop/userlogs/, so any new tasks would fail in Job Setup.

From the unix command line, mkdir fails as well:
  $ mkdir hadoop/userlogs/testdir
  mkdir: cannot create directory `hadoop/userlogs/testdir': Too many links

Difficult to track down because the Hadoop error message gives no hint whatsoever.  And normally, you'd look in the userlog itself for more info, but in this case the userlog couldn't be created.
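When the userlog can't tell you anything, the directory itself still can. On ext3, every sub-directory adds one hard link to its parent (through its `..' entry), and a directory's link count is capped at 32000, which is exactly why 31998 sub-directories is the ceiling. A minimal check, assuming the same hadoop/userlogs path as above:

  $ # Entries in userlogs, roughly one per finished task attempt
  $ ls hadoop/userlogs | wc -l
  $ # Hard-link count of the directory itself; 32000 means the ext3 cap is hit
  $ stat -c %h hadoop/userlogs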