
Too many fetch failures


http://lucene.472066.n3.nabble.com/Reg-Too-many-fetch-failures-Error-td4037975.html

 

http://www.slideshare.net/cloudera/hadoop-troubleshooting-101-kate-ting-cloudera

 

 

 

The advice from the first thread:

 

As you may be aware, this means the reducers are unable to fetch intermediate data from the TaskTrackers that ran the map tasks. You can try:

* increasing tasktracker.http.threads, so there are more threads to handle fetch requests from reducers;

* decreasing mapreduce.reduce.parallel.copies, so fewer copies/fetches are performed in parallel.
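On cdh3u4 (MRv1) these two knobs go in mapred-site.xml. A sketch of what that might look like; the values 80 and 3 are purely illustrative, not recommendations:

```xml
<!-- mapred-site.xml -->
<property>
  <!-- more TaskTracker threads serving map output to reducers (default 40) -->
  <name>tasktracker.http.threads</name>
  <value>80</value>
</property>
<property>
  <!-- fewer parallel fetches per reducer (default 5); MRv2 renames this
       property to mapreduce.reduce.parallel.copies -->
  <name>mapred.reduce.parallel.copies</name>
  <value>3</value>
</property>
```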

 

It could also be due to a temporary DNS issue.
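One quick way to rule out the DNS case is to check that the TaskTracker hostnames appearing in the reduce-side fetch errors actually resolve on the node doing the fetching. A minimal sketch using `getent` (standard on Linux); substitute the real TaskTracker hostnames for the list below:

```shell
# Check name resolution for this node and any suspect TaskTracker hosts.
# Replace/extend the list with hostnames from the fetch-failure log lines.
for host in "$(hostname)" localhost; do
    if getent hosts "$host" > /dev/null; then
        echo "$host resolves"
    else
        echo "$host does NOT resolve"
    fi
done
```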

 

 

In cdh3u4:

In TaskTracker:

workerThreads = conf.getInt("tasktracker.http.threads", 40);

In ReduceTask:

this.numCopiers = conf.getInt("mapred.reduce.parallel.copies", 5);
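Both getInt calls fall back to the hard-coded defaults (40 fetch-serving threads, 5 parallel copiers) when the property is unset. A minimal sketch of that lookup semantics, using plain java.util.Properties as a hypothetical stand-in for Hadoop's Configuration:

```java
import java.util.Properties;

public class ConfDefaults {
    // Mimics Configuration.getInt: parse the property if present,
    // otherwise return the supplied default.
    static int getInt(Properties conf, String name, int defaultValue) {
        String v = conf.getProperty(name);
        if (v == null) {
            return defaultValue;
        }
        return Integer.parseInt(v.trim());
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        // Nothing set: falls back to the default of 40.
        System.out.println(getInt(conf, "tasktracker.http.threads", 40));
        // Explicitly set: the configured value wins over the default of 5.
        conf.setProperty("mapred.reduce.parallel.copies", "10");
        System.out.println(getInt(conf, "mapred.reduce.parallel.copies", 5));
    }
}
```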

 

 

 http://lucene.472066.n3.nabble.com/Error-Too-Many-Fetch-Failures-td3990324.html

 

 $ cat /proc/sys/net/core/somaxconn 

1024

$ ulimit -n 

131072
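These two limits matter because every reduce-side fetch is a TCP connection to the TaskTracker's HTTP server: if the kernel listen backlog (somaxconn) or the file-descriptor limit is too small, connections get dropped and surface as fetch failures. A sketch of checking and raising them on Linux (the value 4096 is illustrative; raising limits requires root, so those commands are shown commented out):

```shell
# Inspect the current listen backlog and per-process fd limit
cat /proc/sys/net/core/somaxconn
ulimit -n

# To raise them (requires root), e.g.:
#   sysctl -w net.core.somaxconn=4096
#   echo 'net.core.somaxconn = 4096' >> /etc/sysctl.conf   # persist
#   ulimit -n 65536   # or edit /etc/security/limits.conf to persist
```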

 

 

 

 

 
