Nutch 1.3 + Hadoop Distributed Deployment (Personally Tested)
1. Make sure Hadoop is up and running
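A quick way to verify (a minimal sketch; jps ships with the JDK, and the daemon names are the usual Hadoop 0.20-era ones):
# on the master, the HDFS and MapReduce daemons should show up
jps    # expect NameNode, SecondaryNameNode, JobTracker
# on each slave
jps    # expect DataNode, TaskTracker
# confirm every DataNode has registered with the NameNode
bin/hadoop dfsadmin -report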
2. Download the Nutch 1.3 release and unpack it to a path of your choice
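For example (a sketch; the download URL is an assumption, substitute your usual Apache mirror):
# hypothetical mirror path; adjust to taste
wget http://archive.apache.org/dist/nutch/apache-nutch-1.3-bin.tar.gz
tar -xzf apache-nutch-1.3-bin.tar.gz -C /opt
export NUTCH_HOME=/opt/apache-nutch-1.3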
3. Crawling
Nutch 1.3 ships with two conf directories: one at NUTCH_HOME/conf, the other at runtime/local/conf.
runtime/local/conf holds the configuration used for local (single-machine) crawls;
NUTCH_HOME/conf is the one used for distributed crawls.
Below we focus on distributed crawling.
4. Distributed crawling
Run the distributed crawl command via runtime/deploy/bin/nutch (a distributed crawl must be launched from this directory; runtime/local is for local crawls only).
Grant the script execute permission first: chmod +x bin/nutch
5. Copy the Hadoop environment
Copy these 6 files from HADOOP_HOME/conf:
core-site.xml
hadoop-env.sh
hdfs-site.xml
mapred-site.xml
masters
slaves
into NUTCH_HOME/conf, as sketched below.
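A minimal sketch of the copy, assuming HADOOP_HOME and NUTCH_HOME are set:
# copy the cluster configuration into Nutch's conf directory
for f in core-site.xml hadoop-env.sh hdfs-site.xml mapred-site.xml masters slaves; do
  cp "$HADOOP_HOME/conf/$f" "$NUTCH_HOME/conf/"
done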
6. Configure nutch-site.xml
Setting a single property, http.agent.name, is enough. The complete file, including the <configuration> element the property must sit inside:
<?xml version="1.0"?>
<configuration>
  <property>
    <name>http.agent.name</name>
    <value>MyCrawl001</value>
  </property>
</configuration>
7. Configure regex-urlfilter.txt so dynamic pages are crawled
# skip file: ftp: and mailto: urls
-^(file|ftp|mailto):
# skip image and other suffixes we can't yet parse
-\.(gif|GIF|jpg|JPG|png|PNG|ico|ICO|css|sit|eps|wmf|zip|ppt|mpg|xls|gz|rpm|tgz|mov|MOV|exe|jpeg|JPEG|bmp|BMP)$
# accept URLs containing characters that usually mark queries; the stock rule
# is '-[?*!@=]', flipped to '+' here so that dynamic pages get crawled
+[?*!@=]
# skip URLs with slash-delimited segment that repeats 3+ times, to break loops
-.*(/[^/]+)/[^/]+\1/[^/]+\1/
# accept anything else
+.
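To sanity-check the rules before a cluster run, the regex filter class can be fed URLs on stdin (a hedged sketch: it assumes the bin/nutch script's CLASSNAME pass-through and the filter's stdin-reading main method, both of which stock 1.3 should have):
# prints '+<url>' if a URL is accepted, '-<url>' if rejected
echo "http://www.example.com/page.php?id=3" | \
  runtime/local/bin/nutch org.apache.nutch.urlfilter.regex.RegexURLFilter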
8. lib/native
This is the most important point when setting up Nutch 1.3 on a Hadoop cluster.
Below is the README.txt found under NUTCH_HOME/lib/native:
These libraries are purely optional, and if they are missing Hadoop will
use corresponding pure Java components. The impact of native compression
becomes noticeable with larger datasets and weaker CPU-s - if you notice
that the CPU is routinely saturated when a job is sorting or reducing,
then using these libs may help.
Installation instructions
=========================
You can obtain the necessary files from a distribution package of Hadoop,
e.g. hadoop-0.20.2.tar.gz. Unpack this archive, and copy the content of
lib/native here, so that the layout looks like this:
<Nutch home>/lib/native/Linux-amd64-64/...
<Nutch home>/lib/native/Linux-i386-32/...
Local runtime
-------------
The build process will include these native libraries when preparing
the /runtime/local environment for running in local mode.
/runtime/local/bin/nutch knows how to use these libs - if they are
found and correctly used you should see lines like this in your logs:
Distributed runtime
-------------------
If you want to use this component in an existing Hadoop cluster (when using
/runtime/deploy artifacts) you need to make sure these files are placed in
Hadoop/lib/native directory on each node, and then restart the cluster. If
you installed the cluster from a distribution package of Hadoop then these
libraries should already be in the right place and you shouldn't need to do
anything else.
The gist: copy these directories from HADOOP_HOME/lib/native
Linux-amd64-64
Linux-i386-32
into NUTCH_HOME/lib/native. Per the English original ("you need to make sure these files are placed in Hadoop/lib/native directory on each node, and then restart the cluster"), the files must be present on every node, followed by a cluster restart. I copied them over, as sketched below.
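A minimal sketch of those copies (the slaves file and the stop/start scripts are the standard 0.20-era ones; adjust paths to your layout):
# put the native libs into Nutch's tree
cp -r $HADOOP_HOME/lib/native/Linux-amd64-64 $NUTCH_HOME/lib/native/
cp -r $HADOOP_HOME/lib/native/Linux-i386-32 $NUTCH_HOME/lib/native/
# make sure every node's Hadoop has them too, then restart the cluster
for node in $(cat $HADOOP_HOME/conf/slaves); do
  scp -r $HADOOP_HOME/lib/native $node:$HADOOP_HOME/lib/
done
$HADOOP_HOME/bin/stop-all.sh && $HADOOP_HOME/bin/start-all.sh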
9. Run the crawl
runtime/deploy/bin/nutch crawl hdfs://server0:9000/user/suse/urls -dir crawl -depth 200 -threads 200 -topN 1000
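The command assumes the seed list is already on HDFS; a minimal sketch of preparing it (file names are illustrative):
# put a seed list where the injector can read it
echo "http://www.example.com/" > seed.txt
bin/hadoop fs -mkdir /user/suse/urls
bin/hadoop fs -put seed.txt /user/suse/urls/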
10. Success
If the MapReduce jobs run to completion, the configuration works:
11/08/22 16:33:26 INFO mapred.JobClient: Reduce input records=48148
11/08/22 16:33:26 INFO crawl.CrawlDb: CrawlDb update: finished at 2011-08-22 16:33:26, elapsed: 00:00:39
11/08/22 16:33:26 INFO crawl.Generator: Generator: starting at 2011-08-22 16:33:26
11/08/22 16:33:26 INFO crawl.Generator: Generator: Selecting best-scoring urls due for fetch.
11/08/22 16:33:26 INFO crawl.Generator: Generator: filtering: true
11/08/22 16:33:26 INFO crawl.Generator: Generator: normalizing: true
11/08/22 16:33:26 INFO crawl.Generator: Generator: topN: 1000
11/08/22 16:33:27 INFO mapred.FileInputFormat: Total input paths to process : 8
11/08/22 16:33:28 INFO mapred.JobClient: Running job: job_201108221601_0022
11/08/22 16:33:29 INFO mapred.JobClient: map 0% reduce 0%
11/08/22 16:33:36 INFO mapred.JobClient: map 25% reduce 0%
11/08/22 16:33:39 INFO mapred.JobClient: map 50% reduce 0%
11/08/22 16:33:42 INFO mapred.JobClient: map 75% reduce 0%
11/08/22 16:33:45 INFO mapred.JobClient: map 75% reduce 4%
11/08/22 16:33:46 INFO mapred.JobClient: map 100% reduce 4%
11/08/22 16:33:48 INFO mapred.JobClient: map 100% reduce 13%
11/08/22 16:33:49 INFO mapred.JobClient: map 100% reduce 19%
11/08/22 16:33:51 INFO mapred.JobClient: map 100% reduce 33%
11/08/22 16:33:54 INFO mapred.JobClient: map 100% reduce 53%
11/08/22 16:33:57 INFO mapred.JobClient: map 100% reduce 71%
11/08/22 16:33:58 INFO mapred.JobClient: map 100% reduce 90%
11/08/22 16:34:00 INFO mapred.JobClient: map 100% reduce 100%
11/08/22 16:34:02 INFO mapred.JobClient: Job complete: job_201108221601_0022
11. Solr indexing
After the crawl, Nutch 1.3 leaves only three directories:
crawldb
linkdb
segments
so you must use Solr to build the search index, as sketched below.
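A hedged sketch of the indexing step (the Solr URL is an assumption; the argument order follows the Nutch 1.3 tutorial):
# push crawldb, linkdb and all segments into a running Solr instance
runtime/deploy/bin/nutch solrindex http://server0:8983/solr/ \
  crawl/crawldb crawl/linkdb crawl/segments/*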
12. Errors encountered
Question:
Exception in thread "main" java.lang.IllegalArgumentException: Fetcher: No agents listed in 'http.agent.name' property.
at org.apache.nutch.fetcher.Fetcher.checkConfiguration(Fetcher.java:1166)
at org.apache.nutch.fetcher.Fetcher.fetch(Fetcher.java:1068)
at org.apache.nutch.crawl.Crawl.run(Crawl.java:135)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.nutch.crawl.Crawl.main(Crawl.java:54)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Answer:
using 1.3? If so make sure you changed nutch-site.xml (and not default)
in runtime/local/conf Changing the conf in NUTCH_HOME/conf won't be copied
to the runtime dirs unless you rebuild with ant.
BTW why don't you ask on the mailing list instead? You are more likely to get some help there
That is the original English answer. The gist: with Nutch 1.3, after modifying nutch-site.xml you have to rebuild with ant so the change reaches the runtime directories.
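A minimal sketch of that rebuild (assumes ant is on the PATH and you are at the top of the Nutch source tree):
# regenerate runtime/local and runtime/deploy from the edited conf
cd $NUTCH_HOME
ant runtime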
Question:
Caused by: java.lang.NullPointerException
at java.io.Reader.<init>(Reader.java:61)
at java.io.BufferedReader.<init>(BufferedReader.java:76)
at java.io.BufferedReader.<init>(BufferedReader.java:91)
at org.apache.nutch.urlfilter.api.RegexURLFilterBase.readRules(RegexURLFilterBase.java:180)
at org.apache.nutch.urlfilter.api.RegexURLFilterBase.setConf(RegexURLFilterBase.java:156)
at org.apache.nutch.plugin.Extension.getExtensionInstance(Extension.java:162)
at org.apache.nutch.net.URLFilters.<init>(URLFilters.java:57)
at org.apache.nutch.crawl.Injector$InjectMapper.configure(Injector.java:72)
... 18 more
Answer:
This is the error you get when you launch a distributed crawl from runtime/local/.
Note: a distributed crawl must be launched from runtime/deploy via the bin/nutch command; compare the two invocations below.
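For contrast, the two invocations look like this (seed path and depth are illustrative):
# local, single-process crawl: reads runtime/local/conf
runtime/local/bin/nutch crawl urls -dir crawl -depth 3
# distributed crawl on the cluster: must be launched from runtime/deploy
runtime/deploy/bin/nutch crawl hdfs://server0:9000/user/suse/urls -dir crawl -depth 3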
Comments
#8
schaha123
2012-02-24
(quoting 黎明lm's reply below:)
urls is the directory on HDFS holding the URL list
hdfs://202.193.58.99:8888/urls should probably be a file rather than a directory. Just run bin/hadoop fs -ls to see where your urls file actually is and what the directory looks like, then write out the full or relative path accordingly.
Full path: hdfs://202.193.58.99:8888/urls/url
The error is still exactly the same. I seem to remember reading somewhere that both a directory and a specific file are supposed to work here!
#7
黎明lm
2012-02-24
schaha123 wrote:
My command is
runtime/deploy/bin/nutch crawl hdfs://202.193.58.99:8888/urls -dir crawltest -depth 3 -topN 8
urls is the directory on HDFS holding the URL list

hdfs://202.193.58.99:8888/urls should probably be a file rather than a directory. Just run bin/hadoop fs -ls to see where your urls file actually is and what the directory looks like, then write out the full or relative path accordingly.
#6
schaha123
2012-02-24
My command is
runtime/deploy/bin/nutch crawl hdfs://202.193.58.99:8888/urls -dir crawltest -depth 3 -topN 8
urls is the directory on HDFS holding the URL list
#5
schaha123
2012-02-24
黎明lm wrote:
schaha123 wrote:
OP, is there some Nutch 1.3 configuration detail missing from the write-up? My Hadoop cluster is definitely fine, yet following this setup it still won't run. The error:
Exception in thread "main" java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1252)
at org.apache.nutch.crawl.Injector.inject(Injector.java:217)
.............
Can you post more of the error log?

Here is the full trace:
Exception in thread "main" java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1252)
at org.apache.nutch.crawl.Injector.inject(Injector.java:217)
at org.apache.nutch.crawl.Crawl.run(Crawl.java:127)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.nutch.crawl.Crawl.main(Crawl.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
#4
黎明lm
2012-02-24
schaha123 wrote:
OP, is there some Nutch 1.3 configuration detail missing from the write-up? My Hadoop cluster is definitely fine, yet following this setup it still won't run. The error:
Exception in thread "main" java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1252)
at org.apache.nutch.crawl.Injector.inject(Injector.java:217)
.............

Can you post more of the error log?
#3
schaha123
2012-02-24
OP, is there some Nutch 1.3 configuration detail missing from the write-up? My Hadoop cluster is definitely fine, yet following this setup it still won't run. The error:
Exception in thread "main" java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1252)
at org.apache.nutch.crawl.Injector.inject(Injector.java:217)
.............
#2
黎明lm
2011-12-05
The crawling is distributed too: the code inside the crawl method is written as MapReduce jobs.
#1
chenyuxxgl
2011-12-05
A question: is the crawling itself also distributed? Does every Nutch node fetch pages from the public web, or is Hadoop only used for storage and computation?