
Installing and Configuring Nutch-0.9 on Windows and Testing a Web Crawl

 

This article covers only configuring the page-crawling function of Nutch-0.9, i.e. getting a feel for what the Nutch crawler can do. How to process the crawled page data once the crawl has finished, and how to test the search program, will be studied in detail in later articles.

Preparation

1. Download Nutch-0.9

Nutch-0.9 can be downloaded from Apache at http://apache.freelamp.com/lucene/nutch/. (The latest version at the time of writing is Nutch-1.0.)
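
For example, from the Cygwin shell (assuming wget is available in your Cygwin installation; the archive name here is an assumption to be checked against the mirror's directory listing):

$ wget http://apache.freelamp.com/lucene/nutch/nutch-0.9.tar.gz
$ tar -xzf nutch-0.9.tar.gz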

2. Download and install Cygwin

The article at http://hi.baidu.com/shirdrn/blog/item/b306db828d814aa40cf4d20b.html covers installing Cygwin.

3. Download, install, and configure JDK 1.6

This needs no explanation here.

Configuration

1. Copy the extracted Nutch-0.9 into the directory Cygwin\home\SHIYANJUN, where SHIYANJUN is a user name;

2. Create the urls directory and the url file: under Cygwin\home\SHIYANJUN\nutch-0.9\ create a directory named urls, then under Cygwin\home\SHIYANJUN\nutch-0.9\urls create a file named url (with no extension). Open the url file and enter the site you want to crawl. For example, to crawl Sina's pages, simply put the following in the url file (a shell sketch of this step appears after the URL below):

 

http://www.sina.com.cn/
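
As a convenience, this step can also be done from the Cygwin shell; a minimal sketch, assuming Nutch has already been copied to the location from step 1:

$ cd /home/SHIYANJUN/nutch-0.9
$ mkdir urls
$ echo "http://www.sina.com.cn/" > urls/url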

3. Configure crawl-urlfilter.txt: edit the file Cygwin\home\SHIYANJUN\nutch-0.9\conf\crawl-urlfilter.txt as shown below; here I want to crawl Sina:

 

# The url filter file used by the crawl command.

# Better for intranet crawling.
# Be sure to change MY.DOMAIN.NAME to your domain name.

# Each non-comment, non-blank line contains a regular expression
# prefixed by '+' or '-'. The first matching pattern in the file
# determines whether a URL is included or ignored. If no pattern
# matches, the URL is ignored.

# skip file:, ftp:, & mailto: urls
-^(file|ftp|mailto):

# skip image and other suffixes we can't yet parse
-\.(gif|GIF|jpg|JPG|png|PNG|ico|ICO|css|sit|eps|wmf|zip|ppt|mpg|xls|gz|rpm|tgz|mov|MOV|exe|jpeg|JPEG|bmp|BMP)$

# skip URLs containing certain characters as probable queries, etc.
-[?*!@=]

# skip URLs with slash-delimited segment that repeats 3+ times, to break loops
-.*(/.+?)/.*?\1/.*?\1/

# accept hosts in MY.DOMAIN.NAME
# +^http://([a-z0-9]*\.)*MY.DOMAIN.NAME/

# accept hosts in MY.DOMAIN.NAME
+^http://([a-z0-9]*\.)*www.sina.com.cn/

# skip everything else
-.

Replace the original MY.DOMAIN.NAME/ with the site you want to crawl; the trailing "/" must be kept. Note that each non-comment line is a regular expression, so strictly speaking the dots in the domain should be escaped, as in +^http://([a-z0-9]*\.)*sina\.com\.cn/; the unescaped form above still matches, since "." also matches a literal dot.
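
To sanity-check an accept rule outside Nutch, the regular expression can be tried with grep (a rough approximation only, since Java regex semantics differ slightly from grep -E):

$ echo "http://news.sina.com.cn/" | grep -E '^http://([a-z0-9]*\.)*sina\.com\.cn/'
http://news.sina.com.cn/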

4. Configure nutch-site.xml: edit the file Cygwin\home\SHIYANJUN\nutch-0.9\conf\nutch-site.xml as shown below:

 

<?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
       <property>

<name>http.agent.name</name>

<value>nutch-0.9</value>

<description></description>

</property>

<property>

<name>http.agent.description</name>

<value>my agent</value>

<description></description>

</property>

<property>

<name>http.agent.url</name>

<value>http://www.baidu.com</value>

<description></description>

</property>

<property>

<name>http.agent.email</name>

<value>shirdrn@hotmail.com</value>

<description></description>

</property>

    </configuration>

This is somewhat similar to Heritrix: the crawler's agent information must be specified. In particular, http.agent.name identifies the crawler to the web servers it visits, so it should not be left empty.

5. Create the log: under the Cygwin\home\SHIYANJUN\nutch-0.9 directory create a logs directory, and inside it create a log file named however you like; mine is mynutchlog.log.
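
A minimal shell sketch of this step:

$ cd /home/SHIYANJUN/nutch-0.9
$ mkdir logs
$ touch logs/mynutchlog.log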

Starting the crawler

With all of the above configured, the Nutch crawler can be started.

Start Cygwin, switch to the /home/SHIYANJUN/nutch-0.9/ directory, and launch the Nutch crawler with the following command:

 

$ sh ./bin/nutch crawl urls -dir mydir -depth 2 -threads 4 -topN 50 >&./logs/mynutchlog.log

The parameters used in the command above are explained as follows:

 

crawl

Tells the nutch launcher script to run the crawl tool, i.e. the main method of the Crawl class in nutch.jar.

urls

The directory holding the url file(s) to be crawled, i.e. Cygwin\home\SHIYANJUN\nutch-0.9\urls.

-dir mydir

Where the crawl output is saved; here it goes under Cygwin\home\SHIYANJUN\nutch-0.9\mydir.

-depth 2

The number of crawl rounds, also called the depth (though "rounds" feels more accurate); setting it to 1 is recommended for testing (see the quick-test sketch after this list).

-threads 4

The number of concurrent fetcher threads; here it is set to 4.

-topN 50

The maximum number of top-scoring pages fetched in each round.
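
For a quick smoke test, a shallower run keeps things short; the directory and log file names below are arbitrary examples rather than required names:

$ sh ./bin/nutch crawl urls -dir testdir -depth 1 -threads 4 -topN 10 >& ./logs/test.log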

Once the Nutch crawler has finished crawling with these settings, the results of its work can be examined under the directory Cygwin\home\SHIYANJUN\nutch-0.9\mydir.

Listing it with $ ls -l -R mydir shows the following:

 

mydir:
total 0
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:04 crawldb
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:05 index
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:05 indexes
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:05 linkdb
drwxr-xr-x 4 SHIYANJUN None 0 Oct 3 19:04 segments

mydir/crawldb:
total 0
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:05 current

mydir/crawldb/current:
total 0
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:05 part-00000

mydir/crawldb/current/part-00000:
total 32
-rw-r--r-- 1 SHIYANJUN None 3026 Oct 3 19:05 data
-rw-r--r-- 1 SHIYANJUN None 216 Oct 3 19:05 index

mydir/index:
total 160
-rw-r--r-- 1 SHIYANJUN None 120 Oct 3 19:05 _0.fdt
-rw-r--r-- 1 SHIYANJUN None   8 Oct 3 19:05 _0.fdx
-rw-r--r-- 1 SHIYANJUN None 66 Oct 3 19:05 _0.fnm
-rw-r--r-- 1 SHIYANJUN None 23 Oct 3 19:05 _0.frq
-rw-r--r-- 1 SHIYANJUN None   9 Oct 3 19:05 _0.nrm
-rw-r--r-- 1 SHIYANJUN None 23 Oct 3 19:05 _0.prx
-rw-r--r-- 1 SHIYANJUN None 31 Oct 3 19:05 _0.tii
-rw-r--r-- 1 SHIYANJUN None 241 Oct 3 19:05 _0.tis
-rw-r--r-- 1 SHIYANJUN None 20 Oct 3 19:05 segments.gen
-rw-r--r-- 1 SHIYANJUN None 41 Oct 3 19:05 segments_2

mydir/indexes:
total 0
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:05 part-00000

mydir/indexes/part-00000:
total 160
-rw-r--r-- 1 SHIYANJUN None 120 Oct 3 19:05 _0.fdt
-rw-r--r-- 1 SHIYANJUN None   8 Oct 3 19:05 _0.fdx
-rw-r--r-- 1 SHIYANJUN None 66 Oct 3 19:05 _0.fnm
-rw-r--r-- 1 SHIYANJUN None 23 Oct 3 19:05 _0.frq
-rw-r--r-- 1 SHIYANJUN None   9 Oct 3 19:05 _0.nrm
-rw-r--r-- 1 SHIYANJUN None 23 Oct 3 19:05 _0.prx
-rw-r--r-- 1 SHIYANJUN None 31 Oct 3 19:05 _0.tii
-rw-r--r-- 1 SHIYANJUN None 241 Oct 3 19:05 _0.tis
-rw-r--r-- 1 SHIYANJUN None   0 Oct 3 19:05 index.done
-rw-r--r-- 1 SHIYANJUN None 20 Oct 3 19:05 segments.gen
-rw-r--r-- 1 SHIYANJUN None 41 Oct 3 19:05 segments_2

mydir/linkdb:
total 0
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:05 current

mydir/linkdb/current:
total 0
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:05 part-00000

mydir/linkdb/current/part-00000:
total 32
-rw-r--r-- 1 SHIYANJUN None 4464 Oct 3 19:05 data
-rw-r--r-- 1 SHIYANJUN None 219 Oct 3 19:05 index

mydir/segments:
total 0
drwxr-xr-x 8 SHIYANJUN None 0 Oct 3 19:04 20081003190403
drwxr-xr-x 8 SHIYANJUN None 0 Oct 3 19:04 20081003190421

mydir/segments/20081003190403:
total 0
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:04 content
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:04 crawl_fetch
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:04 crawl_generate
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:04 crawl_parse
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:04 parse_data
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:04 parse_text

mydir/segments/20081003190403/content:
total 0
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:04 part-00000

mydir/segments/20081003190403/content/part-00000:
total 48
-rw-r--r-- 1 SHIYANJUN None 17091 Oct 3 19:04 data
-rw-r--r-- 1 SHIYANJUN None   216 Oct 3 19:04 index

mydir/segments/20081003190403/crawl_fetch:
total 0
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:04 part-00000

mydir/segments/20081003190403/crawl_fetch/part-00000:
total 32
-rw-r--r-- 1 SHIYANJUN None 239 Oct 3 19:04 data
-rw-r--r-- 1 SHIYANJUN None 216 Oct 3 19:04 index

mydir/segments/20081003190403/crawl_generate:
total 16
-rw-r--r-- 1 SHIYANJUN None 168 Oct 3 19:04 part-00000

mydir/segments/20081003190403/crawl_parse:
total 16
-rw-r--r-- 1 SHIYANJUN None 2071 Oct 3 19:04 part-00000

mydir/segments/20081003190403/parse_data:
total 0
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:04 part-00000

mydir/segments/20081003190403/parse_data/part-00000:
total 32
-rw-r--r-- 1 SHIYANJUN None 1302 Oct 3 19:04 data
-rw-r--r-- 1 SHIYANJUN None 216 Oct 3 19:04 index

mydir/segments/20081003190403/parse_text:
total 0
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:04 part-00000

mydir/segments/20081003190403/parse_text/part-00000:
total 32
-rw-r--r-- 1 SHIYANJUN None 201 Oct 3 19:04 data
-rw-r--r-- 1 SHIYANJUN None 216 Oct 3 19:04 index

mydir/segments/20081003190421:
total 0
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:05 content
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:05 crawl_fetch
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:04 crawl_generate
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:05 crawl_parse
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:05 parse_data
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:05 parse_text

mydir/segments/20081003190421/content:
total 0
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:05 part-00000

mydir/segments/20081003190421/content/part-00000:
total 32
-rw-r--r-- 1 SHIYANJUN None 3526 Oct 3 19:05 data
-rw-r--r-- 1 SHIYANJUN None 229 Oct 3 19:05 index

mydir/segments/20081003190421/crawl_fetch:
total 0
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:05 part-00000

mydir/segments/20081003190421/crawl_fetch/part-00000:
total 32
-rw-r--r-- 1 SHIYANJUN None 3212 Oct 3 19:05 data
-rw-r--r-- 1 SHIYANJUN None 229 Oct 3 19:05 index

mydir/segments/20081003190421/crawl_generate:
total 16
-rw-r--r-- 1 SHIYANJUN None 1938 Oct 3 19:04 part-00000

mydir/segments/20081003190421/crawl_parse:
total 16
-rw-r--r-- 1 SHIYANJUN None 129 Oct 3 19:05 part-00000

mydir/segments/20081003190421/parse_data:
total 0
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:05 part-00000

mydir/segments/20081003190421/parse_data/part-00000:
total 32
-rw-r--r-- 1 SHIYANJUN None 128 Oct 3 19:05 data
-rw-r--r-- 1 SHIYANJUN None 129 Oct 3 19:05 index

mydir/segments/20081003190421/parse_text:
total 0
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:05 part-00000

mydir/segments/20081003190421/parse_text/part-00000:
total 32
-rw-r--r-- 1 SHIYANJUN None 128 Oct 3 19:05 data
-rw-r--r-- 1 SHIYANJUN None 129 Oct 3 19:05 index

You can also examine the Nutch crawler's log file to understand its behavior. The log makes it easy to see exactly how the crawler was configured, how it fetched pages as the crawl ran, what content it fetched, and what processing it performed:

 

crawl started in: mydir
rootUrlDir = urls
threads = 4
depth = 2
topN = 50
Injector: starting
Injector: crawlDb: mydir/crawldb
Injector: urlDir: urls
Injector: Converting injected urls to crawl db entries.
Injector: Merging injected urls into crawl db.
Injector: done
Generator: Selecting best-scoring urls due for fetch.
Generator: starting
Generator: segment: mydir/segments/20081003190403
Generator: filtering: false
Generator: topN: 50
Generator: jobtracker is 'local', generating exactly one partition.
Generator: Partitioning selected urls by host, for politeness.
Generator: done.
Fetcher: starting
Fetcher: segment: mydir/segments/20081003190403
Fetcher: threads: 4
fetching http://www.sina.com.cn/
Fetcher: done
CrawlDb update: starting
CrawlDb update: db: mydir/crawldb
CrawlDb update: segments: [mydir/segments/20081003190403]
CrawlDb update: additions allowed: true
CrawlDb update: URL normalizing: true
CrawlDb update: URL filtering: true
CrawlDb update: Merging segment data into db.
CrawlDb update: done
Generator: Selecting best-scoring urls due for fetch.
Generator: starting
Generator: segment: mydir/segments/20081003190421
Generator: filtering: false
Generator: topN: 50
Generator: jobtracker is 'local', generating exactly one partition.
Generator: Partitioning selected urls by host, for politeness.
Generator: done.
Fetcher: starting
Fetcher: segment: mydir/segments/20081003190421
Fetcher: threads: 4
fetching http://www.sina.com.cn/}};7(6.$(b)){7(z){6.$(b).C(
fetching http://www.sina.com.cn/16.1G.
fetching http://www.sina.com.cn/,n,9)};7(z){6.$(b).C(
fetching http://www.sina.com.cn/sina.com.cn
fetching http://www.sina.com.cn/].F(
fetching http://www.sina.com.cn/_400.html
fetching http://www.sina.com.cn/2u.27
fetching http://www.sina.com.cn/2v.27
fetching http://www.sina.com.cn/2s/2t
fetching http://www.sina.com.cn/].1e()
fetching http://www.sina.com.cn/,n)}w{6.$(b).D(
fetching http://www.sina.com.cn/,o)}w{6.$(b).D(
fetching http://www.sina.com.cn/,n)}w{5.t.D(
fetching http://www.sina.com.cn/6.u[
fetching http://www.sina.com.cn/,o)}w{5.t.D(
fetching http://www.sina.com.cn/);7(z){5.t.C(
fetching http://www.sina.com.cn/,n,9)};7(z){5.t.C(
fetching http://www.sina.com.cn/1.0
fetching http://www.sina.com.cn/document.all.
fetching http://www.sina.com.cn/1B.21();
Fetcher: done
CrawlDb update: starting
CrawlDb update: db: mydir/crawldb
CrawlDb update: segments: [mydir/segments/20081003190421]
CrawlDb update: additions allowed: true
CrawlDb update: URL normalizing: true
CrawlDb update: URL filtering: true
CrawlDb update: Merging segment data into db.
CrawlDb update: done
LinkDb: starting
LinkDb: linkdb: mydir/linkdb
LinkDb: URL normalize: true
LinkDb: URL filter: true
LinkDb: adding segment: mydir/segments/20081003190403
LinkDb: adding segment: mydir/segments/20081003190421
LinkDb: done
Indexer: starting
Indexer: linkdb: mydir/linkdb
Indexer: adding segment: mydir/segments/20081003190403
Indexer: adding segment: mydir/segments/20081003190421
Indexing [http://www.sina.com.cn/] with analyzer org.apache.nutch.analysis.NutchDocumentAnalyzer@1b82d69 (null)
Optimizing index.
merging segments _ram_0 (1 docs) into _0 (1 docs)
Indexer: done
Dedup: starting
Dedup: adding indexes in: mydir/indexes
Dedup: done
merging indexes to: mydir/index
Adding mydir/indexes/part-00000
done merging
crawl finished: mydir

In fact, when Nutch runs a crawl it does not merely fetch pages; along the way it also processes the fetched page files. This is visible both in the log and in the files generated under mydir, which carry the telltale traces of Lucene index files. In other words, the fetched pages have been processed and indexed, index files have been generated, and searches can now be executed against them.
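
As a quick check that the index is searchable, Nutch 0.9's NutchBean can be queried from the command line. It locates the index through the searcher.dir property, so first add that property to conf/nutch-site.xml pointing at the mydir crawl directory; the query term below is just an example:

$ sh ./bin/nutch org.apache.nutch.searcher.NutchBean sina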
