
Installing and Configuring Nutch-0.9 on Windows and Testing Web-Page Crawling

Posted: 2010-01-10   Last modified: 2010-01-10

This article only covers configuring the crawling feature of Nutch-0.9, i.e. getting a feel for what the Nutch crawler does. How to process the crawled page data after the crawl completes, and how to test the search program, will be studied in detail in a later article.

Preparation

1. Downloading Nutch-0.9

Nutch-0.9 can be downloaded from Apache: http://apache.freelamp.com/lucene/nutch/. (The latest version at the time of writing is Nutch-1.0.)

2. Downloading and installing Cygwin

The article http://hi.baidu.com/shirdrn/blog/item/b306db828d814aa40cf4d20b.html covers installing Cygwin.

3. Downloading, installing, and configuring JDK 1.6

This needs no further explanation.
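
The one point worth noting is that the bin/nutch script relies on JAVA_HOME being set. A minimal sketch of setting it in the Cygwin shell (the install path below is only a hypothetical example; adjust it to wherever JDK 1.6 actually lives):

$ export JAVA_HOME=/cygdrive/c/Java/jdk1.6.0      # hypothetical JDK install path
$ export PATH=$JAVA_HOME/bin:$PATH
$ java -version                                   # verify the JDK is picked up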

Configuration

1. Copy the unpacked Nutch-0.9 into the directory Cygwin\home\SHIYANJUN, where SHIYANJUN is a user name.

2. Create the urls directory and the url file: under Cygwin\home\SHIYANJUN\nutch-0.9\, create a directory named urls, then under Cygwin\home\SHIYANJUN\nutch-0.9\urls create a file named url (no extension). Open the url file and enter the site you want to crawl. For example, to crawl Sina, simply put the following in the url file (a shell sketch for this step follows the URL below):

 

http://www.sina.com.cn/
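
If you prefer to do this step from the Cygwin shell, a minimal sketch (run from /home/SHIYANJUN/nutch-0.9, using the same seed URL as above):

$ mkdir urls
$ echo "http://www.sina.com.cn/" > urls/url
$ cat urls/url       # verify the seed list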

3. Configure crawl-urlfilter.txt: edit the file Cygwin\home\SHIYANJUN\nutch-0.9\conf\crawl-urlfilter.txt as shown below; here I want to crawl Sina:

 

# The url filter file used by the crawl command.

# Better for intranet crawling.
# Be sure to change MY.DOMAIN.NAME to your domain name.

# Each non-comment, non-blank line contains a regular expression
# prefixed by '+' or '-'. The first matching pattern in the file
# determines whether a URL is included or ignored. If no pattern
# matches, the URL is ignored.

# skip file:, ftp:, & mailto: urls
-^(file|ftp|mailto):

# skip image and other suffixes we can't yet parse
-\.(gif|GIF|jpg|JPG|png|PNG|ico|ICO|css|sit|eps|wmf|zip|ppt|mpg|xls|gz|rpm|tgz|mov|MOV|exe|jpeg|JPEG|bmp|BMP)$

# skip URLs containing certain characters as probable queries, etc.
-[?*!@=]

# skip URLs with slash-delimited segment that repeats 3+ times, to break loops
-.*(/.+?)/.*?\1/.*?\1/

# accept hosts in MY.DOMAIN.NAME
# +^http://([a-z0-9]*\.)*MY.DOMAIN.NAME/

# accept hosts in MY.DOMAIN.NAME
+^http://([a-z0-9]*\.)*www.sina.com.cn/

# skip everything else
-.

Replace the original MY.DOMAIN.NAME/ with the site you want to crawl; the trailing "/" must be kept.

4. Configure nutch-site.xml: edit the file Cygwin\home\SHIYANJUN\nutch-0.9\conf\nutch-site.xml as shown below:

 

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>http.agent.name</name>
    <value>nutch-0.9</value>
    <description></description>
  </property>
  <property>
    <name>http.agent.description</name>
    <value>my agent</value>
    <description></description>
  </property>
  <property>
    <name>http.agent.url</name>
    <value>http://www.baidu.com</value>
    <description></description>
  </property>
  <property>
    <name>http.agent.email</name>
    <value>shirdrn@hotmail.com</value>
    <description></description>
  </property>
</configuration>

This is somewhat similar to Heritrix: agent information must be specified.

5. Create the log: under the Cygwin\home\SHIYANJUN\nutch-0.9 directory, create a logs directory and create a log file inside it, named however you like; my log file is mynutchlog.log.
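
A minimal shell sketch of this step (run from /home/SHIYANJUN/nutch-0.9, matching the file name used below):

$ mkdir logs
$ touch logs/mynutchlog.log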

Starting the Crawler

Once the items above are configured, the Nutch crawler can be started.

Start Cygwin, change to the /home/SHIYANJUN/nutch-0.9/ directory, and launch the Nutch crawler with the following command:

 

$ sh ./bin/nutch crawl urls -dir mydir -depth 2 -threads 4 -topN 50 >&./logs/mynutchlog.log

The parameters used in the command above are explained as follows:

 

crawl
    Tells nutch.jar to run the main method of the crawl command.

urls
    The directory holding the file of URLs to crawl, i.e. Cygwin\home\SHIYANJUN\nutch-0.9\urls.

-dir mydir
    Where the crawl output is saved, here under Cygwin\home\SHIYANJUN\nutch-0.9\mydir.

-depth 2
    The number of crawl rounds, also called the depth (rounds feels like the more fitting term); for a test run, 1 is recommended (see the sketch after this list).

-threads 4
    The number of concurrent fetcher threads, set to 4 here.

-topN 50
    The maximum number of pages fetched in each round.
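
For a quick first test along the lines suggested in the -depth note, the command might look like this (the output directory, topN value, and log file name here are arbitrary examples, not the ones used in this article):

$ sh ./bin/nutch crawl urls -dir testdir -depth 1 -threads 4 -topN 10 >& ./logs/testcrawl.log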

After the Nutch crawler finishes crawling according to our settings, the results of its work can be inspected under the directory Cygwin\home\SHIYANJUN\nutch-0.9\mydir.

Viewing it with $ ls -l -R mydir gives:

 

mydir:
total 0
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:04 crawldb
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:05 index
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:05 indexes
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:05 linkdb
drwxr-xr-x 4 SHIYANJUN None 0 Oct 3 19:04 segments

mydir/crawldb:
total 0
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:05 current

mydir/crawldb/current:
total 0
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:05 part-00000

mydir/crawldb/current/part-00000:
total 32
-rw-r--r-- 1 SHIYANJUN None 3026 Oct 3 19:05 data
-rw-r--r-- 1 SHIYANJUN None 216 Oct 3 19:05 index

mydir/index:
total 160
-rw-r--r-- 1 SHIYANJUN None 120 Oct 3 19:05 _0.fdt
-rw-r--r-- 1 SHIYANJUN None   8 Oct 3 19:05 _0.fdx
-rw-r--r-- 1 SHIYANJUN None 66 Oct 3 19:05 _0.fnm
-rw-r--r-- 1 SHIYANJUN None 23 Oct 3 19:05 _0.frq
-rw-r--r-- 1 SHIYANJUN None   9 Oct 3 19:05 _0.nrm
-rw-r--r-- 1 SHIYANJUN None 23 Oct 3 19:05 _0.prx
-rw-r--r-- 1 SHIYANJUN None 31 Oct 3 19:05 _0.tii
-rw-r--r-- 1 SHIYANJUN None 241 Oct 3 19:05 _0.tis
-rw-r--r-- 1 SHIYANJUN None 20 Oct 3 19:05 segments.gen
-rw-r--r-- 1 SHIYANJUN None 41 Oct 3 19:05 segments_2

mydir/indexes:
total 0
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:05 part-00000

mydir/indexes/part-00000:
total 160
-rw-r--r-- 1 SHIYANJUN None 120 Oct 3 19:05 _0.fdt
-rw-r--r-- 1 SHIYANJUN None   8 Oct 3 19:05 _0.fdx
-rw-r--r-- 1 SHIYANJUN None 66 Oct 3 19:05 _0.fnm
-rw-r--r-- 1 SHIYANJUN None 23 Oct 3 19:05 _0.frq
-rw-r--r-- 1 SHIYANJUN None   9 Oct 3 19:05 _0.nrm
-rw-r--r-- 1 SHIYANJUN None 23 Oct 3 19:05 _0.prx
-rw-r--r-- 1 SHIYANJUN None 31 Oct 3 19:05 _0.tii
-rw-r--r-- 1 SHIYANJUN None 241 Oct 3 19:05 _0.tis
-rw-r--r-- 1 SHIYANJUN None   0 Oct 3 19:05 index.done
-rw-r--r-- 1 SHIYANJUN None 20 Oct 3 19:05 segments.gen
-rw-r--r-- 1 SHIYANJUN None 41 Oct 3 19:05 segments_2

mydir/linkdb:
total 0
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:05 current

mydir/linkdb/current:
total 0
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:05 part-00000

mydir/linkdb/current/part-00000:
total 32
-rw-r--r-- 1 SHIYANJUN None 4464 Oct 3 19:05 data
-rw-r--r-- 1 SHIYANJUN None 219 Oct 3 19:05 index

mydir/segments:
total 0
drwxr-xr-x 8 SHIYANJUN None 0 Oct 3 19:04 20081003190403
drwxr-xr-x 8 SHIYANJUN None 0 Oct 3 19:04 20081003190421

mydir/segments/20081003190403:
total 0
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:04 content
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:04 crawl_fetch
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:04 crawl_generate
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:04 crawl_parse
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:04 parse_data
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:04 parse_text

mydir/segments/20081003190403/content:
total 0
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:04 part-00000

mydir/segments/20081003190403/content/part-00000:
total 48
-rw-r--r-- 1 SHIYANJUN None 17091 Oct 3 19:04 data
-rw-r--r-- 1 SHIYANJUN None   216 Oct 3 19:04 index

mydir/segments/20081003190403/crawl_fetch:
total 0
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:04 part-00000

mydir/segments/20081003190403/crawl_fetch/part-00000:
total 32
-rw-r--r-- 1 SHIYANJUN None 239 Oct 3 19:04 data
-rw-r--r-- 1 SHIYANJUN None 216 Oct 3 19:04 index

mydir/segments/20081003190403/crawl_generate:
total 16
-rw-r--r-- 1 SHIYANJUN None 168 Oct 3 19:04 part-00000

mydir/segments/20081003190403/crawl_parse:
total 16
-rw-r--r-- 1 SHIYANJUN None 2071 Oct 3 19:04 part-00000

mydir/segments/20081003190403/parse_data:
total 0
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:04 part-00000

mydir/segments/20081003190403/parse_data/part-00000:
total 32
-rw-r--r-- 1 SHIYANJUN None 1302 Oct 3 19:04 data
-rw-r--r-- 1 SHIYANJUN None 216 Oct 3 19:04 index

mydir/segments/20081003190403/parse_text:
total 0
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:04 part-00000

mydir/segments/20081003190403/parse_text/part-00000:
total 32
-rw-r--r-- 1 SHIYANJUN None 201 Oct 3 19:04 data
-rw-r--r-- 1 SHIYANJUN None 216 Oct 3 19:04 index

mydir/segments/20081003190421:
total 0
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:05 content
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:05 crawl_fetch
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:04 crawl_generate
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:05 crawl_parse
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:05 parse_data
drwxr-xr-x 3 SHIYANJUN None 0 Oct 3 19:05 parse_text

mydir/segments/20081003190421/content:
total 0
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:05 part-00000

mydir/segments/20081003190421/content/part-00000:
total 32
-rw-r--r-- 1 SHIYANJUN None 3526 Oct 3 19:05 data
-rw-r--r-- 1 SHIYANJUN None 229 Oct 3 19:05 index

mydir/segments/20081003190421/crawl_fetch:
total 0
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:05 part-00000

mydir/segments/20081003190421/crawl_fetch/part-00000:
total 32
-rw-r--r-- 1 SHIYANJUN None 3212 Oct 3 19:05 data
-rw-r--r-- 1 SHIYANJUN None 229 Oct 3 19:05 index

mydir/segments/20081003190421/crawl_generate:
total 16
-rw-r--r-- 1 SHIYANJUN None 1938 Oct 3 19:04 part-00000

mydir/segments/20081003190421/crawl_parse:
total 16
-rw-r--r-- 1 SHIYANJUN None 129 Oct 3 19:05 part-00000

mydir/segments/20081003190421/parse_data:
total 0
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:05 part-00000

mydir/segments/20081003190421/parse_data/part-00000:
total 32
-rw-r--r-- 1 SHIYANJUN None 128 Oct 3 19:05 data
-rw-r--r-- 1 SHIYANJUN None 129 Oct 3 19:05 index

mydir/segments/20081003190421/parse_text:
total 0
drwxr-xr-x 2 SHIYANJUN None 0 Oct 3 19:05 part-00000

mydir/segments/20081003190421/parse_text/part-00000:
total 32
-rw-r--r-- 1 SHIYANJUN None 128 Oct 3 19:05 data
-rw-r--r-- 1 SHIYANJUN None 129 Oct 3 19:05 index
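
Besides the raw directory listing, Nutch's own tools can be used to inspect what was crawled; for instance, dumping crawl db statistics (a sketch, run from the nutch-0.9 directory):

$ sh ./bin/nutch readdb mydir/crawldb -stats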

In addition, the Nutch crawler's log file can be examined to understand what Nutch did. The log shows how the crawler was configured, how pages were fetched during the crawl, what was fetched, and what processing was performed; it is all visible at a glance:

 

crawl started in: mydir
rootUrlDir = urls
threads = 4
depth = 2
topN = 50
Injector: starting
Injector: crawlDb: mydir/crawldb
Injector: urlDir: urls
Injector: Converting injected urls to crawl db entries.
Injector: Merging injected urls into crawl db.
Injector: done
Generator: Selecting best-scoring urls due for fetch.
Generator: starting
Generator: segment: mydir/segments/20081003190403
Generator: filtering: false
Generator: topN: 50
Generator: jobtracker is 'local', generating exactly one partition.
Generator: Partitioning selected urls by host, for politeness.
Generator: done.
Fetcher: starting
Fetcher: segment: mydir/segments/20081003190403
Fetcher: threads: 4
fetching http://www.sina.com.cn/
Fetcher: done
CrawlDb update: starting
CrawlDb update: db: mydir/crawldb
CrawlDb update: segments: [mydir/segments/20081003190403]
CrawlDb update: additions allowed: true
CrawlDb update: URL normalizing: true
CrawlDb update: URL filtering: true
CrawlDb update: Merging segment data into db.
CrawlDb update: done
Generator: Selecting best-scoring urls due for fetch.
Generator: starting
Generator: segment: mydir/segments/20081003190421
Generator: filtering: false
Generator: topN: 50
Generator: jobtracker is 'local', generating exactly one partition.
Generator: Partitioning selected urls by host, for politeness.
Generator: done.
Fetcher: starting
Fetcher: segment: mydir/segments/20081003190421
Fetcher: threads: 4
fetching http://www.sina.com.cn/}};7(6.$(b)){7(z){6.$(b).C(
fetching http://www.sina.com.cn/16.1G.
fetching http://www.sina.com.cn/,n,9)};7(z){6.$(b).C(
fetching http://www.sina.com.cn/sina.com.cn
fetching http://www.sina.com.cn/].F(
fetching http://www.sina.com.cn/_400.html
fetching http://www.sina.com.cn/2u.27
fetching http://www.sina.com.cn/2v.27
fetching http://www.sina.com.cn/2s/2t
fetching http://www.sina.com.cn/].1e()
fetching http://www.sina.com.cn/,n)}w{6.$(b).D(
fetching http://www.sina.com.cn/,o)}w{6.$(b).D(
fetching http://www.sina.com.cn/,n)}w{5.t.D(
fetching http://www.sina.com.cn/6.u[
fetching http://www.sina.com.cn/,o)}w{5.t.D(
fetching http://www.sina.com.cn/);7(z){5.t.C(
fetching http://www.sina.com.cn/,n,9)};7(z){5.t.C(
fetching http://www.sina.com.cn/1.0
fetching http://www.sina.com.cn/document.all.
fetching http://www.sina.com.cn/1B.21();
Fetcher: done
CrawlDb update: starting
CrawlDb update: db: mydir/crawldb
CrawlDb update: segments: [mydir/segments/20081003190421]
CrawlDb update: additions allowed: true
CrawlDb update: URL normalizing: true
CrawlDb update: URL filtering: true
CrawlDb update: Merging segment data into db.
CrawlDb update: done
LinkDb: starting
LinkDb: linkdb: mydir/linkdb
LinkDb: URL normalize: true
LinkDb: URL filter: true
LinkDb: adding segment: mydir/segments/20081003190403
LinkDb: adding segment: mydir/segments/20081003190421
LinkDb: done
Indexer: starting
Indexer: linkdb: mydir/linkdb
Indexer: adding segment: mydir/segments/20081003190403
Indexer: adding segment: mydir/segments/20081003190421
Indexing [http://www.sina.com.cn/] with analyzer org.apache.nutch.analysis.NutchDocumentAnalyzer@1b82d69 (null)
Optimizing index.
merging segments _ram_0 (1 docs) into _0 (1 docs)
Indexer: done
Dedup: starting
Dedup: adding indexes in: mydir/indexes
Dedup: done
merging indexes to: mydir/index
Adding mydir/indexes/part-00000
done merging
crawl finished: mydir

In fact, when Nutch runs the crawl it does not merely fetch pages; it also processes the fetched pages along the way. This can be seen from the log file, or from the files generated under mydir, which bear the traces of Lucene index files. In other words, the fetched pages were processed and indexed, index files were generated, and searches can now be run against them.
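
As a quick sanity check of that index, Nutch 0.9 also ships a command-line search entry point, NutchBean. A minimal sketch, assuming the searcher.dir property has been pointed at mydir (for example in nutch-site.xml; by default it looks for a directory named crawl), with an arbitrary query word:

$ sh ./bin/nutch org.apache.nutch.searcher.NutchBean sina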

Posted: 2010-01-11   Last modified: 2010-01-11
Plagiarized from http://hi.baidu.com/shirdrn/blog/item/f92312ef58a260e9ce1b3ef9.html
ibc789 wrote: (quoting the original post above in full)
