sealbird

Notes on getting started with Hadoop

 
Windows PATH environment variable for this setup:
%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;%SYSTEMROOT%\System32\WindowsPowerShell\v1.0\;D:\Program Files\Microsoft SQL Server\90\Tools\binn\;D:\Java\jdk1.6.0\bin;K:\cygwinnew\bin;D:\Program Files\Adobe\Flex Builder 3\sdks\3.2.0\bin;D:\MinGW\bin;D:\Program Files\Microsoft SQL Server\100\Tools\Binn\;D:\Program Files\Microsoft SQL Server\100\DTS\Binn\;D:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\;D:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies\;D:\Program Files\TortoiseSVN\bin;E:\xpdf\chinese-simplified;E:\xpdf\chinese-simplified\CMap


authorized_keys (SSH public keys go here for passwordless login)
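Hadoop's start scripts need SSH logins that don't prompt for a password, which is what the authorized_keys file above enables. A minimal sketch of that setup, done in a scratch directory so nothing touches a real ~/.ssh (on the actual machine the files live under ~/.ssh):

```shell
# Generate a passphrase-less keypair and append the public key to
# authorized_keys — the same steps apply to ~/.ssh on the real host.
SSH_DIR=/tmp/ssh-demo
mkdir -p "$SSH_DIR"
rm -f "$SSH_DIR/id_rsa" "$SSH_DIR/id_rsa.pub"
ssh-keygen -t rsa -P '' -f "$SSH_DIR/id_rsa" -q
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"   # sshd ignores group/world-writable files
ls "$SSH_DIR"
```

Afterward `ssh localhost` should log in without a prompt; if it still asks, check the permissions on ~/.ssh and authorized_keys.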



/cygdrive/D/Java/jdk1.6.0
/cygdrive/D/tmp/testdata/input
/cygdrive/D/tmp/testoutput
D:\tmp\testoutput
hadoop namenode -format

D:\tmp\testdata\input

Upload the files under input to the input directory in DFS:
$ ./bin/hadoop fs -put D:/tmp/testdata/input input

bin/hadoop jar hadoop-0.20.1-examples.jar wordcount  input/input  output-dir    (in the 0.16.4 release the examples jar is named hadoop-0.16.4-examples.jar)
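What the wordcount example computes can be sanity-checked with a plain-shell equivalent over local files, no Hadoop needed (the file names and contents below are made up for illustration):

```shell
# Plain-shell wordcount: split on whitespace, then count occurrences.
mkdir -p /tmp/wc-demo
printf 'hello world\nhello hadoop\n' > /tmp/wc-demo/a.txt
printf 'world\n' > /tmp/wc-demo/b.txt
cat /tmp/wc-demo/*.txt | tr -s ' ' '\n' | sort | uniq -c \
  | awk '{print $2 "\t" $1}'
```

The real job writes the same word/count pairs, tab-separated, into `part-r-00000` files under the output directory.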



$ ./bin/hadoop jar hadoop-0.20.1-examples.jar wordcount  input/input  output-dir
11/12/28 17:39:40 INFO input.FileInputFormat: Total input paths to process : 3
11/12/28 17:39:41 INFO mapred.JobClient: Running job: job_201112281720_0003
11/12/28 17:39:42 INFO mapred.JobClient:  map 0% reduce 0%
11/12/28 17:39:51 INFO mapred.JobClient:  map 66% reduce 0%
11/12/28 17:39:54 INFO mapred.JobClient:  map 100% reduce 0%
11/12/28 17:40:03 INFO mapred.JobClient:  map 100% reduce 100%
11/12/28 17:40:05 INFO mapred.JobClient: Job complete: job_201112281720_0003
11/12/28 17:40:05 INFO mapred.JobClient: Counters: 17
11/12/28 17:40:05 INFO mapred.JobClient:   Job Counters
11/12/28 17:40:05 INFO mapred.JobClient:     Launched reduce tasks=1
11/12/28 17:40:05 INFO mapred.JobClient:     Launched map tasks=3
11/12/28 17:40:05 INFO mapred.JobClient:     Data-local map tasks=3
11/12/28 17:40:05 INFO mapred.JobClient:   FileSystemCounters
11/12/28 17:40:05 INFO mapred.JobClient:     FILE_BYTES_READ=290
11/12/28 17:40:05 INFO mapred.JobClient:     HDFS_BYTES_READ=161
11/12/28 17:40:05 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=607
11/12/28 17:40:05 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=139
11/12/28 17:40:05 INFO mapred.JobClient:   Map-Reduce Framework
11/12/28 17:40:05 INFO mapred.JobClient:     Reduce input groups=0
11/12/28 17:40:05 INFO mapred.JobClient:     Combine output records=16
11/12/28 17:40:05 INFO mapred.JobClient:     Map input records=3
11/12/28 17:40:05 INFO mapred.JobClient:     Reduce shuffle bytes=221
11/12/28 17:40:05 INFO mapred.JobClient:     Reduce output records=0
11/12/28 17:40:05 INFO mapred.JobClient:     Spilled Records=32
11/12/28 17:40:05 INFO mapred.JobClient:     Map output bytes=284
11/12/28 17:40:05 INFO mapred.JobClient:     Combine input records=30
11/12/28 17:40:05 INFO mapred.JobClient:     Map output records=30
11/12/28 17:40:05 INFO mapred.JobClient:     Reduce input records=16



bin/hadoop jar hadoop-0.20.1-index.jar org.apache.hadoop.contrib.index.main.UpdateIndex -inputPaths input/input -outputPath index_msg_out_010 -indexPath index_030 -numShards 2 -numMapTasks 2 -conf conf/index-config.xml


bin/hadoop jar hadoop-0.20.1-index.jar org.apache.hadoop.contrib.index.main.UpdateIndex -inputPaths D:/tmp/testdata/input -outputPath index_msg_out_010 -indexPath index_030 -numShards 1 -numMapTasks 1 -conf conf/index-config.xml

bin/hadoop jar hadoop-0.20.1-index.jar  -inputPaths D:/tmp/testdata/input -outputPath index_msg_out_010 -indexPath index_030 -numShards 1 -numMapTasks 1 -conf conf/index-config.xml


$ bin/hadoop jar hadoop-0.20.1-index.jar  -inputPaths input/input -outputPath index_msg_out_010 -indexPath index_030 -numShards 1 -numMapTasks 1 -conf conf/index-config.xml
(input/input and index_msg_out_010 are directories in the DFS filesystem)
11/12/29 10:05:20 INFO main.UpdateIndex: inputPaths = input/input
11/12/29 10:05:20 INFO main.UpdateIndex: outputPath = index_msg_out_010
11/12/29 10:05:20 INFO main.UpdateIndex: shards     = null
11/12/29 10:05:20 INFO main.UpdateIndex: indexPath  = index_030
11/12/29 10:05:20 INFO main.UpdateIndex: numShards  = 1
11/12/29 10:05:20 INFO main.UpdateIndex: numMapTasks= 1
11/12/29 10:05:20 INFO main.UpdateIndex: confPath   = conf/index-config.xml
11/12/29 10:05:21 INFO main.UpdateIndex: sea.index.updater = org.apache.hadoop.contrib.index.mapred.IndexUpdater
11/12/29 10:05:21 INFO mapred.IndexUpdater: mapred.input.dir = hdfs://localhost:18888/user/kelo-dichan/administrator/input/input
11/12/29 10:05:21 INFO mapred.IndexUpdater: mapred.output.dir = hdfs://localhost:18888/user/kelo-dichan/administrator/index_msg_out_010
11/12/29 10:05:21 INFO mapred.IndexUpdater: mapred.map.tasks = 1
11/12/29 10:05:21 INFO mapred.IndexUpdater: mapred.reduce.tasks = 1
11/12/29 10:05:21 INFO mapred.IndexUpdater: 1 shards = -1@index_030/00000@-1
11/12/29 10:05:21 INFO mapred.IndexUpdater: mapred.input.format.class = org.apache.hadoop.contrib.index.example.LineDocInputFormat
11/12/29 10:05:21 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
11/12/29 10:05:21 INFO mapred.FileInputFormat: Total input paths to process : 3
11/12/29 10:05:23 INFO mapred.JobClient: Running job: job_201112281720_0005
11/12/29 10:05:24 INFO mapred.JobClient:  map 0% reduce 0%


Successful run:
$ bin/hadoop jar hadoop-0.20.1-index.jar  -inputPaths input/input -outputPath index_msg_out_012 -indexPath index_032 -numShards 1 -numMapTasks 1 -conf conf/index-config.xml

11/12/29 10:17:12 INFO main.UpdateIndex: inputPaths = input/input
11/12/29 10:17:12 INFO main.UpdateIndex: outputPath = index_msg_out_012
11/12/29 10:17:12 INFO main.UpdateIndex: shards     = null
11/12/29 10:17:12 INFO main.UpdateIndex: indexPath  = index_032
11/12/29 10:17:12 INFO main.UpdateIndex: numShards  = 1
11/12/29 10:17:12 INFO main.UpdateIndex: numMapTasks= 1
11/12/29 10:17:12 INFO main.UpdateIndex: confPath   = conf/index-config.xml
11/12/29 10:17:13 INFO main.UpdateIndex: sea.index.updater = org.apache.hadoop.contrib.index.mapred.IndexUpdater
11/12/29 10:17:13 INFO mapred.IndexUpdater: mapred.input.dir = hdfs://localhost:18888/user/kelo-dichan/administrator/input/input
11/12/29 10:17:13 INFO mapred.IndexUpdater: mapred.output.dir = hdfs://localhost:18888/user/kelo-dichan/administrator/index_msg_out_012
11/12/29 10:17:13 INFO mapred.IndexUpdater: mapred.map.tasks = 1
11/12/29 10:17:13 INFO mapred.IndexUpdater: mapred.reduce.tasks = 1
11/12/29 10:17:13 INFO mapred.IndexUpdater: 1 shards = -1@index_032/00000@-1
11/12/29 10:17:13 INFO mapred.IndexUpdater: mapred.input.format.class = org.apache.hadoop.contrib.index.example.LineDocInputFormat
11/12/29 10:17:13 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
11/12/29 10:17:13 INFO mapred.FileInputFormat: Total input paths to process : 3
11/12/29 10:17:13 INFO mapred.JobClient: Running job: job_201112291014_0002
11/12/29 10:17:14 INFO mapred.JobClient:  map 0% reduce 0%
11/12/29 10:17:22 INFO mapred.JobClient:  map 66% reduce 0%
11/12/29 10:17:25 INFO mapred.JobClient:  map 100% reduce 0%
11/12/29 10:17:32 INFO mapred.JobClient:  map 100% reduce 22%
11/12/29 10:17:38 INFO mapred.JobClient:  map 100% reduce 100%
11/12/29 10:17:40 INFO mapred.JobClient: Job complete: job_201112291014_0002
11/12/29 10:17:40 INFO mapred.JobClient: Counters: 18
11/12/29 10:17:40 INFO mapred.JobClient:   Job Counters
11/12/29 10:17:40 INFO mapred.JobClient:     Launched reduce tasks=1
11/12/29 10:17:40 INFO mapred.JobClient:     Launched map tasks=3
11/12/29 10:17:40 INFO mapred.JobClient:     Data-local map tasks=3
11/12/29 10:17:40 INFO mapred.JobClient:   FileSystemCounters
11/12/29 10:17:40 INFO mapred.JobClient:     FILE_BYTES_READ=2104
11/12/29 10:17:40 INFO mapred.JobClient:     HDFS_BYTES_READ=161
11/12/29 10:17:40 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=2910
11/12/29 10:17:40 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=622
11/12/29 10:17:40 INFO mapred.JobClient:   Map-Reduce Framework
11/12/29 10:17:40 INFO mapred.JobClient:     Reduce input groups=1
11/12/29 10:17:40 INFO mapred.JobClient:     Combine output records=3
11/12/29 10:17:40 INFO mapred.JobClient:     Map input records=3
11/12/29 10:17:40 INFO mapred.JobClient:     Reduce shuffle bytes=892
11/12/29 10:17:40 INFO mapred.JobClient:     Reduce output records=1
11/12/29 10:17:40 INFO mapred.JobClient:     Spilled Records=6
11/12/29 10:17:40 INFO mapred.JobClient:     Map output bytes=1302
11/12/29 10:17:40 INFO mapred.JobClient:     Map input bytes=161
11/12/29 10:17:40 INFO mapred.JobClient:     Combine input records=3
11/12/29 10:17:40 INFO mapred.JobClient:     Map output records=3
11/12/29 10:17:40 INFO mapred.JobClient:     Reduce input records=3
11/12/29 10:17:40 INFO main.UpdateIndex: Index update job is done
11/12/29 10:17:40 INFO main.UpdateIndex: Elapsed time is  27s
Elapsed time is 27s



Administrator@kelo-dichan /cygdrive/d/hadoop/run
$ bin/hadoop fs -copyToLocal  /user/kelo-dichan/administrator/*.* D:\tmp\testoutput

Administrator@kelo-dichan /cygdrive/d/hadoop/run
$ bin/hadoop fs -copyToLocal  /user/kelo-dichan/administrator/*.* /cygdrive/D/tmp/testoutput

Administrator@kelo-dichan /cygdrive/d/hadoop/run
$
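The two destination forms above name the same directory: under Cygwin, D:\tmp\testoutput maps to /cygdrive/d/tmp/testoutput. A rough sketch of that mapping in plain shell (Cygwin's own `cygpath` tool does this properly; the translation below is only an illustration):

```shell
# Map a Windows path to its Cygwin /cygdrive form: lowercase the drive
# letter, drop the colon, flip backslashes to forward slashes.
winpath='D:\tmp\testoutput'
drive=$(printf '%s' "$winpath" | cut -c1 | tr '[:upper:]' '[:lower:]')
rest=$(printf '%s' "$winpath" | cut -c3- | tr '\\' '/')
echo "/cygdrive/$drive$rest"
```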

# Fetch all files under a directory in the HDFS filesystem
$ bin/hadoop fs -get  /user/kelo-dichan/administrator/index_032/00000/ D:/tmp/testoutput



$ bin/hadoop fs -get  /user/kelo-dichan/administrator/index_035/00000/ D:/tmp/testoutput


A brief summary of distributed indexing
1. The command is as follows:
$ bin/hadoop jar hadoop-0.20.1-index.jar  -inputPaths input/input -outputPath index_msg_out_012 -indexPath index_032 -numShards 1 -numMapTasks 1 -conf conf/index-config.xml
Parameter notes
        The paths refer to paths in the HDFS distributed filesystem.
-numShards
-numMapTasks
When these two values are changed, e.g. both set to 3 (with 3 files in the input path), the resulting index is missing data (fewer documents than expected); the cause is unknown for now.

What would it look like if combined files were used instead?




$ bin/hadoop jar hadoop-0.20.1-index.jar  -inputPaths input/input -outputPath index_msg_out_020 -indexPath index_040 -numShards 3 -numMapTasks 3 -conf conf/index-config.xml
11/12/29 11:39:19 INFO main.UpdateIndex: inputPaths = input/input
11/12/29 11:39:19 INFO main.UpdateIndex: outputPath = index_msg_out_020
11/12/29 11:39:19 INFO main.UpdateIndex: shards     = null
11/12/29 11:39:19 INFO main.UpdateIndex: indexPath  = index_040
11/12/29 11:39:19 INFO main.UpdateIndex: numShards  = 3
11/12/29 11:39:19 INFO main.UpdateIndex: numMapTasks= 3
11/12/29 11:39:19 INFO main.UpdateIndex: confPath   = conf/index-config.xml
11/12/29 11:39:20 INFO main.UpdateIndex: sea.index.updater = org.apache.hadoop.contrib.index.mapred.IndexUpdater
11/12/29 11:39:20 INFO mapred.IndexUpdater: mapred.input.dir = hdfs://localhost:18888/user/kelo-dichan/administrator/input/input
11/12/29 11:39:20 INFO mapred.IndexUpdater: mapred.output.dir = hdfs://localhost:18888/user/kelo-dichan/administrator/index_msg_out_020
11/12/29 11:39:20 INFO mapred.IndexUpdater: mapred.map.tasks = 3
11/12/29 11:39:20 INFO mapred.IndexUpdater: mapred.reduce.tasks = 3
11/12/29 11:39:20 INFO mapred.IndexUpdater: 3 shards = -1@index_040/00000@-1,-1@index_040/00001@-1,-1@index_040/00002@-1
11/12/29 11:39:20 INFO mapred.IndexUpdater: mapred.input.format.class = org.apache.hadoop.contrib.index.example.LineDocInputFormat
11/12/29 11:39:20 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
11/12/29 11:39:20 INFO mapred.FileInputFormat: Total input paths to process : 3
11/12/29 11:39:20 INFO mapred.JobClient: Running job: job_201112291106_0009
11/12/29 11:39:21 INFO mapred.JobClient:  map 0% reduce 0%
11/12/29 11:39:30 INFO mapred.JobClient:  map 33% reduce 0%
11/12/29 11:39:34 INFO mapred.JobClient:  map 100% reduce 0%
11/12/29 11:39:40 INFO mapred.JobClient:  map 100% reduce 7%
11/12/29 11:39:43 INFO mapred.JobClient:  map 100% reduce 14%
11/12/29 11:39:46 INFO mapred.JobClient:  map 100% reduce 40%
11/12/29 11:39:49 INFO mapred.JobClient:  map 100% reduce 66%
11/12/29 11:39:52 INFO mapred.JobClient:  map 100% reduce 100%
11/12/29 11:39:54 INFO mapred.JobClient: Job complete: job_201112291106_0009
11/12/29 11:39:54 INFO mapred.JobClient: Counters: 18
11/12/29 11:39:54 INFO mapred.JobClient:   Job Counters
11/12/29 11:39:54 INFO mapred.JobClient:     Launched reduce tasks=3
11/12/29 11:39:54 INFO mapred.JobClient:     Launched map tasks=3
11/12/29 11:39:54 INFO mapred.JobClient:     Data-local map tasks=3
11/12/29 11:39:54 INFO mapred.JobClient:   FileSystemCounters
11/12/29 11:39:54 INFO mapred.JobClient:     FILE_BYTES_READ=2648
11/12/29 11:39:54 INFO mapred.JobClient:     HDFS_BYTES_READ=161
11/12/29 11:39:54 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=3279
11/12/29 11:39:54 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=1025
11/12/29 11:39:54 INFO mapred.JobClient:   Map-Reduce Framework
11/12/29 11:39:54 INFO mapred.JobClient:     Reduce input groups=2
11/12/29 11:39:54 INFO mapred.JobClient:     Combine output records=3
11/12/29 11:39:54 INFO mapred.JobClient:     Map input records=3
11/12/29 11:39:54 INFO mapred.JobClient:     Reduce shuffle bytes=948
11/12/29 11:39:54 INFO mapred.JobClient:     Reduce output records=2
11/12/29 11:39:54 INFO mapred.JobClient:     Spilled Records=6
11/12/29 11:39:54 INFO mapred.JobClient:     Map output bytes=1350
11/12/29 11:39:54 INFO mapred.JobClient:     Map input bytes=161
11/12/29 11:39:54 INFO mapred.JobClient:     Combine input records=3
11/12/29 11:39:54 INFO mapred.JobClient:     Map output records=3
11/12/29 11:39:54 INFO mapred.JobClient:     Reduce input records=3
11/12/29 11:39:54 INFO main.UpdateIndex: Index update job is done
11/12/29 11:39:54 INFO main.UpdateIndex: Elapsed time is  33s
Elapsed time is 33s
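The shard descriptors logged above (e.g. -1@index_040/00000@-1) appear to follow a version@directory@generation layout, one descriptor per shard, comma-separated. A small parse sketch under that assumption:

```shell
# Split the shard string logged by IndexUpdater and pull out each field
# (assumed layout: version@directory@generation per shard).
shards='-1@index_040/00000@-1,-1@index_040/00001@-1,-1@index_040/00002@-1'
echo "$shards" | tr ',' '\n' | awk -F'@' '{print "dir=" $2 " version=" $1 " gen=" $3}'
```

With -numShards 3 the reducers write one Lucene index per directory, 00000 through 00002.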

The filesystem here is the Hadoop distributed filesystem (HDFS):
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:8888</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem. file:/// hdfs://localhost:8888</description>
</property>

The filesystem here is local, which also works well (core-site.xml):
<property>
  <name>fs.default.name</name>
  <value>file:///</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem. file:/// hdfs://localhost:8888</description>
</property>
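To check which filesystem a configuration actually points at, the property can be pulled straight out of the XML. A self-contained sketch (the sample file is written to /tmp here; point the grep at the real conf/core-site.xml instead):

```shell
# Write a hypothetical core-site.xml, then extract fs.default.name from it.
cat > /tmp/core-site-demo.xml <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:8888</value>
  </property>
</configuration>
EOF
grep -A 1 '<name>fs.default.name</name>' /tmp/core-site-demo.xml \
  | sed -n 's|.*<value>\(.*\)</value>.*|\1|p'
```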

The following are examples using the local filesystem. testdata/input is a local relative directory (the full path is D:/hadoop/run/testdata/input, where D:/hadoop/run is the install path).
bin/hadoop jar hadoop-0.20.1-examples.jar wordcount  testdata/input  output-dir1

bin/hadoop jar hadoop-0.20.1-index.jar  -inputPaths testdata/input -outputPath index_msg_out_012 -indexPath index_032 -numShards 1 -numMapTasks 1 -conf conf/index-config.xml



hadoop-0.20.2

./bin/hadoop jar hadoop-0.20.2-examples.jar wordcount input/input output-dir

/cygdrive/d/tmp/testdata/input



Using MapReduce in Eclipse
  Required for the environment setup:
         Eclipse 3.3
         the hadoop-0.20.2-eclipse-plugin.jar shipped with Hadoop 0.20.2


    Launch with the debug script (after running the script below, debug the same program in Eclipse using remote debugging, which is configurable):
     ./bin/hddebug jar hadoop-0.20.2-examples.jar wordcount input/input output-dir

     Listening for transport dt_socket at address: 28888
org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://127.0.0.1:8888/user/kelo-dichan/administrator/input/input
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:224)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:241)
        at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:885)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
        at org.apache.hadoop.examples.WordCount.main(WordCount.java:67)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
        at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

Detailed steps

1. Reference

1. First get Hadoop configured and working normally under Win7.
2. Copy the bin/hadoop script and rename the copy hddebug.
3. Add one line to hddebug, right at the `if [ "x$JAVA_LIBRARY_PATH" != "x" ]; then` line, as follows:
HADOOP_OPTS="$HADOOP_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,address=28888,server=y,suspend=y"
4. Run:
   ./bin/hddebug jar hadoop-0.20.2-examples.jar wordcount input/input output-dir
   You should see: Listening for transport dt_socket at address: 28888
5. Launch Eclipse to debug the wordcount code: in the Debug configuration menu, set it to remote debugging and you can start stepping through.
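The single line added to hddebug in step 3 can be checked in isolation; suspend=y is what makes the JVM pause at startup until the Eclipse debugger attaches on port 28888:

```shell
# The JDWP debug option appended to HADOOP_OPTS in the hddebug script.
# server=y: the JVM listens; suspend=y: it waits for the debugger to attach.
HADOOP_OPTS="$HADOOP_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,address=28888,server=y,suspend=y"
echo "$HADOOP_OPTS"
```

In Eclipse the matching Remote Java Application debug configuration uses host localhost, port 28888.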


 