Testing Hadoop with WordCount: counting the occurrences of each word
1. First, create a new directory testFiles and create two test data text files in it:
[root@SC-026 hadoop-1.0.3]# mkdir testFiles
[root@SC-026 hadoop-1.0.3]# cd testFiles/
[root@SC-026 testFiles]# echo "hello world, bye bye, world." > file1.txt
[root@SC-026 testFiles]# echo "hello hadoop, how are you? hadoop." > file2.txt
2. Copy the ./testFiles directory from the local filesystem to HDFS under the current user's home directory (/user/root), naming the target directory input.
This first attempt failed with the following error:
[root@SC-026 hadoop-1.0.3]# bin/hadoop dfs -put ./testFiles input
put: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /user/root/input. Name node is in safe mode.
The error means the Hadoop NameNode is still in safe mode. Leave safe mode as follows, and the copy then succeeds:
[root@SC-026 hadoop-1.0.3]# bin/hadoop dfsadmin -safemode leave
Safe mode is OFF
[root@SC-026 hadoop-1.0.3]# bin/hadoop dfs -put ./testFiles input
3. Run the wordcount example job, writing results to output:
[root@SC-026 hadoop-1.0.3]# bin/hadoop jar hadoop-examples-1.0.3.jar wordcount input output
12/08/31 09:21:34 INFO input.FileInputFormat: Total input paths to process : 2
12/08/31 09:21:34 INFO util.NativeCodeLoader: Loaded the native-hadoop library
12/08/31 09:21:34 WARN snappy.LoadSnappy: Snappy native library not loaded
12/08/31 09:21:35 INFO mapred.JobClient: Running job: job_201208310909_0001
12/08/31 09:21:36 INFO mapred.JobClient: map 0% reduce 0%
12/08/31 09:21:57 INFO mapred.JobClient: map 50% reduce 0%
12/08/31 09:22:00 INFO mapred.JobClient: map 100% reduce 0%
12/08/31 09:22:12 INFO mapred.JobClient: map 100% reduce 100%
12/08/31 09:22:16 INFO mapred.JobClient: Job complete: job_201208310909_0001
12/08/31 09:22:16 INFO mapred.JobClient: Counters: 29
12/08/31 09:22:16 INFO mapred.JobClient: Job Counters
12/08/31 09:22:16 INFO mapred.JobClient: Launched reduce tasks=1
12/08/31 09:22:16 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=27675
12/08/31 09:22:16 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
12/08/31 09:22:16 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
12/08/31 09:22:16 INFO mapred.JobClient: Launched map tasks=2
12/08/31 09:22:16 INFO mapred.JobClient: Data-local map tasks=2
12/08/31 09:22:16 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=14460
12/08/31 09:22:16 INFO mapred.JobClient: File Output Format Counters
12/08/31 09:22:16 INFO mapred.JobClient: Bytes Written=78
12/08/31 09:22:16 INFO mapred.JobClient: FileSystemCounters
12/08/31 09:22:16 INFO mapred.JobClient: FILE_BYTES_READ=136
12/08/31 09:22:16 INFO mapred.JobClient: HDFS_BYTES_READ=278
12/08/31 09:22:16 INFO mapred.JobClient: FILE_BYTES_WRITTEN=64909
12/08/31 09:22:16 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=78
12/08/31 09:22:16 INFO mapred.JobClient: File Input Format Counters
12/08/31 09:22:16 INFO mapred.JobClient: Bytes Read=64
12/08/31 09:22:16 INFO mapred.JobClient: Map-Reduce Framework
12/08/31 09:22:16 INFO mapred.JobClient: Map output materialized bytes=142
12/08/31 09:22:16 INFO mapred.JobClient: Map input records=2
12/08/31 09:22:16 INFO mapred.JobClient: Reduce shuffle bytes=142
12/08/31 09:22:16 INFO mapred.JobClient: Spilled Records=22
12/08/31 09:22:16 INFO mapred.JobClient: Map output bytes=108
12/08/31 09:22:16 INFO mapred.JobClient: CPU time spent (ms)=3480
12/08/31 09:22:16 INFO mapred.JobClient: Total committed heap usage (bytes)=411828224
12/08/31 09:22:16 INFO mapred.JobClient: Combine input records=11
12/08/31 09:22:16 INFO mapred.JobClient: SPLIT_RAW_BYTES=214
12/08/31 09:22:16 INFO mapred.JobClient: Reduce input records=11
12/08/31 09:22:16 INFO mapred.JobClient: Reduce input groups=10
12/08/31 09:22:16 INFO mapred.JobClient: Combine output records=11
12/08/31 09:22:16 INFO mapred.JobClient: Physical memory (bytes) snapshot=447000576
12/08/31 09:22:16 INFO mapred.JobClient: Reduce output records=10
12/08/31 09:22:16 INFO mapred.JobClient: Virtual memory (bytes) snapshot=1634324480
12/08/31 09:22:16 INFO mapred.JobClient: Map output records=11
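The counter values line up with the test data: each file is a single line (Map input records=2), whitespace tokenization yields 11 tokens in total (Map output records=11), the combiner cannot merge anything because no token repeats within a single file (Combine output records=11), and only "hello" appears in both files, leaving 10 distinct keys (Reduce input groups=10, Reduce output records=10). A plain-Python sanity check of the same arithmetic, run locally without Hadoop:

```python
from collections import Counter

# One line per file, exactly as created in step 1.
files = [
    "hello world, bye bye, world.",
    "hello hadoop, how are you? hadoop.",
]

# Map phase: the example WordCount tokenizes on whitespace,
# emitting one (word, 1) pair per token.
map_output = [tok for line in files for tok in line.split()]

# Combine runs per map task (per file); reduce merges across tasks.
combine_output = sum(len(Counter(line.split())) for line in files)
reduced = Counter(map_output)

print("Map output records =", len(map_output))        # 11
print("Combine output records =", combine_output)     # 11
print("Reduce input groups =", len(reduced))          # 10
```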
4. View the results:
[root@SC-026 hadoop-1.0.3]# bin/hadoop dfs -cat output/*
are 1
bye 1
bye, 1
hadoop, 1
hadoop. 1
hello 2
how 1
world, 1
world. 1
you? 1
cat: File does not exist: /user/root/output/_logs
The last line is only a complaint about the output/_logs directory that the job created alongside the result files; the word counts above are complete. (Using bin/hadoop dfs -cat output/part-* prints only the result files and avoids the message.) Next, copy the results from HDFS to the local filesystem and view them there:
[root@SC-026 hadoop-1.0.3]# bin/hadoop dfs -get output output
[root@SC-026 hadoop-1.0.3]# cat output/*
cat: output/_logs: Is a directory
are 1
bye 1
bye, 1
hadoop, 1
hadoop. 1
hello 2
how 1
world, 1
world. 1
you? 1
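Note that "bye" and "bye," are counted as different words: the example WordCount splits purely on whitespace, so trailing punctuation stays attached to each token. If you wanted punctuation-insensitive counts you would normalize the tokens first. A hypothetical local sketch of that normalization (plain Python, not part of the Hadoop job itself):

```python
import string
from collections import Counter

# The same two test lines from step 1.
lines = [
    "hello world, bye bye, world.",
    "hello hadoop, how are you? hadoop.",
]

# Strip leading/trailing punctuation and lowercase each token
# before counting, so "bye" and "bye," fall into one bucket.
counts = Counter(
    tok.strip(string.punctuation).lower()
    for line in lines
    for tok in line.split()
)
for word, n in sorted(counts.items()):
    print(word, n)
```

With this normalization, "bye", "world", and "hadoop" each count 2 instead of being split across punctuation variants.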
Note: bin/hadoop dfs -help describes how to use the various HDFS commands.