Honestly, getting started with Hadoop was simpler than I expected, even though Hadoop itself is fairly complex (the sheer thickness of the book on it says it all).
Development environment: Mac OS (Unix)
Hadoop version: 0.21.0
Eclipse version: 3.6.0
Step 1: Download Hadoop
Download from: http://hadoop.apache.org/common/releases.html#Download
Note that the current 0.21 release is the latest version, but it is unstable, unsupported, and carries no security guarantees.
Step 2: Configure Hadoop
Unpack the downloaded archive, go into the conf directory, open core-site.xml, and change it as follows:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>localhost:9000</value>
  </property>
</configuration>
Open mapred-site.xml and change it as follows:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
Open hdfs-site.xml and change it as follows:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
Open hadoop-env.sh and add the following:
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home
export HADOOP_INSTALL=/Users/alex/Documents/DevRes/hadoop-0.21.0
export PATH=$PATH:$HADOOP_INSTALL/bin
Adjust these paths to match your own setup.
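A quick way to check that the exports took effect is to re-run them in a fresh shell and inspect PATH. A minimal sketch (the HADOOP_INSTALL value is the example path from above; substitute your own):

```shell
# Re-create the two exports from hadoop-env.sh (example path; use your own)
# and count how many PATH entries point at the Hadoop bin directory.
export HADOOP_INSTALL=/Users/alex/Documents/DevRes/hadoop-0.21.0
export PATH=$PATH:$HADOOP_INSTALL/bin
echo "$PATH" | tr ':' '\n' | grep -c "^$HADOOP_INSTALL/bin$"   # prints 1
```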
Step 3: Configure SSH
On Windows you can install OpenSSH; this article covers Mac OS: open System Preferences, find Sharing, and check Remote Login, as shown below:
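Startup (Step 4 below) will otherwise prompt for your password repeatedly; the standard OpenSSH remedy, not part of the original steps, is to authorize your own public key for localhost logins. A sketch that uses a scratch directory so it has no side effects; for real use the files belong in ~/.ssh:

```shell
# Generate a passphrase-less RSA key pair and authorize it for login.
# KEYDIR is a scratch directory for this sketch; in practice use "$HOME/.ssh".
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -P '' -f "$KEYDIR/id_rsa" >/dev/null
cat "$KEYDIR/id_rsa.pub" >> "$KEYDIR/authorized_keys"
chmod 600 "$KEYDIR/authorized_keys"
```

With the real ~/.ssh set up this way, ssh localhost should log in without a password prompt.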
Step 4: Run Hadoop
Open a terminal, cd to the Hadoop directory, and run:
bin/hadoop namenode -format
The terminal then prints:
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
11/09/04 21:19:38 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost/127.0.0.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.21.0
STARTUP_MSG: classpath = /Users/alex/documents/devres/hadoop-0.21.0/bin/../conf:/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home/lib/tools.jar:/Users/alex/documents/devres/hadoop-0.21.0/bin/..:/Users/alex/documents/devres/hadoop-0.21.0/bin/../hadoop-common-0.21.0.ja
......... (lines omitted)
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.21 -r 985326; compiled by 'tomwhite' on Tue Aug 17 01:02:28 EDT 2010
************************************************************/
11/09/04 21:19:39 INFO namenode.FSNamesystem: defaultReplication = 1
11/09/04 21:19:39 INFO namenode.FSNamesystem: maxReplication = 512
11/09/04 21:19:39 INFO namenode.FSNamesystem: minReplication = 1
11/09/04 21:19:39 INFO namenode.FSNamesystem: maxReplicationStreams = 2
11/09/04 21:19:39 INFO namenode.FSNamesystem: shouldCheckForEnoughRacks = false
11/09/04 21:19:39 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/09/04 21:19:39 INFO namenode.FSNamesystem: fsOwner=alex
11/09/04 21:19:39 INFO namenode.FSNamesystem: supergroup=supergroup
11/09/04 21:19:39 INFO namenode.FSNamesystem: isPermissionEnabled=true
11/09/04 21:19:39 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
11/09/04 21:19:39 INFO common.Storage: Image file of size 110 saved in 0 seconds.
11/09/04 21:19:39 INFO common.Storage: Storage directory /tmp/hadoop-alex/dfs/name has been successfully formatted.
11/09/04 21:19:39 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.0.1
************************************************************/
Then run bin/start-all.sh.
During startup you will be asked for your password (your machine's login password) several times.
The terminal prints:
This script is Deprecated. Instead use start-dfs.sh and start-mapred.sh
starting namenode, logging to /Users/alex/documents/devres/hadoop-0.21.0/bin/../logs/hadoop-alex-namenode-localhost.out
Password:
localhost: starting datanode, logging to /Users/alex/documents/devres/hadoop-0.21.0/bin/../logs/hadoop-alex-datanode-localhost.out
Password:
localhost: starting secondarynamenode, logging to /Users/alex/documents/devres/hadoop-0.21.0/bin/../logs/hadoop-alex-secondarynamenode-localhost.out
starting jobtracker, logging to /Users/alex/documents/devres/hadoop-0.21.0/bin/../logs/hadoop-alex-jobtracker-localhost.out
Password:
localhost: starting tasktracker, logging to /Users/alex/documents/devres/hadoop-0.21.0/bin/../logs/hadoop-alex-tasktracker-localhost.out
Step 5: Verify startup
Open http://localhost:50070
If the page cannot be found, check the logs (in the logs folder under the Hadoop directory); otherwise you should see the following page:
Then open http://localhost:50030/
You should see the following page (if not, your jobtracker failed to start; check the logs):
At this point, Hadoop is configured successfully.
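Besides the two web pages, the JDK's jps tool lists the running Java daemons; on a healthy single-node 0.21.0 setup all five should appear. A sketch only, since the output depends entirely on your machine:

```shell
# jps ships with the JDK and lists running Java processes by name.
jps
# On a working single-node setup the list should include:
#   NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker
```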
Step 6: Test
Copy a file into HDFS: open a terminal, cd to the Hadoop directory, and run:
bin/hadoop fs -copyFromLocal [source file] [destination]
For example, I ran:
bin/hadoop fs -copyFromLocal /Users/alex/desktop/persons.rtf hdfs://localhost/tmp/hadoop-alex
The terminal then prints:
11/09/04 21:23:45 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/09/04 21:23:45 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
This indicates success.
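To double-check the copy, you can list the target directory and stream the file back. A sketch against the paths used above; it needs the cluster from Step 4 to be running, so it cannot be executed standalone:

```shell
# List the HDFS directory we copied into, then print the file to stdout.
bin/hadoop fs -ls hdfs://localhost/tmp/hadoop-alex
bin/hadoop fs -cat hdfs://localhost/tmp/hadoop-alex/persons.rtf
```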
Step 7: Configure Eclipse
First find the directory hadoop-0.21.0/mapred/contrib/eclipse-plugin and copy the eclipse-plugin jar inside it into your Eclipse plugins folder.
Note: if the plugin does not work, a copy is attached at the bottom of this article; if that does not work either, try searching for one online.
Now open Eclipse, choose Other under Open Perspective, find the Map/Reduce perspective, and open it.
You will see a new DFS Locations node in Project Explorer and in the view below it; right-click to create a new Location, as shown below:
A configuration window opens; the port number is the one we configured in core-site.xml, i.e. 9000, as shown below:
After clicking OK you can expand the DFS Location in the Project Explorer panel on the left, as shown below:
Step 8: Write code in Eclipse
In the Map/Reduce perspective, create a new Map/Reduce project (it is no different from an ordinary Java project) and add a Java class that reads the contents of the file we copied into HDFS earlier:
package cn.com.fri;

import java.io.FileNotFoundException;
import java.io.IOException;
import java.net.URISyntaxException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.security.AccessControlException;

public class Test {
    public static void main(String[] args) throws AccessControlException,
            FileNotFoundException, IOException, URISyntaxException {
        // When run inside a Hadoop location, no URI is needed;
        // otherwise pass one to getFileContext().
        FileContext fc = FileContext.getFileContext();
        FSDataInputStream fsInput = fc.open(new Path(
                "/tmp/hadoop-alex/persons.rtf"));
        IOUtils.copyBytes(fsInput, System.out, 4090, false);
        fsInput.seek(0);
        IOUtils.copyBytes(fsInput, System.out, 4090, false);
    }
}
Then right-click -> Run As -> Run on Hadoop; you will be asked to choose a location, so just pick the one we created earlier.
Because I called seek with position 0, the result is printed twice:
11/09/04 22:32:33 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
11/09/04 22:32:33 WARN conf.Configuration: fs.default.name is deprecated. Instead, use fs.defaultFS
11/09/04 22:32:33 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/09/04 22:32:33 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
{\rtf1\ansi\ansicpg936\cocoartf1038\cocoasubrtf360
{\fonttbl\f0\fnil\fcharset134 STHeitiSC-Light;}
{\colortbl;\red255\green255\blue255;}
{\*\listtable{\list\listtemplateid1\listhybrid{\listlevel\levelnfc0\levelnfcn0\leveljc0\leveljcn0\levelfollow0\levelstartat1\levelspace360\levelindent0{\*\levelmarker \{decimal\}.}{\leveltext\leveltemplateid1\'02\'00.;}{\levelnumbers\'01;}\fi-360\li720\lin720 }{\listname ;}\listid1}}
{\*\listoverridetable{\listoverride\listid1\listoverridecount0\ls1}}
\paperw11900\paperh16840\margl1440\margr1440\vieww11680\viewh11900\viewkind0
\pard\tx220\tx720\tx1133\tx1700\tx2267\tx2834\tx3401\tx3968\tx4535\tx5102\tx5669\tx6236\tx6803\li720\fi-720\ql\qnatural\pardirnatural
\ls1\ilvl0
\f0\fs24 \cf0 {\listtext 1. }\'d0\'ec\'d6\'f9\'a3\'ac\
{\listtext 2. }\'c0\'ee\'cf\'fe\'c0\'f2\'a3\'ac\
{\listtext 3. }\'d5\'d4\'d6\'f9\'a3\'ac\
{\listtext 4. }\'d6\'ec\'b9\'e3\'b7\'ef\'a3\'ac\
{\listtext 5. }\'d0\'bb\'c0\'e8\'c0\'e8\'a3\'ac\
{\listtext 6. }\'d6\'ec\'ce\'b0\'a3\'ac\
{\listtext 7. }\'d5\'c5\'b5\'a4\'b5\'a4\'a3\'ac\
{\listtext 8. }\'c0\'ee\'b5\'a4\'a3\'ac\
{\listtext 9. }\'b2\'b7\'ce\'c4\'c4\'c8\'a3\'ac\
{\listtext 10. }\'d6\'dc\'ca\'ab\'a3\'ac\
{\listtext 11. }\'d5\'d4\'cd\'ee\'b6\'ab\'a3\'ac\
{\listtext 12. }\'b9\'f9\'b7\'ad\'b5\'dc\'a3\'ac\
{\listtext 13. }\'cc\'c6\'be\'fc\'b2\'a8\'a3\'ac\
{\listtext 14. }\'b8\'df\'cf\'e8\'a3\'ac\
{\listtext 15. }\'d1\'a6\'c3\'cd\'a3\'ac\
{\listtext 16. }\'ba\'fa\'b3\'a4\'ba\'d8\'a3\'ac\
{\listtext 17. }\'c0\'ee\'c7\'bf\'a3\'ac\
{\listtext 18. }\'c0\'ee\'bb\'d4\'a3\'ac\
{\listtext 19. }\'c8\'ce\'c5\'f4\'a3\'ac\
{\listtext 20. }\'d7\'db\'bc\'d2\'c7\'ed\'a3\'ac\
{\listtext 21. }\'b6\'a1\'d1\'de\'a3\'ac\
{\listtext 22. }\'b6\'ad\'c1\'c1\'a3\'ac\
{\listtext 23. }\'c3\'ab\'ba\'ec\'a3\'ac\
{\listtext 24. }\'c5\'ed\'c7\'e5\'d4\'b4\'a3\'ac\
{\listtext 25. }\'c3\'cf\'c7\'ec\'bf\'c2\'a3\'ac\
{\listtext 26. }\'c0\'ee\'b7\'e5\'a3\'ac\
{\listtext 27. }\'c2\'ed\'c1\'d6\'a3\'ac\
{\listtext 28. }\'b3\'c9\'b3\'cc\'a3\'ac\
{\listtext 29. }\'b7\'b6\'c9\'dc\'b4\'a8\'a3\'ac\
{\listtext 30. }\'c4\'aa\'b7\'bc\'a3\'ac\
{\listtext 31. }\'b6\'a1\'cc\'ce\'a3\'ac\
{\listtext 32. }\'b6\'c5\'d3\'b0\'a3\'ac\
{\listtext 33. }\'c0\'ee\'b7\'ef\'a3\'ac\
{\listtext 34. }\'ba\'ab\'c3\'f7\'c3\'f7\'a3\'ac\
{\listtext 35. }\'c9\'f2\'cf\'fe\'c0\'f6\'a3\'ac\
{\listtext 36. }\'d0\'ed\'b3\'c9\'a3\'ac\
{\listtext 37. }\'d5\'d4\'d5\'f1\'a3\'ac\
{\listtext 38. }\'c2\'de\'be\'b2\'a3\'ac\
{\listtext 39. }\'b6\'c5\'d3\'b0\'a3\'ac\
{\listtext 40. }\'d6\'ec\'b7\'ef\'a3\'ac\
{\listtext 41. }\'cb\'e5\'c0\'f6\'a3\'ac\
{\listtext 42. }\'ba\'fa\'d1\'f4\'d1\'f4\'a3\'a8\'ba\'fa\'d5\'f1\'a3\'a9\'a3\'ac\
{\listtext 43. }\'c2\'ed\'cf\'fe\'c3\'b7\
{\listtext 44. }\'b5\'cb\'b1\'f3\'a3\'ac\
{\listtext 45. }\'cf\'ee\'c3\'ce\'e9\'aa\'a3\'ac\
{\listtext 46. }\'d1\'ee\'c1\'f8\'a3\'ac\
{\listtext 47. }\'b8\'f0\'ea\'bb\
}{\rtf1\ansi\ansicpg936\cocoartf1038\cocoasubrtf360
{\fonttbl\f0\fnil\fcharset134 STHeitiSC-Light;}
{\colortbl;\red255\green255\blue255;}
{\*\listtable{\list\listtemplateid1\listhybrid{\listlevel\levelnfc0\levelnfcn0\leveljc0\leveljcn0\levelfollow0\levelstartat1\levelspace360\levelindent0{\*\levelmarker \{decimal\}.}{\leveltext\leveltemplateid1\'02\'00.;}{\levelnumbers\'01;}\fi-360\li720\lin720 }{\listname ;}\listid1}}
{\*\listoverridetable{\listoverride\listid1\listoverridecount0\ls1}}
\paperw11900\paperh16840\margl1440\margr1440\vieww11680\viewh11900\viewkind0
\pard\tx220\tx720\tx1133\tx1700\tx2267\tx2834\tx3401\tx3968\tx4535\tx5102\tx5669\tx6236\tx6803\li720\fi-720\ql\qnatural\pardirnatural
\ls1\ilvl0
\f0\fs24 \cf0 {\listtext 1. }\'d0\'ec\'d6\'f9\'a3\'ac\
{\listtext 2. }\'c0\'ee\'cf\'fe\'c0\'f2\'a3\'ac\
{\listtext 3. }\'d5\'d4\'d6\'f9\'a3\'ac\
{\listtext 4. }\'d6\'ec\'b9\'e3\'b7\'ef\'a3\'ac\
{\listtext 5. }\'d0\'bb\'c0\'e8\'c0\'e8\'a3\'ac\
{\listtext 6. }\'d6\'ec\'ce\'b0\'a3\'ac\
{\listtext 7. }\'d5\'c5\'b5\'a4\'b5\'a4\'a3\'ac\
{\listtext 8. }\'c0\'ee\'b5\'a4\'a3\'ac\
{\listtext 9. }\'b2\'b7\'ce\'c4\'c4\'c8\'a3\'ac\
{\listtext 10. }\'d6\'dc\'ca\'ab\'a3\'ac\
{\listtext 11. }\'d5\'d4\'cd\'ee\'b6\'ab\'a3\'ac\
{\listtext 12. }\'b9\'f9\'b7\'ad\'b5\'dc\'a3\'ac\
{\listtext 13. }\'cc\'c6\'be\'fc\'b2\'a8\'a3\'ac\
{\listtext 14. }\'b8\'df\'cf\'e8\'a3\'ac\
{\listtext 15. }\'d1\'a6\'c3\'cd\'a3\'ac\
{\listtext 16. }\'ba\'fa\'b3\'a4\'ba\'d8\'a3\'ac\
{\listtext 17. }\'c0\'ee\'c7\'bf\'a3\'ac\
{\listtext 18. }\'c0\'ee\'bb\'d4\'a3\'ac\
{\listtext 19. }\'c8\'ce\'c5\'f4\'a3\'ac\
{\listtext 20. }\'d7\'db\'bc\'d2\'c7\'ed\'a3\'ac\
{\listtext 21. }\'b6\'a1\'d1\'de\'a3\'ac\
{\listtext 22. }\'b6\'ad\'c1\'c1\'a3\'ac\
{\listtext 23. }\'c3\'ab\'ba\'ec\'a3\'ac\
{\listtext 24. }\'c5\'ed\'c7\'e5\'d4\'b4\'a3\'ac\
{\listtext 25. }\'c3\'cf\'c7\'ec\'bf\'c2\'a3\'ac\
{\listtext 26. }\'c0\'ee\'b7\'e5\'a3\'ac\
{\listtext 27. }\'c2\'ed\'c1\'d6\'a3\'ac\
{\listtext 28. }\'b3\'c9\'b3\'cc\'a3\'ac\
{\listtext 29. }\'b7\'b6\'c9\'dc\'b4\'a8\'a3\'ac\
{\listtext 30. }\'c4\'aa\'b7\'bc\'a3\'ac\
{\listtext 31. }\'b6\'a1\'cc\'ce\'a3\'ac\
{\listtext 32. }\'b6\'c5\'d3\'b0\'a3\'ac\
{\listtext 33. }\'c0\'ee\'b7\'ef\'a3\'ac\
{\listtext 34. }\'ba\'ab\'c3\'f7\'c3\'f7\'a3\'ac\
{\listtext 35. }\'c9\'f2\'cf\'fe\'c0\'f6\'a3\'ac\
{\listtext 36. }\'d0\'ed\'b3\'c9\'a3\'ac\
{\listtext 37. }\'d5\'d4\'d5\'f1\'a3\'ac\
{\listtext 38. }\'c2\'de\'be\'b2\'a3\'ac\
{\listtext 39. }\'b6\'c5\'d3\'b0\'a3\'ac\
{\listtext 40. }\'d6\'ec\'b7\'ef\'a3\'ac\
{\listtext 41. }\'cb\'e5\'c0\'f6\'a3\'ac\
{\listtext 42. }\'ba\'fa\'d1\'f4\'d1\'f4\'a3\'a8\'ba\'fa\'d5\'f1\'a3\'a9\'a3\'ac\
{\listtext 43. }\'c2\'ed\'cf\'fe\'c3\'b7\
{\listtext 44. }\'b5\'cb\'b1\'f3\'a3\'ac\
{\listtext 45. }\'cf\'ee\'c3\'ce\'e9\'aa\'a3\'ac\
{\listtext 46. }\'d1\'ee\'c1\'f8\'a3\'ac\
{\listtext 47. }\'b8\'f0\'ea\'bb\
}
The RTF file contains Chinese text.
============== End of article ==============
That's all for what I learned today.
- hadoop-0.21.0-eclipse-plugin-3.6.jar (4.2 MB)
- Downloads: 129
Comments
#3
struts2coding
2012-12-28
Thanks, OP, I finally got it set up.
#2
liuwenbo200285
2012-08-30
liuwenbo200285 wrote:
wenbomac:bin root# hadoop namenode -format
-sh: hadoop: command not found
What causes this?
It only works if you run it as bin/start-all.sh. Ugh!
#1
liuwenbo200285
2012-08-29
wenbomac:bin root# hadoop namenode -format
-sh: hadoop: command not found
What causes this?