
Configuring Hadoop 0.20.2 on Linux (Ubuntu)

 

Configure SSH

Create a key pair; the '' after -P is an empty passphrase. An empty passphrase is not recommended in general, but Hadoop's control scripts need to log in over SSH without prompting.

ssh-keygen -t rsa -P ''

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

After configuring, run `ssh localhost` to confirm that the machine accepts SSH connections without asking for a password.
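The key setup can be exercised safely in a throwaway directory first; a minimal sketch (the temp-dir paths are illustrative, the real commands operate on ~/.ssh):

```shell
# Sketch of the key setup against a temp dir instead of ~/.ssh.
# -f selects the output file and -q suppresses chatter (standard
# OpenSSH ssh-keygen options).
dir=$(mktemp -d)
ssh-keygen -t rsa -P '' -f "$dir/id_rsa" -q
# Append the public key to authorized_keys, as in the real setup.
cat "$dir/id_rsa.pub" >> "$dir/authorized_keys"
# sshd ignores authorized_keys files that are group/world writable.
chmod 600 "$dir/authorized_keys"
ls "$dir"
```

If passwordless login still fails, overly loose permissions on ~/.ssh (it should be 700) are a common cause.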

Download Hadoop

wget http://mirror.bjtu.edu.cn/apache/hadoop/common/hadoop-0.20.2/hadoop-0.20.2.tar.gz

tar -xvf hadoop-0.20.2.tar.gz

Add environment variables

In /etc/environment:

HADOOP_HOME=<hadoop install path>

JAVA_HOME=<JDK path>

Add to /etc/profile:

export HADOOP_HOME=<hadoop install path>

export JAVA_HOME=<JDK path>
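A concrete /etc/profile fragment might look like this (the install paths are illustrative; adding the bin directories to PATH is optional but convenient):

```shell
# Illustrative paths -- substitute your actual install locations.
export HADOOP_HOME=/usr/local/hadoop-0.20.2
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
# Optional: run hadoop and java without typing full paths.
export PATH=$PATH:$HADOOP_HOME/bin:$JAVA_HOME/bin
```

Remember to `source /etc/profile` (or log out and back in) for the change to take effect.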

Edit the Hadoop configuration files

In $HADOOP_HOME/conf/hadoop-env.sh, change

#export JAVA_HOME=

to

export JAVA_HOME=<JDK path>
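The edit can also be scripted with sed; a sketch that applies the substitution to a scratch copy (the sample line and JDK path are illustrative, and in practice you would target $HADOOP_HOME/conf/hadoop-env.sh):

```shell
# Demonstrate the hadoop-env.sh edit on a scratch file.
env_sh=$(mktemp)
echo '# export JAVA_HOME=/usr/lib/j2sdk1.5-sun' > "$env_sh"
# Uncomment the line and point it at the installed JDK (illustrative path).
sed -i 's|^# *export JAVA_HOME=.*|export JAVA_HOME=/usr/lib/jvm/java-6-openjdk|' "$env_sh"
grep '^export JAVA_HOME=' "$env_sh"
```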

Set up the HDFS directory

Create a working directory for HDFS data (referenced as hadoop.tmp.dir below):

mkdir $HOME/tmp

chmod -R 777 $HOME/tmp

Edit conf/core-site.xml, adding:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/du/tmp</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
</property>
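For reference, the property blocks in these files all sit inside a single <configuration> element; a complete conf/core-site.xml would look roughly like this (the /home/du/tmp path follows the directory created above):

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/du/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>
```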

Edit conf/mapred-site.xml, adding:

<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at. If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>

Edit conf/hdfs-site.xml, adding:

<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>

Format the file system

hadoop namenode -format

Start the pseudo-distributed cluster

$HADOOP_HOME/bin/start-all.sh

Check that the daemons started successfully

jps

18160 SecondaryNameNode
17777 NameNode
17970 DataNode
18477 Jps
18409 TaskTracker
18231 JobTracker
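The five daemons above can also be checked mechanically; a small sketch (the check_daemons function name is mine, not part of Hadoop):

```shell
# check_daemons: read `jps` output on stdin and report any of the five
# expected pseudo-distributed daemons that are missing.
check_daemons() {
  out=$(cat)
  missing=0
  for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
    if ! printf '%s\n' "$out" | grep -q "$d"; then
      echo "missing: $d"
      missing=1
    fi
  done
  return $missing
}
# Usage: jps | check_daemons && echo "all daemons up"
```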


Stop the cluster

$HADOOP_HOME/bin/stop-all.sh

Hadoop web interfaces

JobTracker: http://localhost:50030/

TaskTracker: http://localhost:50060/

NameNode: http://localhost:50070/

