Notes:
OS: CentOS 6.5, 64-bit
JDK 1.7+
Hadoop 2.5.0
Key configuration:
#hosts configuration in /etc/hosts (.163 is the master node; the others are slave nodes)
192.168.100.163 master
192.168.100.165 node1
192.168.100.166 node2
192.168.100.167 node3
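The four entries above must be identical on every node. A small sketch (IPs and hostnames copied from the table above) that prints the block so it can be appended to each node's /etc/hosts:

```shell
# Sketch: print the cluster's /etc/hosts entries from two parallel arrays
# (IPs and names taken from the table above).
ips=(192.168.100.163 192.168.100.165 192.168.100.166 192.168.100.167)
names=(master node1 node2 node3)
for i in "${!ips[@]}"; do
  printf '%s %s\n' "${ips[$i]}" "${names[$i]}"
done
```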
#ssh configuration
#1. Generate an SSH key pair for the master node's user
[root@master ~]# ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
#2. On the master itself, append ~/.ssh/id_dsa.pub to ~/.ssh/authorized_keys
[root@master ~]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
#3. Test passwordless ssh login to the local machine
[root@master ~]# ssh localhost
#4. Append ~/.ssh/id_dsa.pub to ~/.ssh/authorized_keys on each target machine
[root@master ~]# scp ~/.ssh/id_dsa.pub root@node1:.ssh/
[root@master ~]# ssh node1
[root@node1 ~]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Repeat step #4 for node2 and node3.
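A common pitfall after copying keys: sshd silently falls back to password authentication when ~/.ssh or authorized_keys is group- or world-writable. A sketch of the required permissions, demonstrated in a throwaway directory standing in for ~/.ssh:

```shell
# Sketch: tighten permissions on ~/.ssh and authorized_keys on every node.
# A temporary directory stands in for ~/.ssh here so the sketch is harmless to run.
sshdir=$(mktemp -d)
touch "$sshdir/authorized_keys"
chmod 700 "$sshdir"                  # ~/.ssh itself must be 700
chmod 600 "$sshdir/authorized_keys"  # authorized_keys must be 600
stat -c '%a' "$sshdir" "$sshdir/authorized_keys"
```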
#Hadoop and JDK installation steps are omitted here
#Hadoop configuration
#1. hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.7.0_67
#2. yarn-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_67
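Both files hard-code the same JDK path; a small sketch (the helper name `check_java_home` is hypothetical) to verify the path actually contains a JVM before writing it into the two scripts:

```shell
# Sketch: verify that a candidate JAVA_HOME contains bin/java
# before hard-coding it in hadoop-env.sh and yarn-env.sh.
check_java_home() {
  if [ -x "$1/bin/java" ]; then
    echo "ok: $1"
  else
    echo "missing: $1/bin/java"
  fi
}
check_java_home /usr/java/jdk1.7.0_67
```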
#3. core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/founder/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>
#4. hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/hadoop/hdfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///home/hadoop/hdfs/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:9001</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
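The local paths referenced so far (/founder/tmp from core-site.xml, plus the name and data directories above) must exist and be writable on the relevant nodes before the first start. A sketch, run here under a temporary prefix instead of the real filesystem root so it needs no root privileges:

```shell
# Sketch: create the directories referenced in core-site.xml and hdfs-site.xml.
# $prefix stands in for / so the sketch can run anywhere without root.
prefix=$(mktemp -d)
for d in founder/tmp home/hadoop/hdfs/name home/hadoop/hdfs/data; do
  mkdir -p "$prefix/$d"
done
find "$prefix" -type d | sort
```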
#5. mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
#6. slaves
node1
node2
node3
#7. yarn-site.xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
    <description>The hostname of the RM.</description>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
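Every node needs an identical copy of the etc/hadoop directory; one common approach is scp from the master. A dry-run sketch that only prints the commands rather than executing them (the install path matches the Hadoop home used in /etc/profile below):

```shell
# Dry-run sketch: print (not execute) the commands that would push the master's
# Hadoop config directory to each slave listed in the slaves file.
conf_dir=/founder/hadoop-2.5.0/etc/hadoop
for host in node1 node2 node3; do
  echo scp -r "$conf_dir" "root@$host:${conf_dir%/*}/"
done
```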
# Hadoop environment variables in /etc/profile
export HADOOP_DEV_HOME=/founder/hadoop-2.5.0
export PATH=$PATH:$HADOOP_DEV_HOME/bin
export PATH=$PATH:$HADOOP_DEV_HOME/sbin
export HADOOP_MAPRED_HOME=${HADOOP_DEV_HOME}
export HADOOP_COMMON_HOME=${HADOOP_DEV_HOME}
export HADOOP_HDFS_HOME=${HADOOP_DEV_HOME}
export YARN_HOME=${HADOOP_DEV_HOME}
export HADOOP_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
export HDFS_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_DEV_HOME}/lib/native
export JAVA_LIBRARY_PATH=${HADOOP_DEV_HOME}/lib/native
#java setting
export JAVA_HOME=/usr/java/jdk1.7.0_67
export PATH=$JAVA_HOME/bin:$PATH
#Hadoop debug logging, useful for troubleshooting; set it as a temporary environment variable only
export HADOOP_ROOT_LOGGER=DEBUG,console
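With the configuration in place, the usual first-run sequence on the master is format, then start. A dry-run sketch that lists the standard Hadoop 2.x commands (from bin/ and sbin/) instead of executing them:

```shell
# Dry-run sketch: the first-start sequence on master, printed rather than executed.
steps=(
  "hdfs namenode -format"  # format HDFS once, before the very first start
  "start-dfs.sh"           # NameNode on master, DataNodes on node1..node3
  "start-yarn.sh"          # ResourceManager on master, NodeManagers on the slaves
  "jps"                    # verify: the expected daemons should be listed on each node
)
printf '%s\n' "${steps[@]}"
```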
Web UIs:
http://master:8088 (YARN ResourceManager)
http://master:50070 (HDFS NameNode)