1) Install Java
2) Set up passwordless SSH
3) Unpack Hadoop
These three steps are the same as for Hadoop 1.0.3.
For the 1.0.3 installation, see: http://pftzzg.iteye.com/blog/1910153
4) Configuration
a) Set the environment variables in /etc/profile
[root@centerOsMaster home]# vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.6.0_31
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export TERM=linux
export HADOOP_HOME=/hadoop/hadoop-0.20.2
export PATH=$HADOOP_HOME/bin:$PATH
[root@centerOsMaster home]# source /etc/profile
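Before moving on, it is worth checking that the exports actually took effect after `source /etc/profile`. A minimal sanity check, reusing the paths set above (adjust them if your install locations differ):

```shell
# Re-state the exports from /etc/profile; paths mirror the values above.
export JAVA_HOME=/usr/java/jdk1.6.0_31
export HADOOP_HOME=/hadoop/hadoop-0.20.2
export PATH=$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH

# Both bin directories should now be on PATH.
echo "$PATH" | grep -q "$HADOOP_HOME/bin" && echo "HADOOP_HOME on PATH"
echo "$PATH" | grep -q "$JAVA_HOME/bin"   && echo "JAVA_HOME on PATH"
```

If either line prints nothing, the profile was not sourced in the current shell.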
b) Set JAVA_HOME in hadoop-env.sh
[hadoop@centerOsMaster conf]$ vim hadoop-env.sh
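The only edit hadoop-env.sh normally needs in 0.20.x is an explicit JAVA_HOME. A sketch, assuming the same JDK path used in /etc/profile above:

```shell
# conf/hadoop-env.sh -- uncomment and set JAVA_HOME.
# The path assumes the JDK location from /etc/profile; adjust to your install.
export JAVA_HOME=/usr/java/jdk1.6.0_31
```

Without this, the daemons may fail to start even when JAVA_HOME is set in the login shell, because the start scripts do not always inherit it.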
c) Edit core-site.xml
[hadoop@centerOsMaster conf]$ vim core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/hadoop/data/tmp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://centerOsMaster:9000</value>
  </property>
</configuration>
d) Edit hdfs-site.xml
[hadoop@centerOsMaster conf]$ vi hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/hadoop/data/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/hadoop/data/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
e) Edit mapred-site.xml
[hadoop@centerOsMaster conf]$ vi mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>centerOsMaster:9001</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/hadoop/data/local</value>
  </property>
</configuration>
Note: mapred.job.tracker takes a plain host:port pair, not an hdfs:// URL.
5) Usage
[hadoop@centerOsMaster conf]$ mkdir -p /hadoop/data/local
[hadoop@centerOsMaster conf]$ mkdir -p /hadoop/data/name
[hadoop@centerOsMaster conf]$ mkdir -p /hadoop/data/data
[hadoop@centerOsMaster conf]$ mkdir -p /hadoop/data/tmp
[hadoop@centerOsMaster bin]$ hadoop namenode -format
12/12/08 23:47:19 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = centerOsMaster/192.168.80.55
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
Re-format filesystem in /hadoop/data/name ? (Y or N) Y
12/12/08 23:47:21 INFO namenode.FSNamesystem: fsOwner=hadoop,hadoop
12/12/08 23:47:21 INFO namenode.FSNamesystem: supergroup=supergroup
12/12/08 23:47:21 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/12/08 23:47:21 INFO common.Storage: Image file of size 96 saved in 0 seconds.
12/12/08 23:47:21 INFO common.Storage: Storage directory /hadoop/data/name has been successfully formatted.
12/12/08 23:47:21 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at centerOsMaster/192.168.80.55
************************************************************/
Note: the Y entered at the Re-format prompt above must be uppercase; the prompt is case-sensitive.
[hadoop@centerOsMaster bin]$ ./start-all.sh
starting namenode, logging to /hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-namenode-centerOsMaster.out
localhost: starting datanode, logging to /hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-centerOsMaster.out
localhost: starting secondarynamenode, logging to /hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-secondarynamenode-centerOsMaster.out
starting jobtracker, logging to /hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-jobtracker-centerOsMaster.out
localhost: starting tasktracker, logging to /hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-tasktracker-centerOsMaster.out
[hadoop@centerOsMaster bin]$
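With start-all.sh finished, `jps` (shipped with the JDK) is a quick way to confirm that all five daemons came up on this single node. A sketch of the check; the PID column will differ on your machine:

```shell
# jps lists local Hadoop JVMs; a healthy pseudo-distributed node shows
# all five daemons (plus the Jps process itself).
jps
# Expected process names (PIDs omitted):
#   NameNode
#   DataNode
#   SecondaryNameNode
#   JobTracker
#   TaskTracker
#   Jps

# Basic HDFS smoke test: the root directory should be listable.
hadoop fs -ls /
```

If a daemon is missing, check its .out/.log file under /hadoop/hadoop-0.20.2/logs for the reason.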
Summary:
1) Watch the ports (9000 for HDFS, 9001 for the JobTracker); they must be free and must match the config files.
2) Watch the Hadoop version; the configuration differs between releases.
For the cluster installation, see: http://pftzzg.iteye.com/blog/1910171
For hadoop-0.20, see: http://pftzzg.iteye.com/admin/blogs/1911023
For 1.0.3, see: http://pftzzg.iteye.com/admin/blogs/1910153