If you have never installed a cluster yourself, you may not have noticed: Hadoop does not ship a 64-bit build. Before installing Hadoop on a 64-bit system we need to compile the source package ourselves, otherwise some of the native libraries under lib/ cannot be used. (Some people will say Hadoop doesn't care about the word size of the OS; the Java code doesn't, but the native libraries do. Ha, below I share the embarrassing episode from my first build.)
What goes wrong if you don't recompile? Here is the problem I ran into:
[root@db96 hadoop]# hadoop dfs -put ./in
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
14/07/17 17:07:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
put: `./in': No such file or directory
Finding the cause:
Inspect the local native library:
[root@db96 hadoop]# file /usr/local/hadoop/lib/native/libhadoop.so.1.0.0
/usr/local/hadoop/lib/native/libhadoop.so.1.0.0: ELF 32-bit LSB shared object,
Intel 80386, version 1 (SYSV), dynamically linked, not stripped
This is 32-bit Hadoop installed on a 64-bit Linux system; the native libraries were built for a different architecture, so they cannot be loaded.
Tragedy: the freshly installed cluster is unusable as-is.
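Before tearing anything down, it is cheap to script the check above. The helper below is my own sketch (not part of Hadoop): it classifies the output of `file` as 32-bit or 64-bit so you can compare it against what `uname -m` reports.

```shell
# lib_bits: classify `file` output as 32- or 64-bit
# (helper invented for this post, not part of Hadoop).
lib_bits() {
  case "$1" in
    *64-bit*) echo 64 ;;
    *32-bit*) echo 32 ;;
    *)        echo unknown ;;
  esac
}

# Real usage against the library from the article:
#   lib_bits "$(file /usr/local/hadoop/lib/native/libhadoop.so.1.0.0)"
# Demo with the exact string `file` printed above:
lib_bits "ELF 32-bit LSB shared object, Intel 80386"   # prints 32
echo "kernel reports: $(uname -m)"
```

If the two disagree (32-bit library on an x86_64 kernel), you are in the situation this post describes.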
[Preparing the build environment]
First re-point yum at a working repository:
I downloaded version-groups.conf from the web into /etc/yum/ and deleted the system's original repo files.
1. Install the required packages:
[root@db99 data]# yum install autoconf automake libtool cmake ncurses-devel openssl-devel gcc* --nogpgcheck
2. Install Maven: download and unpack it. (Do not use a version newer than this one; I have the tarball saved.)
http://maven.apache.org/download.cgi //download the matching archive
apache-maven-3.2.1-bin.tar
[root@db99 ~]# tar -xvf apache-maven-3.2.1-bin.tar
[root@db99 ~]# ln -s /usr/local/apache-maven-3.2.1/ /usr/local/maven
[root@db99 local]# vim /etc/profile //add to the environment variables
export MAVEN_HOME=/usr/local/maven
export PATH=$MAVEN_HOME/bin:$PATH
3. Install protobuf (do not change the version; I have the tarball saved):
https://code.google.com/p/protobuf/downloads/detail?name=protobuf-2.5.0.tar.gz
Download protobuf-2.5.0.tar.gz and unpack it.
[root@db99 protobuf-2.5.0]# pwd
/root/protobuf-2.5.0
[root@db99 protobuf-2.5.0]# ./configure --prefix=/usr/local/protoc/
[root@db99 protobuf-2.5.0]# make
[root@db99 protobuf-2.5.0]# make check
[root@db99 protobuf-2.5.0]# make install
Run protoc --version from the install's bin directory:
libprotoc 2.5.0
The installation succeeded.
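Hadoop 2.2's build requires protobuf 2.5.x, so a tiny guard like the one below (a helper I made up for this post) catches a wrong protoc before you waste an hour in Maven:

```shell
# require_protoc_25: accept only a "libprotoc 2.5.x" version string
# (helper invented for this post; Hadoop 2.2 expects protobuf 2.5).
require_protoc_25() {
  case "$1" in
    "libprotoc 2.5."*) echo ok ;;
    *) echo "wrong protoc: $1" ;;
  esac
}

# Real usage: require_protoc_25 "$(/usr/local/protoc/bin/protoc --version)"
require_protoc_25 "libprotoc 2.5.0"   # prints ok
```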
Add the environment variables:
vi /etc/profile
export MAVEN_HOME=/usr/local/maven
export JAVA_HOME=/usr/java/latest
export HADOOP_HOME=/usr/local/hadoop
export PATH=.:/usr/local/protoc/bin:$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH
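With /etc/profile edited and re-sourced, a quick loop confirms every tool the build needs actually resolves. check_tool is just a sketch of mine; once the paths above are in effect, mvn and protoc should report ok:

```shell
# check_tool: report whether a command is resolvable on PATH
# (sketch for this post; run after `. /etc/profile`).
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: ok"
  else
    echo "$1: MISSING"
  fi
}

for t in gcc cmake mvn protoc java; do
  check_tool "$t"
done
```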
4. Compile Hadoop (unpacking the Hadoop source tarball gives the following layout):
[root@db99 release-2.2.0]# pwd
/data/release-2.2.0
[root@db99 release-2.2.0]# ls
BUILDING.txt hadoop-common-project hadoop-maven-plugins hadoop-tools
dev-support hadoop-dist hadoop-minicluster hadoop-yarn-project
hadoop-assemblies hadoop-hdfs-project hadoop-project pom.xml
hadoop-client hadoop-mapreduce-project hadoop-project-dist
[root@db99 release-2.2.0]# mvn package -Pdist,native -DskipTests -Dtar   (this is the Maven build command; read on before rushing to run it)
.............. The build takes quite a long time, roughly an hour.
If you hit an error like the following:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile (default-testCompile) on project hadoop-auth: Compilation failure: Compilation failure:
[ERROR] /home/hduser/hadoop-2.2.0-src/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[88,11] error: cannot access AbstractLifeCycle
[ERROR] class file for org.mortbay.component.AbstractLifeCycle not found
[ERROR] /home/hduser/hadoop-2.2.0-src/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[96,29] error: cannot access LifeCycle
[ERROR] class file for org.mortbay.component.LifeCycle not found
[ERROR] /home/hduser/hadoop-2.2.0-src/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[98,10] error: cannot find symbol
[ERROR] symbol: method start()
[ERROR] location: variable server of type Server
[ERROR] /home/hduser/hadoop-2.2.0-src/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[104,12] error: cannot find symbol
[ERROR] -> [Help 1]
You need to edit hadoop-common-project/hadoop-auth/pom.xml under the source tree:
[root@db99 release-2.2.0]# vim /data/release-2.2.0/hadoop-common-project/hadoop-auth/pom.xml
Add the following after line 55:
56 <dependency>
57 <groupId>org.mortbay.jetty</groupId>
58 <artifactId>jetty-util</artifactId>
59 <scope>test</scope>
60 </dependency>
Save, quit, and re-run the build.
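Before kicking off the long rebuild it is worth verifying the edit actually landed. A one-line grep wrapped in a helper (my own invention, not part of the build) does it:

```shell
# has_jetty_util: check whether a pom.xml already declares the jetty-util
# dependency (helper invented for this post).
has_jetty_util() {
  grep -q '<artifactId>jetty-util</artifactId>' "$1" 2>/dev/null \
    && echo present || echo missing
}

# Real usage per the article's path:
#   has_jetty_util /data/release-2.2.0/hadoop-common-project/hadoop-auth/pom.xml
```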
The build finally succeeds:
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-minicluster ---
[INFO] Building jar: /data/release-2.2.0/hadoop-minicluster/target/hadoop-minicluster-2.2.0-javadoc.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................ SUCCESS [ 1.386 s]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [ 1.350 s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [ 2.732 s]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [ 0.358 s]
[INFO] Apache Hadoop Project Dist POM .................... SUCCESS [ 2.048 s]
[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [ 3.450 s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [ 16.114 s]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [ 13.317 s]
[INFO] Apache Hadoop Common .............................. SUCCESS [05:22 min]
[INFO] Apache Hadoop NFS ................................. SUCCESS [ 16.925 s]
[INFO] Apache Hadoop Common Project ...................... SUCCESS [ 0.044 s]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [02:51 min]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [ 28.601 s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [ 27.589 s]
[INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [ 3.966 s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [ 0.044 s]
[INFO] hadoop-yarn ....................................... SUCCESS [ 52.846 s]
[INFO] hadoop-yarn-api ................................... SUCCESS [ 41.700 s]
[INFO] hadoop-yarn-common ................................ SUCCESS [ 25.945 s]
[INFO] hadoop-yarn-server ................................ SUCCESS [ 0.105 s]
[INFO] hadoop-yarn-server-common ......................... SUCCESS [ 8.436 s]
[INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [ 15.659 s]
[INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [ 3.647 s]
[INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [ 12.495 s]
[INFO] hadoop-yarn-server-tests .......................... SUCCESS [ 0.684 s]
[INFO] hadoop-yarn-client ................................ SUCCESS [ 5.266 s]
[INFO] hadoop-yarn-applications .......................... SUCCESS [ 0.102 s]
[INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [ 2.666 s]
[INFO] hadoop-mapreduce-client ........................... SUCCESS [ 0.093 s]
[INFO] hadoop-mapreduce-client-core ...................... SUCCESS [ 20.092 s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [ 2.783 s]
[INFO] hadoop-yarn-site .................................. SUCCESS [ 0.225 s]
[INFO] hadoop-yarn-project ............................... SUCCESS [ 36.636 s]
[INFO] hadoop-mapreduce-client-common .................... SUCCESS [ 16.645 s]
[INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [ 3.058 s]
[INFO] hadoop-mapreduce-client-app ....................... SUCCESS [ 9.441 s]
[INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [ 5.482 s]
[INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [ 7.615 s]
[INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [ 2.473 s]
[INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [ 6.183 s]
[INFO] hadoop-mapreduce .................................. SUCCESS [ 6.454 s]
[INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [ 4.802 s]
[INFO] Apache Hadoop Distributed Copy .................... SUCCESS [ 27.635 s]
[INFO] Apache Hadoop Archives ............................ SUCCESS [ 2.850 s]
[INFO] Apache Hadoop Rumen ............................... SUCCESS [ 6.092 s]
[INFO] Apache Hadoop Gridmix ............................. SUCCESS [ 4.742 s]
[INFO] Apache Hadoop Data Join ........................... SUCCESS [ 3.155 s]
[INFO] Apache Hadoop Extras .............................. SUCCESS [ 3.317 s]
[INFO] Apache Hadoop Pipes ............................... SUCCESS [ 9.791 s]
[INFO] Apache Hadoop Tools Dist .......................... SUCCESS [ 2.680 s]
[INFO] Apache Hadoop Tools ............................... SUCCESS [ 0.036 s]
[INFO] Apache Hadoop Distribution ........................ SUCCESS [ 20.765 s]
[INFO] Apache Hadoop Client .............................. SUCCESS [ 6.476 s]
[INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [ 0.215 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 16:32 min
[INFO] Finished at: 2014-07-18T01:18:24+08:00
[INFO] Final Memory: 117M/314M
[INFO] ------------------------------------------------------------------------
The compiled distribution now sits in the hadoop-dist/target/hadoop-2.2.0/ directory under the source tree.
Copy hadoop-2.2.0 into the install directory /usr/local/, redo its configuration files, reformat the NameNode, and start the cluster.
At this point the warning is gone and the cluster is usable:
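The copy-in step can be scripted. swap_native below is a hedged sketch (the function name and backup suffix are my own): it backs up the old 32-bit native/ directory before copying in the freshly built one, so you can roll back if anything misbehaves.

```shell
# swap_native SRC DST: back up DST's lib/native and replace it with SRC's
# freshly built one (sketch for this post; names invented here).
swap_native() {
  src=$1; dst=$2
  mv "$dst/lib/native" "$dst/lib/native.32bit.bak" &&
  cp -r "$src/lib/native" "$dst/lib/"
}

# Real usage per the article's paths:
#   swap_native /data/release-2.2.0/hadoop-dist/target/hadoop-2.2.0 /usr/local/hadoop
#   file /usr/local/hadoop/lib/native/libhadoop.so.1.0.0   # should now say 64-bit
```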
[root@db96 hadoop]# hadoop dfs -put ./in
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
put: `.': No such file or directory
[root@db96 hadoop]# file /usr/local/hadoop/lib/native/libhadoop.so.1.0.0
/usr/local/hadoop/lib/native/libhadoop.so.1.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped
Testing: upload a file, download a file, and view the uploaded file's contents:
[root@db96 ~]# cat wwn.txt
# This is a text txt
# by coco
# 2014-07-18
[root@db96 ~]# hdfs dfs -mkdir /test
[root@db96 ~]# hdfs dfs -put wwn.txt /test
[root@db96 ~]# hdfs dfs -cat /test/wwn.txt
[root@db96 ~]# hdfs dfs -get /test/wwn.txt /tmp
[root@db96 hadoop]# hdfs dfs -rm /test/wwn.txt
[root@db96 tmp]# ll
total 6924
-rw-r--r-- 1 root root 70 Jul 18 11:50 wwn.txt
[root@db96 ~]# hadoop dfs -ls /test
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Found 2 items
-rw-r--r-- 2 root supergroup 6970105 2014-07-18 11:44 /test/gc_comweight.txt
-rw-r--r-- 2 root supergroup 59 2014-07-18 14:56 /test/hello.txt
At this point our HDFS file system works normally.
Following the same steps used to compile Hadoop 2.2.0 on 64-bit CentOS, I also compiled hadoop-2.2 on RHEL 6.2:
cd hadoop-2.2.0-src
mvn package -DskipTests -Pdist,native -Dtar
About 10 minutes in, it fails with:
Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:jar (module-javadocs) on project hadoop-project: Execution ……………………
Recompile with the following command instead:
mvn package -DskipTests -Pdist,native -Dtar -Dmaven.javadoc.skip=true
About 40 minutes later, the build completes:
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-minicluster ---
[INFO] No sources in project. Archive not created.
[INFO]
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ hadoop-minicluster ---
[INFO]
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-minicluster ---
[INFO] Skipping javadoc generation
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................ SUCCESS [ 2.978 s]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [ 8.844 s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [01:58 min]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [ 0.616 s]
[INFO] Apache Hadoop Project Dist POM .................... SUCCESS [ 43.968 s]
[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [ 46.198 s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [16:14 min]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [ 11.632 s]
[INFO] Apache Hadoop Common .............................. SUCCESS [04:25 min]
[INFO] Apache Hadoop NFS ................................. SUCCESS [ 54.758 s]
[INFO] Apache Hadoop Common Project ...................... SUCCESS [ 0.055 s]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [02:11 min]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [ 11.402 s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [01:19 min]
[INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [ 3.266 s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [ 0.032 s]
[INFO] hadoop-yarn ....................................... SUCCESS [01:21 min]
[INFO] hadoop-yarn-api ................................... SUCCESS [ 43.147 s]
[INFO] hadoop-yarn-common ................................ SUCCESS [01:00 min]
[INFO] hadoop-yarn-server ................................ SUCCESS [ 0.084 s]
[INFO] hadoop-yarn-server-common ......................... SUCCESS [ 4.851 s]
[INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [ 42.176 s]
[INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [ 0.837 s]
[INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [ 5.910 s]
[INFO] hadoop-yarn-server-tests .......................... SUCCESS [ 1.097 s]
[INFO] hadoop-yarn-client ................................ SUCCESS [ 1.100 s]
[INFO] hadoop-yarn-applications .......................... SUCCESS [ 0.083 s]
[INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [ 1.529 s]
[INFO] hadoop-mapreduce-client ........................... SUCCESS [ 0.110 s]
[INFO] hadoop-mapreduce-client-core ...................... SUCCESS [ 7.229 s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [ 0.806 s]
[INFO] hadoop-yarn-site .................................. SUCCESS [ 0.310 s]
[INFO] hadoop-yarn-project ............................... SUCCESS [ 18.240 s]
[INFO] hadoop-mapreduce-client-common .................... SUCCESS [ 3.875 s]
[INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [ 0.911 s]
[INFO] hadoop-mapreduce-client-app ....................... SUCCESS [ 2.599 s]
[INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [ 1.372 s]
[INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [ 4.471 s]
[INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [ 0.676 s]
[INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [ 1.054 s]
[INFO] hadoop-mapreduce .................................. SUCCESS [ 7.665 s]
[INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [ 1.322 s]
[INFO] Apache Hadoop Distributed Copy .................... SUCCESS [ 21.522 s]
[INFO] Apache Hadoop Archives ............................ SUCCESS [ 0.677 s]
[INFO] Apache Hadoop Rumen ............................... SUCCESS [ 1.394 s]
[INFO] Apache Hadoop Gridmix ............................. SUCCESS [ 2.027 s]
[INFO] Apache Hadoop Data Join ........................... SUCCESS [ 0.727 s]
[INFO] Apache Hadoop Extras .............................. SUCCESS [ 1.062 s]
[INFO] Apache Hadoop Pipes ............................... SUCCESS [ 9.102 s]
[INFO] Apache Hadoop Tools Dist .......................... SUCCESS [ 4.286 s]
[INFO] Apache Hadoop Tools ............................... SUCCESS [ 0.024 s]
[INFO] Apache Hadoop Distribution ........................ SUCCESS [ 8.841 s]
[INFO] Apache Hadoop Client .............................. SUCCESS [ 16.353 s]
[INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [ 0.212 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 35:15 min
[INFO] Finished at: 2014-11-25T22:43:39+08:00
[INFO] Final Memory: 95M/368M
[INFO] ------------------------------------------------------------------------