vvvvvvvvvvvv config vvvvvvvvvvvvvvv
Install the JDK:
sudo -s ./jdk.bin
Set environment variables in:
/etc/profile   # global
~/.profile     # per-user
# optional: create a dedicated group and user
sudo addgroup hadoopgrp
sudo adduser --ingroup hadoopgrp hadoop   # password: hadoop
# Switch to the hadoop user and make the machine trust itself (localhost).
# THIS STEP LETS THE MASTER ssh TO THE SLAVES WITHOUT A PASSWORD, SO THE MASTER'S
# PUBLIC RSA KEY MUST BE COPIED TO THE SLAVES; DO THE REVERSE AS WELL IF THE
# SLAVES ALSO NEED TO ssh TO THE MASTER.
su - hadoop
ssh-keygen -t rsa -P ""
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
# optional: give the hadoop user the rights to modify files under /usr/local/hadoop-*.*
sudo chown -R hadoop:hadoopgrp hadoop-0.20.2
## test the passwordless connection
ssh localhost   # "cannot open port 22" usually means sshd is not installed; install it with: sudo apt-get install openssh-server
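The key steps above can be wrapped in a small helper. One detail the original omits: sshd refuses key authentication when `~/.ssh` or `authorized_keys` has loose permissions, so the sketch below (a hypothetical helper, assuming OpenSSH defaults) also tightens the modes:

```shell
# Append a public key to authorized_keys and tighten permissions.
# sshd rejects key auth if ~/.ssh or authorized_keys is group/world writable.
setup_passwordless_ssh() {
    ssh_dir="$1"      # normally $HOME/.ssh
    pubkey="$2"       # normally $ssh_dir/id_rsa.pub
    mkdir -p "$ssh_dir"
    chmod 700 "$ssh_dir"
    cat "$pubkey" >> "$ssh_dir/authorized_keys"
    chmod 600 "$ssh_dir/authorized_keys"
}
```

After running it for the hadoop user, `ssh localhost` should log in without prompting.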
#modify config files
#core-site.xml
<property>
<name>hadoop.tmp.dir</name>
<value>/your/path/to/hadoop/tmp/dir/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
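Hadoop expands `${user.name}` in `hadoop.tmp.dir` from the JVM's `user.name` system property, so each user gets a private temp directory. The sed substitution below only illustrates that expansion (it is not what Hadoop itself runs); the path is the same placeholder used above:

```shell
# Illustrative expansion of ${user.name} for a user named "hadoop".
template='/your/path/to/hadoop/tmp/dir/hadoop-${user.name}'
expanded=$(printf '%s\n' "$template" | sed "s/\${user.name}/hadoop/")
echo "$expanded"    # /your/path/to/hadoop/tmp/dir/hadoop-hadoop
```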
#mapred-site.xml
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
#hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
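With `dfs.replication` set to 1, every block has a single copy, which is only sane on a one-node setup. A quick way to sanity-check what the file actually configures is to pull the value out with sed; the sketch below runs against a throwaway copy of the snippet rather than the real conf file:

```shell
# Extract the configured replication factor from an hdfs-site.xml fragment.
conf_sample=$(mktemp)
cat > "$conf_sample" <<'EOF'
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
EOF
replication=$(sed -n 's/.*<value>\(.*\)<\/value>.*/\1/p' "$conf_sample")
echo "dfs.replication = $replication"   # dfs.replication = 1
```

On a running cluster, `hadoop fsck / -files -blocks` reports the replication each file actually has.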
hadoop namenode -format
bin/start-all.sh
# verify the daemons are running and which ports they listen on
jps
sudo netstat -plten | grep java
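On a healthy pseudo-distributed 0.20 node, `jps` should list five daemons: NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker. A small check (hypothetical helper; the jps output is passed in as a string so it can be exercised without a live cluster):

```shell
# Print the names of any Hadoop 0.20 daemons missing from the given jps output.
daemons_missing() {
    jps_output="$1"
    for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
        case "$jps_output" in
            *"$d"*) ;;                  # daemon present, say nothing
            *) echo "$d" ;;             # daemon missing, report it
        esac
    done
}
```

Usage: `daemons_missing "$(jps)"` prints nothing when everything came up.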
#demo run command:
hadoop jar hadoop-0.20.2-examples.jar wordcount input/data-file output/wc
bin/stop-all.sh
^^^^^^^^^ config ^^^^^^^^^^^^
Sample output, 3 input files (took about 32 s):
hadoop@leibnitz-laptop:~/hadoop/hadoop-0.20.2$ hadoop jar hadoop-0.20.2-examples.jar wordcount input output/wordcount
11/02/24 23:45:58 INFO input.FileInputFormat: Total input paths to process : 3
11/02/24 23:45:58 INFO mapred.JobClient: Running job: job_201102242334_0001
11/02/24 23:45:59 INFO mapred.JobClient: map 0% reduce 0%
11/02/24 23:46:13 INFO mapred.JobClient: map 66% reduce 0%
11/02/24 23:46:19 INFO mapred.JobClient: map 100% reduce 0%
11/02/24 23:46:22 INFO mapred.JobClient: map 100% reduce 33%
11/02/24 23:46:28 INFO mapred.JobClient: map 100% reduce 100%
11/02/24 23:46:30 INFO mapred.JobClient: Job complete: job_201102242334_0001
11/02/24 23:46:30 INFO mapred.JobClient: Counters: 17
11/02/24 23:46:30 INFO mapred.JobClient: Job Counters
11/02/24 23:46:30 INFO mapred.JobClient: Launched reduce tasks=1
11/02/24 23:46:30 INFO mapred.JobClient: Launched map tasks=3
11/02/24 23:46:30 INFO mapred.JobClient: Data-local map tasks=3
11/02/24 23:46:30 INFO mapred.JobClient: FileSystemCounters
11/02/24 23:46:30 INFO mapred.JobClient: FILE_BYTES_READ=2214725
11/02/24 23:46:30 INFO mapred.JobClient: HDFS_BYTES_READ=3671479
11/02/24 23:46:30 INFO mapred.JobClient: FILE_BYTES_WRITTEN=3689100
11/02/24 23:46:30 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=880802
11/02/24 23:46:30 INFO mapred.JobClient: Map-Reduce Framework
11/02/24 23:46:30 INFO mapred.JobClient: Reduce input groups=82331
11/02/24 23:46:30 INFO mapred.JobClient: Combine output records=102317
11/02/24 23:46:30 INFO mapred.JobClient: Map input records=77931
11/02/24 23:46:30 INFO mapred.JobClient: Reduce shuffle bytes=1474279
11/02/24 23:46:30 INFO mapred.JobClient: Reduce output records=82331
11/02/24 23:46:30 INFO mapred.JobClient: Spilled Records=255947
11/02/24 23:46:30 INFO mapred.JobClient: Map output bytes=6076039
11/02/24 23:46:30 INFO mapred.JobClient: Combine input records=629167
11/02/24 23:46:30 INFO mapred.JobClient: Map output records=629167
11/02/24 23:46:30 INFO mapred.JobClient: Reduce input records=102317
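The counters above show the combiner earning its keep: 629,167 map output records were folded down to 102,317 reduce input records before the shuffle. Values like these can be scraped straight from the JobClient log; the sketch below uses a hypothetical `counter` helper over the two relevant lines from the run above:

```shell
# Extract a named counter value from JobClient log lines.
counter() {
    grep "$1=" | sed "s/.*$1=//"
}

# The two counter lines from the wordcount run above.
log='Map output records=629167
Combine output records=102317'
map_out=$(printf '%s\n' "$log" | counter "Map output records")
combine_out=$(printf '%s\n' "$log" | counter "Combine output records")
echo "combiner kept $combine_out of $map_out records"
```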