-
hadoop+zookeeper+hbase configuration
Why does the datanode report this error? regionserver.HRegionServer: Attempting connect to Master server at localhost.localdomain:60000
Meanwhile the master keeps printing: INFO master.ServerManager: Waiting on regionserver(s) to checkin
The configuration follows http://yiihsia.iteye.com/blog/1039426
I'm quite frustrated — thanks for any help.
Versions: hadoop-0.20.2, hbase-0.90.4, zookeeper-3.3.3
Question update: the answers so far are all over the place..... October 10, 2011, 21:36
4 answers
-
Environment preparation:
1. Install VMware on Windows.
2. Create three Fedora 14 Linux VMs:
   m201 192.168.0.201 (Namenode)
   s202 192.168.0.202 (Datanode)
   s203 192.168.0.203 (Datanode)
3. Download the required software into /root/install:
   jdk-6u23-linux-i586-rpm.bin
   hadoop-0.20.2.tar.gz
   zookeeper-3.3.3.tar.gz
   hbase-0.90.2.tar.gz

Install the JDK (repeat on s202 and s203):
1. Run jdk-6u23-linux-i586-rpm.bin; the JDK is installed under /usr/java/jdk1.6.0_23.
2. Set the Java environment variables by appending to /etc/profile:
   export JAVA_HOME=/usr/java/jdk1.6.0_23/
   export JRE_HOME=/usr/java/jdk1.6.0_23/jre/
   export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
   export PATH=$JAVA_HOME/bin:$PATH
3. Source /etc/profile to make it take effect.

Set up SSH (so m201 can log in to s202 and s203 without a password). From the official docs:
   Now check that you can ssh to the localhost without a passphrase:
   $ ssh localhost
   If you cannot ssh to localhost without a passphrase, execute the following commands:
   $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
   $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Then append m201's id_dsa.pub to the authorized_keys file on s202 and s203.

Install Hadoop:
1. In /root/install, unpack the tarball with: tar -zxvf hadoop-0.20.2.tar.gz. This creates the hadoop-0.20.2 directory.
2. Go to /root/install/hadoop-0.20.2/conf.
3. Edit the masters file (master IP):
   192.168.0.201
4. Edit the slaves file (slave IPs):
   192.168.0.202
   192.168.0.203
5. Edit hadoop-env.sh (set the JDK path):
   export JAVA_HOME=/usr/java/jdk1.6.0_23
6. Edit core-site.xml and add inside <configuration>:
   <property>
     <name>hadoop.tmp.dir</name>
     <value>/hadoopdata</value>
     <description>A base for other temporary directories.</description>
   </property>
   <property>
     <name>fs.default.name</name>
     <value>hdfs://m201:9000</value>
     <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to determine the host, port, etc.
     for a filesystem.</description>
   </property>
7. Edit hdfs-site.xml and add inside <configuration>:
   <property>
     <name>dfs.replication</name>
     <value>1</value>
   </property>
8. Edit mapred-site.xml and add inside <configuration>:
   <property>
     <name>mapred.job.tracker</name>
     <value>m201:9001</value>
     <description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task.</description>
   </property>
9. Set environment variables in /etc/profile:
   export HADOOP_HOME=/root/install/hadoop-0.20.2
   export PATH=$HADOOP_HOME/bin:$PATH
   Do the same on s202 and s203, then source /etc/profile to make it take effect.
10. Edit /etc/hosts and add:
   192.168.0.201 m201
   192.168.0.202 s202
   192.168.0.203 s203
   Do the same on s202 and s203.
11. Copy the /root/install/hadoop-0.20.2 directory to s202 and s203 (e.g. with scp -r <source> <host>:<target>).
12. Format the HDFS filesystem:
   /root/install/hadoop-0.20.2/bin/hadoop namenode -format
13. Start and stop the services with:
   /root/install/hadoop-0.20.2/bin/start-all.sh
   /root/install/hadoop-0.20.2/bin/stop-all.sh
Hadoop installation is complete. Open http://192.168.0.201:50070/dfshealth.jsp to check whether Hadoop is running.

Install ZooKeeper (on m201):
1. Create a zookeeper directory under /root/install/hadoop-0.20.2:
   cd /root/install/hadoop-0.20.2
   mkdir zookeeper
2. Unpack ZooKeeper in /root/install:
   cd /root/install
   tar -zxvf zookeeper-3.3.3.tar.gz
3. Move it into /root/install/hadoop-0.20.2/zookeeper:
   cd /root/install/zookeeper-3.3.3
   mv * /root/install/hadoop-0.20.2/zookeeper
4. Configure ZooKeeper:
   1) Create zoo.cfg:
      cd /root/install/hadoop-0.20.2/zookeeper/conf
      cp zoo_sample.cfg zoo.cfg
   2) Edit zoo.cfg; the complete file is:
      # The number of milliseconds of each tick
      tickTime=2000
      # The number of ticks that the initial
      # synchronization phase can take
      initLimit=10
      # The number of ticks that can pass between
      # sending a request and getting an acknowledgement
      syncLimit=5
      # the directory where the snapshot is stored.
      dataDir=/root/install/hadoop-0.20.2/zookeeper/zookeeper-data   # (newly added)
      dataLogDir=/root/install/hadoop-0.20.2/zookeeper/logs          # (newly added)
      # the port at which the clients will connect
      clientPort=2181
      server.1=m201:2888:3888   # (newly added)
      server.2=s202:2888:3888   # (newly added)
      server.3=s203:2888:3888   # (newly added)
      The lines marked "(newly added)" are the ones you add to the file.
   3) Create the zookeeper-data directory:
      cd /root/install/hadoop-0.20.2/zookeeper/
      mkdir zookeeper-data
   4) Create the myid file:
      cd /root/install/hadoop-0.20.2/zookeeper/zookeeper-data
      vi myid
      The content of myid is: 1 (save with :x).
5. Copy the /root/install/hadoop-0.20.2/zookeeper directory to s202 and s203 (e.g. with scp -r <source> <host>:<target>).
6. On s202, change the content of myid to: 2
7. On s203, change the content of myid to: 3
8. Start ZooKeeper (run the same command on m201, s202 and s203):
   /root/install/hadoop-0.20.2/zookeeper/bin/zkServer.sh start
   /root/install/hadoop-0.20.2/zookeeper/bin/zkServer.sh stop   (to stop)

Install HBase (on m201):
1. Create an hbase directory under /root/install/hadoop-0.20.2:
   cd /root/install/hadoop-0.20.2
   mkdir hbase
2. Unpack HBase in /root/install:
   cd /root/install
   tar -zxvf hbase-0.90.2.tar.gz
3. Move it into /root/install/hadoop-0.20.2/hbase:
   cd /root/install/hbase-0.90.2
   mv * /root/install/hadoop-0.20.2/hbase
4. Configure HBase:
   1) Edit /etc/profile and add:
      export HBASE_HOME=/root/install/hadoop-0.20.2/hbase
      export PATH=$PATH:$HBASE_HOME/bin
      Do the same on s202 and s203, then source /etc/profile to make it take effect.
   2) Edit hbase-site.xml:
      cd /root/install/hadoop-0.20.2/hbase/conf
      vi hbase-site.xml
      Add inside <configuration>:
      <property>
        <name>hbase.rootdir</name>
        <value>hdfs://m201:9000/hasexx</value>
        <description>The directory shared by region servers.</description>
      </property>
      <property>
        <name>hbase.master.port</name>
        <value>60000</value>
      </property>
      <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
        <description>The mode the cluster will be in.
        Possible values are
          false: standalone and pseudo-distributed setups with managed Zookeeper
          true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
        </description>
      </property>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
      <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/root/install/hadoop-0.20.2/zookeeper</value>
        <description>Property from ZooKeeper's config zoo.cfg. The directory where the snapshot is stored.</description>
      </property>
      <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
        <description>Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect.</description>
      </property>
      <property>
        <name>hbase.zookeeper.quorum</name>
        <value>m201,s202,s203</value>
        <description>Comma separated list of servers in the ZooKeeper Quorum. For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com". By default this is set to localhost for local and pseudo-distributed modes of operation. For a fully-distributed setup, this should be set to a full list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh this is the list of servers which we will start/stop ZooKeeper on.</description>
      </property>
   3) Edit hbase-env.sh and add:
      export JAVA_HOME=/usr/java/jdk1.6.0_23/
      export HBASE_CLASSPATH=/root/install/hadoop-0.20.2/conf
      export HBASE_MANAGES_ZK=false
   4) Copy ZooKeeper's zoo.cfg into /root/install/hadoop-0.20.2/conf:
      cp /root/install/hadoop-0.20.2/zookeeper/conf/zoo.cfg /root/install/hadoop-0.20.2/conf/
   5) Edit the regionservers file; its complete content is:
      192.168.0.202
      192.168.0.203
   6) Copy Hadoop's hadoop-0.20.2-core.jar into HBase's lib directory and delete the original hadoop-core-0.20-append-r1056497.jar.
   7) Copy the /root/install/hadoop-0.20.2/hbase directory to s202 and s203 (e.g. with scp -r <source> <host>:<target>).
5. Start and stop the services with:
   /root/install/hadoop-0.20.2/hbase/bin/start-hbase.sh
   /root/install/hadoop-0.20.2/hbase/bin/stop-hbase.sh   (to stop)
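The per-host myid assignment in the ZooKeeper steps above is easy to get wrong when copying the zookeeper directory between machines. As a rough sketch (the hostname-to-id mapping and paths are taken from the guide; the helper function names are my own, and the commands are echoed rather than executed so it is safe to dry-run):

```shell
#!/bin/sh
# Map each host in the quorum to the myid the guide assigns it
# (m201 -> 1, s202 -> 2, s203 -> 3).
myid_for_host() {
  case "$1" in
    m201) echo 1 ;;
    s202) echo 2 ;;
    s203) echo 3 ;;
    *)    echo "unknown host: $1" >&2; return 1 ;;
  esac
}

# Print the commands that would push the zookeeper tree to each
# datanode and write that host's myid file.
push_commands() {
  for host in s202 s203; do
    echo "scp -r /root/install/hadoop-0.20.2/zookeeper root@$host:/root/install/hadoop-0.20.2/"
    echo "ssh root@$host 'echo $(myid_for_host "$host") > /root/install/hadoop-0.20.2/zookeeper/zookeeper-data/myid'"
  done
}

push_commands
```

Running it prints the scp and ssh commands so they can be inspected before being executed for real.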
Remove the previous installation and start over — it has to be uninstalled cleanly, otherwise it won't work. Sorry, I didn't read the question carefully just now! It's the same as with Oracle: uninstall cleanly and get the environment right! October 10, 2011, 21:47
-
A missing jar — that's why you can't connect to the database.
The JDBC one:

package org.seven.utils;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class DB {
    public static String driver = "org.gjt.mm.mysql.Driver";
    //public static String url = "jdbc:mysql://localhost:3306/demo?useUnicode=true&characterEncoding=UTF-8";
    public static String url = "jdbc:mysql://localhost:3306/demo";
    public static String username = "root";
    public static String password = "wj";

    public static Connection getConnection() {
        Connection conn = null;
        try {
            Class.forName(driver);
            conn = DriverManager.getConnection(url, username, password);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        } catch (SQLException e) {
            e.printStackTrace();
        }
        return conn;
    }

    // Close resources in reverse order of acquisition: ResultSet, Statement, Connection.
    public static void release(Connection conn, Statement stat, ResultSet rs) {
        try {
            if (rs != null) {
                rs.close();
            }
            if (stat != null) {
                stat.close();
            }
            if (conn != null) {
                conn.close();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
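Since the symptom described above is usually the MySQL connector jar missing from the classpath, a quick existence check can save time. This is only a sketch — the lib directory argument is a placeholder for wherever your application keeps its jars, and the function name is my own:

```shell
#!/bin/sh
# Report whether a MySQL connector jar is present in a lib directory.
# The directory argument is a placeholder for your application's lib dir.
check_mysql_jar() {
  lib_dir="$1"
  if ls "$lib_dir"/mysql-connector-java*.jar >/dev/null 2>&1; then
    echo "driver jar present"
  else
    echo "driver jar missing"
  fi
}

check_mysql_jar "./lib"
```

If the jar is missing, dropping mysql-connector-java into the lib directory (or WEB-INF/lib for a web app) and restarting usually fixes the ClassNotFoundException.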
The SSH (Spring/Struts/Hibernate) one:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:p="http://www.springframework.org/schema/p"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

    <!-- Connection pool -->
    <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
        <property name="driverClassName">
            <value>com.mysql.jdbc.Driver</value>
        </property>
        <property name="url">
            <value>jdbc:mysql://localhost:3306/s2sh</value>
        </property>
        <property name="username">
            <value>root</value>
        </property>
        <property name="password">
            <value>wj</value>
        </property>
    </bean>

    <!-- sessionFactory -->
    <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
        <property name="dataSource">
            <ref local="dataSource"/>
        </property>
        <property name="mappingResources">
            <list>
                <value>com/s2sh/mobel/User.hbm.xml</value>
            </list>
        </property>
        <property name="hibernateProperties">
            <props>
                <prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect</prop>
                <prop key="hibernate.show_sql">true</prop>
            </props>
        </property>
October 10, 2011, 21:44