
Downgrading HBase from 0.94.16 to 0.94.2

 

We have a cluster of about 25 nodes running hbase-0.94.2, and it had been running fine for weeks. Two weeks ago I upgraded it to 0.94.16, and afterwards Ganglia showed some bad network behavior, as in the graph below:



 

Meanwhile, some strange things showed up in the master log:

2014-03-07 00:01:59,782 DEBUG [IPC Server handler 8 on 60000] FSTableDescriptors.java:169 Exception during readTableDecriptor. Current table name = .archive
org.apache.hadoop.hbase.TableInfoMissingException: No .tableinfo file under hdfs://hd03:54310/hbase/.archive
	at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptor(FSTableDescriptors.java:411)

 

After reviewing HBase's source and hbase-default.xml, I figured it out:

First, this is ONLY a tip; it is logged at DEBUG level to tell you it is not important.

Second, what is it?

 

OK: in HBase 0.94.16 the master creates '.archive' and '.tmp' dirs under the hbase.root dir. The former is used for archiving HFiles, as you can see in hbase-default.xml:

  <property>
    <name>hbase.table.archive.directory</name>
    <value>.archive</value>
    <description>Per-table directory name under which to backup files for a
      table. Files are moved to the same directories as they would be under the
      table directory, but instead are just one level lower (under
      table/.archive/... rather than table/...). Currently only applies to HFiles.</description>
  </property>

But I want to point out that this property is not actually used in the code. Look at HMaster#startServiceThreads():

   //start the hfile archive cleaner thread
    Path archiveDir = HFileArchiveUtil.getArchivePath(conf);
    this.hfileCleaner = new HFileCleaner(cleanerInterval, this, conf, getMasterFileSystem()
        .getFileSystem(), archiveDir);
    Threads.setDaemonThreadRunning(hfileCleaner.getThread(), n + ".archivedHFileCleaner");

and archiveDir is exactly the '.archive' dir in HDFS.
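As the snippet above suggests, the archive path is not read from `hbase.table.archive.directory` but simply resolved against hbase.rootdir. Here is a minimal sketch of what `HFileArchiveUtil.getArchivePath` effectively does (a simplified illustration using `java.nio` in place of Hadoop's `Path`, not the actual HBase code):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class ArchivePathSketch {
    // Hypothetical stand-in for HFileArchiveUtil.getArchivePath(conf):
    // append the fixed ".archive" directory name to the hbase.rootdir.
    static Path getArchivePath(Path hbaseRootDir) {
        return hbaseRootDir.resolve(".archive");
    }

    public static void main(String[] args) {
        System.out.println(getArchivePath(Paths.get("/hbase")));
    }
}
```

Running this prints the same `/hbase/.archive` location that the exception above complains about.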

Go to HFileArchiver.java and you will get a clearer picture of the details:

  /**
   * Resolve any conflict with an existing archive file via timestamp-append
   * renaming of the existing file and then archive the passed in files.
   * @param fs {@link FileSystem} on which to archive the files
   * @param baseArchiveDir base archive directory to store the files. If any of
   *          the files to archive are directories, will append the name of the
   *          directory to the base archive directory name, creating a parallel
   *          structure.
   * @param toArchive files/directories that need to be archived
   * @param start time the archiving started - used for resolving archive
   *          conflicts.
   * @return the list of failed to archive files.
   * @throws IOException if an unexpected file operation exception occurred
   */
  private static List<File> resolveAndArchive(FileSystem fs, Path baseArchiveDir,
      Collection<File> toArchive, long start) throws IOException {
    // short circuit if no files to move
    if (toArchive.size() == 0) return Collections.emptyList();

    LOG.debug("moving files to the archive directory: " + baseArchiveDir);

    // make sure the archive directory exists
    if (!fs.exists(baseArchiveDir)) {
      if (!HBaseFileSystem.makeDirOnFileSystem(fs, baseArchiveDir)) {
        throw new IOException("Failed to create the archive directory:" + baseArchiveDir
            + ", quitting archive attempt.");
      }
      LOG.debug("Created archive directory:" + baseArchiveDir);
    }

    List<File> failures = new ArrayList<File>();
    String startTime = Long.toString(start);
    for (File file : toArchive) {
      // if its a file archive it
      try {
        LOG.debug("Archiving:" + file);
        if (file.isFile()) {
          // attempt to archive the file
          if (!resolveAndArchiveFile(baseArchiveDir, file, startTime)) {
            LOG.warn("Couldn't archive " + file + " into backup directory: " + baseArchiveDir);
            failures.add(file);
          }
        } else {
          // otherwise its a directory and we need to archive all files
          LOG.debug(file + " is a directory, archiving children files");
          // so we add the directory name to the one base archive
          Path parentArchiveDir = new Path(baseArchiveDir, file.getName());
          // and then get all the files from that directory and attempt to
          // archive those too
          Collection<File> children = file.getChildren();
          failures.addAll(resolveAndArchive(fs, parentArchiveDir, children, start));
        }
      } catch (IOException e) {
        LOG.warn("Failed to archive file: " + file, e);
        failures.add(file);
      }
    }
    return failures;
  }
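The "timestamp-append renaming" mentioned in the javadoc above can be sketched with plain `java.nio.file` calls. This is a simplified illustration of the idea, not HBase's actual `resolveAndArchiveFile`: if a file of the same name already exists in the archive, the old copy is renamed with the archiving start timestamp appended before the new file is moved in.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ArchiveConflictSketch {
    // Simplified analog of resolveAndArchiveFile: move 'file' into
    // 'archiveDir'; if an archived copy with the same name exists,
    // rename it out of the way as <name>.<startTime> first.
    static boolean archiveFile(Path archiveDir, Path file, String startTime)
            throws IOException {
        Path dest = archiveDir.resolve(file.getFileName());
        if (Files.exists(dest)) {
            // resolve the conflict: keep the old copy under name.<timestamp>
            Path backup = archiveDir.resolve(dest.getFileName() + "." + startTime);
            Files.move(dest, backup);
        }
        Files.move(file, dest);
        return Files.exists(dest);
    }
}
```

So repeated archiving of a file with the same name never overwrites an earlier archived copy; each conflict leaves a timestamped backup behind.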

And the class-level description:

/**
 * Utility class to handle the removal of HFiles (or the respective {@link StoreFile StoreFiles})
 * for a HRegion from the {@link FileSystem}. The hfiles will be archived or deleted, depending on
 * the state of the system.
 */
public class HFileArchiver {

 

But 0.94.2 doesn't know about these directories that the newer version left under hbase.root, so it complains about them.

After removing them manually, the cluster is running fine!

So unless you need a fix for some critical bugs, don't do a useless upgrade.
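The manual cleanup is just a recursive delete of the leftover directories; on HDFS that is something like `hadoop fs -rmr /hbase/.archive` (path taken from the log above, adjust to your hbase.rootdir). As a local-filesystem illustration of the same operation, in stdlib Java:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class RemoveDirSketch {
    // Recursively delete a directory tree, children before parents --
    // the local analog of `hadoop fs -rmr <dir>`.
    static void deleteRecursively(Path dir) throws IOException {
        try (Stream<Path> walk = Files.walk(dir)) {
            walk.sorted(Comparator.reverseOrder())
                .forEach(p -> p.toFile().delete());
        }
    }
}
```

Needless to say, only do this once you are sure the archived HFiles under the directory are no longer needed.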

 

Here is a performance evaluation of these versions:

hbase PerformanceEvaluation benchmark - 0.94.2 VS 0.94.16 VS 0.96

 

 
