Classic MapReduce (MapReduce 1) - Task execution

 
  • First, the tasktracker localizes the job JAR by copying it from the shared filesystem to the tasktracker's local filesystem. It also copies any files the application needs from the distributed cache to the local disk; a client-side sketch of how those cache files get registered follows the listing below.
  // In TaskTracker.java
  /**
   * Localize the job on this tasktracker. Specifically
   * <ul>
   * <li>Cleanup and create job directories on all disks</li>
   * <li>Download the credentials file</li>
   * <li>Download the job config file job.xml from the FS</li>
   * <li>Invokes the {@link TaskController} to do the rest of the job 
   * initialization</li>
   * </ul>
   *
   * @param t task whose job has to be localized on this TT
   * @param rjob the {@link RunningJob}
   * @param ttAddr the tasktracker's RPC address
   * @return the path to the job configuration to be used for all the tasks
   *         of this job as a starting point.
   * @throws IOException
   */
  Path initializeJob(final Task t, final RunningJob rjob, 
      final InetSocketAddress ttAddr)
  throws IOException, InterruptedException {
    final JobID jobId = t.getJobID();

    final Path jobFile = new Path(t.getJobFile());
    final String userName = t.getUser();
    final Configuration conf = getJobConf();

    // save local copy of JobToken file
    final String localJobTokenFile = localizeJobTokenFile(t.getUser(), jobId);
    synchronized (rjob) {
      rjob.ugi = UserGroupInformation.createRemoteUser(t.getUser());

      Credentials ts = TokenCache.loadTokens(localJobTokenFile, conf);
      Token<JobTokenIdentifier> jt = TokenCache.getJobToken(ts);
      if (jt != null) { //could be null in the case of some unit tests
        getJobTokenSecretManager().addTokenForJob(jobId.toString(), jt);
      }
      for (Token<? extends TokenIdentifier> token : ts.getAllTokens()) {
        rjob.ugi.addToken(token);
      }
    }

    FileSystem userFs = getFS(jobFile, jobId, conf);

    // Download the job.xml for this job from the system FS
    final Path localJobFile =
      localizeJobConfFile(new Path(t.getJobFile()), userName, userFs, jobId);

    /**
      * Now initialize the job via task-controller to do the rest of the
      * job-init. Do this within a doAs since the public distributed cache 
      * is also set up here.
      * To support potential authenticated HDFS accesses, we need the tokens
      */
    rjob.ugi.doAs(new PrivilegedExceptionAction<Object>() {
      public Object run() throws IOException, InterruptedException {
        try {
          final JobConf localJobConf = new JobConf(localJobFile);
          // Setup the public distributed cache
          TaskDistributedCacheManager taskDistributedCacheManager =
            getTrackerDistributedCacheManager()
           .newTaskDistributedCacheManager(jobId, localJobConf);
          rjob.distCacheMgr = taskDistributedCacheManager;
          taskDistributedCacheManager.setupCache(localJobConf,
            TaskTracker.getPublicDistributedCacheDir(),
            TaskTracker.getPrivateDistributedCacheDir(userName));

          // Set some config values
          localJobConf.set(JobConf.MAPRED_LOCAL_DIR_PROPERTY,
              getJobConf().get(JobConf.MAPRED_LOCAL_DIR_PROPERTY));
          if (conf.get("slave.host.name") != null) {
            localJobConf.set("slave.host.name", conf.get("slave.host.name"));
          }
          resetNumTasksPerJvm(localJobConf);
          localJobConf.setUser(t.getUser());

          // write back the config (this config will have the updates that the
          // distributed cache manager makes as well)
          JobLocalizer.writeLocalJobFile(localJobFile, localJobConf);
          taskController.initializeJob(t.getUser(), jobId.toString(), 
              new Path(localJobTokenFile), localJobFile, TaskTracker.this,
              ttAddr);
        } catch (IOException e) {
          LOG.warn("Exception while localization " + 
              StringUtils.stringifyException(e));
          throw e;
        } catch (InterruptedException ie) {
          LOG.warn("Exception while localization " + 
              StringUtils.stringifyException(ie));
          throw ie;
        }
        return null;
      }
    });
    //search for the conf that the initializeJob created
    //need to look up certain configs from this conf, like
    //the distributed cache, profiling, etc. ones
    Path initializedConf = lDirAlloc.getLocalPathToRead(getLocalJobConfFile(
           userName, jobId.toString()), getJobConf());
    return initializedConf;
  }
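The files pulled down by setupCache() above are whatever the client registered on the job before submission. As a rough client-side sketch (class name and paths are hypothetical, not taken from this walkthrough):

  // Client-side sketch: the cache entries registered here are what
  // TaskDistributedCacheManager.setupCache() later localizes onto the
  // tasktracker's disk. Paths and class name are hypothetical.
  import java.net.URI;
  import org.apache.hadoop.filecache.DistributedCache;
  import org.apache.hadoop.mapred.JobClient;
  import org.apache.hadoop.mapred.JobConf;

  public class CacheSetupSketch {
    public static void main(String[] args) throws Exception {
      JobConf conf = new JobConf(CacheSetupSketch.class);
      DistributedCache.addCacheFile(new URI("/share/lookup.dat#lookup"), conf);
      DistributedCache.addCacheArchive(new URI("/share/libs.zip"), conf);
      // ... set input/output paths, mapper and reducer, then:
      JobClient.runJob(conf);
    }
  }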
  • Second, it creates a local working directory for the task and un-jars the contents of the job JAR into this directory (a client-side sketch of how the jar gets set appears after the JobLocalizer listing below).
  // In DefaultTaskController.java
  /**
   * This routine initializes the local file system for running a job.
   * Details:
   * <ul>
   * <li>Copies the credentials file from the TaskTracker's private space to
   * the job's private space </li>
   * <li>Creates the job work directory and set 
   * {@link TaskTracker#JOB_LOCAL_DIR} in the configuration</li>
   * <li>Downloads the job.jar, unjars it, and updates the configuration to 
   * reflect the localized path of the job.jar</li>
   * <li>Creates a base JobConf in the job's private space</li>
   * <li>Sets up the distributed cache</li>
   * <li>Sets up the user logs directory for the job</li>
   * </ul>
   * This method must be invoked in the access control context of the job owner 
   * user. This is because the distributed cache is also setup here and the 
   * access to the hdfs files requires authentication tokens in case where 
   * security is enabled.
   * @param user the user in question (the job owner)
   * @param jobid the ID of the job in question
   * @param credentials the path to the credentials file that the TaskTracker
   * downloaded
   * @param jobConf the path to the job configuration file that the TaskTracker
   * downloaded
   * @param taskTracker the connection to the task tracker
   * @throws IOException
   * @throws InterruptedException
   */
  @Override
  public void initializeJob(String user, String jobid, 
                            Path credentials, Path jobConf, 
                            TaskUmbilicalProtocol taskTracker,
                            InetSocketAddress ttAddr
                            ) throws IOException, InterruptedException {
    final LocalDirAllocator lDirAlloc = allocator;
    FileSystem localFs = FileSystem.getLocal(getConf());
    JobLocalizer localizer = new JobLocalizer((JobConf)getConf(), user, jobid);
    localizer.createLocalDirs();
    localizer.createUserDirs();
    localizer.createJobDirs();

    JobConf jConf = new JobConf(jobConf);
    localizer.createWorkDir(jConf);
    //copy the credential file
    Path localJobTokenFile = lDirAlloc.getLocalPathForWrite(
        TaskTracker.getLocalJobTokenFile(user, jobid), getConf());
    FileUtil.copy(
        localFs, credentials, localFs, localJobTokenFile, false, getConf());


    //setup the user logs dir
    localizer.initializeJobLogDir();

    // Download the job.jar for this job from the system FS
    // setup the distributed cache
    // write job acls
    // write localized config
    localizer.localizeJobFiles(JobID.forName(jobid), jConf, localJobTokenFile, 
                               taskTracker);
  }

 

 

 

  // In JobLocalizer.java

  public void localizeJobFiles(JobID jobid, JobConf jConf,
      Path localJobTokenFile, TaskUmbilicalProtocol taskTracker)
      throws IOException, InterruptedException {
    localizeJobFiles(jobid, jConf,
        lDirAlloc.getLocalPathForWrite(JOBCONF, ttConf), localJobTokenFile,
        taskTracker);
  }

  public void localizeJobFiles(final JobID jobid, JobConf jConf,
      Path localJobFile, Path localJobTokenFile,
      final TaskUmbilicalProtocol taskTracker) 
  throws IOException, InterruptedException {
    // Download the job.jar for this job from the system FS
    localizeJobJarFile(jConf);

    jConf.set(JOB_LOCAL_CTXT, ttConf.get(JOB_LOCAL_CTXT));

    //update the config some more
    jConf.set(TokenCache.JOB_TOKENS_FILENAME, localJobTokenFile.toString());
    jConf.set(JobConf.MAPRED_LOCAL_DIR_PROPERTY, 
        ttConf.get(JobConf.MAPRED_LOCAL_DIR_PROPERTY));
    TaskTracker.resetNumTasksPerJvm(jConf);

    //setup the distributed cache
    final long[] sizes = downloadPrivateCache(jConf);
    if (sizes != null) {
      //the following doAs is required because the DefaultTaskController
      //calls the localizeJobFiles method in the context of the TaskTracker
      //process. The JVM authorization check would fail without this
      //doAs. In the LinuxTC case, this doesn't harm.
      UserGroupInformation ugi = 
        UserGroupInformation.createRemoteUser(jobid.toString());
      ugi.doAs(new PrivilegedExceptionAction<Object>() { 
        public Object run() throws IOException {
          taskTracker.updatePrivateDistributedCacheSizes(jobid, sizes);
          return null;
        }
      });
      
    }

    // Create job-acls.xml file in job userlog dir and write the needed
    // info for authorization of users for viewing task logs of this job.
    writeJobACLs(jConf, new Path(TaskLog.getJobDir(jobid).toURI().toString()));

    //write the updated jobConf file in the job directory
    JobLocalizer.writeLocalJobFile(localJobFile, jConf);
  }

  /**
   * Download the job jar file from FS to the local file system and unjar it.
   * Set the local jar file in the passed configuration.
   *
   * @param localJobConf
   * @throws IOException
   */
  private void localizeJobJarFile(JobConf localJobConf) throws IOException {
    // copy Jar file to the local FS and unjar it.
    String jarFile = localJobConf.getJar();
    FileStatus status = null;
    long jarFileSize = -1;
    if (jarFile != null) {
      Path jarFilePath = new Path(jarFile);
      FileSystem userFs = jarFilePath.getFileSystem(localJobConf);
      try {
        status = userFs.getFileStatus(jarFilePath);
        jarFileSize = status.getLen();
      } catch (FileNotFoundException fe) {
        jarFileSize = -1;
      }
      // Here we check for five times the size of jarFileSize to accommodate for
      // unjarring the jar file in the jars directory
      Path localJarFile =
        lDirAlloc.getLocalPathForWrite(JARDST, 5 * jarFileSize, ttConf);

      //Download job.jar
      userFs.copyToLocalFile(jarFilePath, localJarFile);
      localJobConf.setJar(localJarFile.toString());
      // Also un-jar the job.jar files. We un-jar it so that classes inside
      // sub-directories, for e.g., lib/, classes/ are available on class-path
      RunJar.unJar(new File(localJarFile.toString()),
          new File(localJarFile.getParent().toString()));
      FileUtil.chmod(localJarFile.getParent().toString(), "ugo+rx", true);
    }
  }
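The jar localized above is whatever the client recorded on the JobConf. A minimal client-side sketch (JarSetupSketch is hypothetical):

  // Client-side sketch: setJarByClass records the jar containing the given
  // class; localizeJobJarFile() above later copies that jar to local disk
  // and unjars it so lib/ and classes/ land on the task's classpath.
  import org.apache.hadoop.mapred.JobConf;

  public class JarSetupSketch {
    public static void main(String[] args) {
      JobConf conf = new JobConf();
      conf.setJarByClass(JarSetupSketch.class);
      // equivalent explicit form: conf.setJar("/path/to/myjob.jar");
    }
  }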
  • Third, it creates an instance of TaskRunner to run the task.
    /**
     * Kick off the task execution
     */
    public synchronized void launchTask(RunningJob rjob) throws IOException {
      if (this.taskStatus.getRunState() == TaskStatus.State.UNASSIGNED ||
          this.taskStatus.getRunState() == TaskStatus.State.FAILED_UNCLEAN ||
          this.taskStatus.getRunState() == TaskStatus.State.KILLED_UNCLEAN) {
        localizeTask(task);
        if (this.taskStatus.getRunState() == TaskStatus.State.UNASSIGNED) {
          this.taskStatus.setRunState(TaskStatus.State.RUNNING);
        }
        setTaskRunner(task.createRunner(TaskTracker.this, this, rjob));
        this.runner.start();
        long now = System.currentTimeMillis();
        this.taskStatus.setStartTime(now);
        this.lastProgressReport = now;
      } else {
        LOG.info("Not launching task: " + task.getTaskID() + 
            " since it's state is " + this.taskStatus.getRunState());
      }
    }

 

TaskRunner launches a new JVM to run each task, so that any bugs in the user-defined map and reduce functions don't affect the tasktracker. However, it's possible to reuse the JVM between tasks.
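Reuse is opted into per job through mapred.job.reuse.jvm.num.tasks (the default of 1 gives every task its own JVM). A brief sketch of the client-side setting:

  // Sketch: enable JVM reuse for a job. -1 lets a JVM run an unlimited
  // number of tasks (of the same job); this sets
  // mapred.job.reuse.jvm.num.tasks under the covers.
  import org.apache.hadoop.mapred.JobConf;

  public class JvmReuseSketch {
    public static void main(String[] args) {
      JobConf conf = new JobConf();
      conf.setNumTasksToExecutePerJvm(-1);
    }
  }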

  // In TaskRunner.java
  @Override
  public final void run() {
    String errorInfo = "Child Error";
    try {
      
      //before preparing the job localize 
      //all the archives
      TaskAttemptID taskid = t.getTaskID();
      final LocalDirAllocator lDirAlloc = new LocalDirAllocator("mapred.local.dir");
      //simply get the location of the workDir and pass it to the child. The
      //child will do the actual dir creation
      final File workDir =
      new File(new Path(localdirs[rand.nextInt(localdirs.length)], 
          TaskTracker.getTaskWorkDir(t.getUser(), taskid.getJobID().toString(), 
          taskid.toString(),
          t.isTaskCleanupTask())).toString());
      
      String user = tip.getUGI().getUserName();
      
      // Set up the child task's configuration. After this call, no localization
      // of files should happen in the TaskTracker's process space. Any changes to
      // the conf object after this will NOT be reflected to the child.
      setupChildTaskConfiguration(lDirAlloc);

      if (!prepare()) {
        return;
      }
      
      // Accumulates class paths for child.
      List<String> classPaths = getClassPaths(conf, workDir,
                                              taskDistributedCacheManager);

      long logSize = TaskLog.getTaskLogLength(conf);
      
      //  Build exec child JVM args.
      Vector<String> vargs = getVMArgs(taskid, workDir, classPaths, logSize);
      
      tracker.addToMemoryManager(t.getTaskID(), t.isMapTask(), conf);

      // set memory limit using ulimit if feasible and necessary ...
      String setup = getVMSetupCmd();
      // Set up the redirection of the task's stdout and stderr streams
      File[] logFiles = prepareLogFiles(taskid, t.isTaskCleanupTask());
      File stdout = logFiles[0];
      File stderr = logFiles[1];
      tracker.getTaskTrackerInstrumentation().reportTaskLaunch(taskid, stdout,
                 stderr);
      
      Map<String, String> env = new HashMap<String, String>();
      errorInfo = getVMEnvironment(errorInfo, user, workDir, conf, env, taskid,
                                   logSize);
      
      // flatten the env as a set of export commands
      List <String> setupCmds = new ArrayList<String>();
      for(Entry<String, String> entry : env.entrySet()) {
        StringBuffer sb = new StringBuffer();
        sb.append("export ");
        sb.append(entry.getKey());
        sb.append("=\"");
        sb.append(entry.getValue());
        sb.append("\"");
        setupCmds.add(sb.toString());
      }
      setupCmds.add(setup);
      
      launchJvmAndWait(setupCmds, vargs, stdout, stderr, logSize, workDir);
      tracker.getTaskTrackerInstrumentation().reportTaskEnd(t.getTaskID());
      if (exitCodeSet) {
        if (!killed && exitCode != 0) {
          if (exitCode == 65) {
            tracker.getTaskTrackerInstrumentation().taskFailedPing(t.getTaskID());
          }
          throw new IOException("Task process exit with nonzero status of " +
              exitCode + ".");
        }
      }
    } catch (FSError e) {
      LOG.fatal("FSError", e);
      try {
        tracker.fsErrorInternal(t.getTaskID(), e.getMessage());
      } catch (IOException ie) {
        LOG.fatal(t.getTaskID()+" reporting FSError", ie);
      }
    } catch (Throwable throwable) {
      LOG.warn(t.getTaskID() + " : " + errorInfo, throwable);
      Throwable causeThrowable = new Throwable(errorInfo, throwable);
      ByteArrayOutputStream baos = new ByteArrayOutputStream();
      causeThrowable.printStackTrace(new PrintStream(baos));
      try {
        tracker.reportDiagnosticInfoInternal(t.getTaskID(), baos.toString());
      } catch (IOException e) {
        LOG.warn(t.getTaskID()+" Reporting Diagnostics", e);
      }
    } finally {
      
      // It is safe to call TaskTracker.TaskInProgress.reportTaskFinished with
      // *false* since the task has either
      // a) SUCCEEDED - which means commit has been done
      // b) FAILED - which means we do not need to commit
      tip.reportTaskFinished(false);
    }
  }

 

 

 

  // In LinuxTaskController
  @Override
  public void initializeJob(String user, String jobid, Path credentials,
                            Path jobConf, TaskUmbilicalProtocol taskTracker,
                            InetSocketAddress ttAddr
                            ) throws IOException {
    List<String> command = new ArrayList<String>(
      Arrays.asList(taskControllerExe, 
                    user,
                    localStorage.getDirsString(),
                    Integer.toString(Commands.INITIALIZE_JOB.getValue()),
                    jobid,
                    credentials.toUri().getPath().toString(),
                    jobConf.toUri().getPath().toString()));
    File jvm =                                  // use same jvm as parent
      new File(new File(System.getProperty("java.home"), "bin"), "java");
    command.add(jvm.toString());
    command.add("-classpath");
    command.add(System.getProperty("java.class.path"));
    command.add("-Dhadoop.log.dir=" + TaskLog.getBaseLogDir());
    command.add("-Dhadoop.root.logger=INFO,console");
    command.add("-Djava.library.path=" +
                System.getProperty("java.library.path"));
    command.add(JobLocalizer.class.getName());  // main of JobLocalizer
    command.add(user);
    command.add(jobid);
    // add the task tracker's reporting address
    command.add(ttAddr.getHostName());
    command.add(Integer.toString(ttAddr.getPort()));
    String[] commandArray = command.toArray(new String[0]);
    ShellCommandExecutor shExec = new ShellCommandExecutor(commandArray);
    if (LOG.isDebugEnabled()) {
      LOG.debug("initializeJob: " + Arrays.toString(commandArray));
    }
    try {
      shExec.execute();
      if (LOG.isDebugEnabled()) {
        logOutput(shExec.getOutput());
      }
    } catch (ExitCodeException e) {
      int exitCode = shExec.getExitCode();
      logOutput(shExec.getOutput());
      throw new IOException("Job initialization failed (" + exitCode + 
          ") with output: " + shExec.getOutput(), e);
    }
  }
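For orientation, the argv assembled above hands everything the setuid task-controller binary needs on its command line; the trailing java invocation is what the binary executes as the job owner. Roughly (all values illustrative, not actual output):

  // Illustrative shape of the assembled command (values hypothetical):
  //   task-controller <user> <mapred.local.dirs> <INITIALIZE_JOB> <jobid> \
  //     <credentials-path> <job.xml-path> \
  //     <java.home>/bin/java -classpath <cp> -Dhadoop.log.dir=<dir> \
  //     -Dhadoop.root.logger=INFO,console -Djava.library.path=<lp> \
  //     org.apache.hadoop.mapred.JobLocalizer <user> <jobid> <tt-host> <tt-port>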

 

// In JvmManager.java
public void launchJvm(TaskRunner t, JvmEnv env
                        ) throws IOException, InterruptedException {
    if (t.getTask().isMapTask()) {
      mapJvmManager.reapJvm(t, env);
    } else {
      reduceJvmManager.reapJvm(t, env);
    }
  }
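reapJvm() (not shown) decides whether the task can be handed to an already-running idle JVM of the same job, or whether a fresh child JVM must be spawned, possibly after killing an idle JVM belonging to another job. A much-simplified sketch of that decision, with hypothetical helper names (the real logic lives in JvmManager's inner JvmManagerForType):

  // Simplified sketch only; the helper methods are hypothetical.
  void reapJvmSketch(TaskRunner t, JvmEnv env) throws IOException {
    if (idleJvmExistsForJob(t) && tasksRunSoFar(t) < maxTasksPerJvm(t)) {
      // Reuse: the idle child picks the task up on its next getTask()
      // call over the umbilical protocol.
      assignTaskToIdleJvm(t);
    } else {
      // No reusable JVM for this job: launch a fresh child process
      // (possibly after reaping an idle JVM of another job).
      spawnNewJvm(t, env);
    }
  }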

  

 

The child process communicates with its parent through the umbilical interface: it informs the parent of the task's progress every few seconds until the task is complete (see the run() method in MapTask or ReduceTask). After each phase completes, the child reports a status update to its parent over the same interface; the TaskTracker implements TaskUmbilicalProtocol, reproduced in full below.
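On the child side this amounts to a small heartbeat loop: send statusUpdate() when there is fresh progress, fall back to ping() otherwise, and exit if the parent no longer recognizes the task. A simplified sketch against the interface below (the real loop lives in Task's TaskReporter; umbilical, taskId, jvmContext, and the status variables are assumed to be set up at task start):

  // Simplified fragment of the child's reporting loop; all variables are
  // assumed from task startup, and error handling is omitted.
  while (!taskDone) {
    boolean taskFound;
    if (progressChanged) {
      taskFound = umbilical.statusUpdate(taskId, currentStatus, jvmContext);
    } else {
      taskFound = umbilical.ping(taskId, jvmContext);
    }
    if (!taskFound) {
      System.exit(66);               // parent no longer knows this task
    }
    Thread.sleep(PROGRESS_INTERVAL); // report every few seconds
  }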

 

/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.hadoop.mapred;

import java.io.IOException;

import org.apache.hadoop.ipc.VersionedProtocol;
import org.apache.hadoop.mapred.JvmTask;
import org.apache.hadoop.mapreduce.security.token.JobTokenSelector;
import org.apache.hadoop.security.token.TokenInfo;

/** Protocol that task child process uses to contact its parent process.  The
 * parent is a daemon which polls the central master for a new map or
 * reduce task and runs it as a child process.  All communication between child
 * and parent is via this protocol. */
@TokenInfo(JobTokenSelector.class)
public interface TaskUmbilicalProtocol extends VersionedProtocol {

  /** 
   * Changed the version to 2, since we have a new method getMapOutputs 
   * Changed version to 3 to have progress() return a boolean
   * Changed the version to 4, since we have replaced 
   *         TaskUmbilicalProtocol.progress(String, float, String, 
   *         org.apache.hadoop.mapred.TaskStatus.Phase, Counters) 
   *         with statusUpdate(String, TaskStatus)
   * 
   * Version 5 changed counters representation for HADOOP-2248
   * Version 6 changes the TaskStatus representation for HADOOP-2208
   * Version 7 changes the done api (via HADOOP-3140). It now expects whether
   *           or not the task's output needs to be promoted.
   * Version 8 changes {job|tip|task}id's to use their corresponding 
   * objects rather than strings.
   * Version 9 changes the counter representation for HADOOP-1915
   * Version 10 changed the TaskStatus format and added reportNextRecordRange
   *            for HADOOP-153
   * Version 11 Adds RPCs for task commit as part of HADOOP-3150
   * Version 12 getMapCompletionEvents() now also indicates if the events are 
   *            stale or not. Hence the return type is a class that 
   *            encapsulates the events and whether to reset events index.
   * Version 13 changed the getTask method signature for HADOOP-249
   * Version 14 changed the getTask method signature for HADOOP-4232
   * Version 15 Adds FAILED_UNCLEAN and KILLED_UNCLEAN states for HADOOP-4759
   * Version 16 Added numRequiredSlots to TaskStatus for MAPREDUCE-516
   * Version 17 Change in signature of getTask() for HADOOP-5488
   * Version 18 Added fatalError for child to communicate fatal errors to TT
   * Version 19 Added jvmContext to most method signatures for MAPREDUCE-2429
   * */

  public static final long versionID = 19L;
  
  /**
   * Called when a child task process starts, to get its task.
   * @param context the JvmContext of the JVM w.r.t the TaskTracker that
   *        launched it
   * @return Task object
   * @throws IOException 
   */
  JvmTask getTask(JvmContext context) throws IOException;

  /**
   * Report child's progress to parent.
   * 
   * @param taskId task-id of the child
   * @param taskStatus status of the child
   * @param jvmContext context the jvmContext running the task.
   * @throws IOException
   * @throws InterruptedException
   * @return True if the task is known
   */
  boolean statusUpdate(TaskAttemptID taskId, TaskStatus taskStatus,
      JvmContext jvmContext) throws IOException, InterruptedException;
  
  /** Report error messages back to parent.  Calls should be sparing, since all
   *  such messages are held in the job tracker.
   *  @param taskid the id of the task involved
   *  @param trace the text to report
   *  @param jvmContext context the jvmContext running the task.
   */
  void reportDiagnosticInfo(TaskAttemptID taskid, String trace,
      JvmContext jvmContext) throws IOException;
  
  /**
   * Report the record range which is going to process next by the Task.
   * @param taskid the id of the task involved
   * @param range the range of record sequence nos
   * @param jvmContext context the jvmContext running the task.
   * @throws IOException
   */
  void reportNextRecordRange(TaskAttemptID taskid, SortedRanges.Range range,
      JvmContext jvmContext) throws IOException;

  /** Periodically called by child to check if parent is still alive.
   * @param taskid the id of the task involved
   * @param jvmContext context the jvmContext running the task.
   * @return True if the task is known
   */
  boolean ping(TaskAttemptID taskid, JvmContext jvmContext) throws IOException;

  /** Report that the task is successfully completed.  Failure is assumed if
   * the task process exits without calling this.
   * @param taskid task's id
   * @param jvmContext context the jvmContext running the task.
   */
  void done(TaskAttemptID taskid, JvmContext jvmContext) throws IOException;
  
  /** 
   * Report that the task is complete, but its commit is pending.
   * 
   * @param taskId task's id
   * @param taskStatus status of the child
   * @param jvmContext context the jvmContext running the task.
   * @throws IOException
   */
  void commitPending(TaskAttemptID taskId, TaskStatus taskStatus,
      JvmContext jvmContext) throws IOException, InterruptedException;  

  /**
   * Polling to know whether the task can go-ahead with commit 
   * @param taskid
   * @param jvmContext context the jvmContext running the task.
   * @return true/false 
   * @throws IOException
   */
  boolean canCommit(TaskAttemptID taskid, JvmContext jvmContext) throws IOException;

  /** Report that a reduce-task couldn't shuffle map-outputs. */
  void shuffleError(TaskAttemptID taskId, String message, JvmContext jvmContext)
      throws IOException;
  
  /** Report that the task encountered a local filesystem error.*/
  void fsError(TaskAttemptID taskId, String message, JvmContext jvmContext)
      throws IOException;

  /** Report that the task encountered a fatal error.*/
  void fatalError(TaskAttemptID taskId, String message, JvmContext jvmContext)
      throws IOException;
  
  /** Called by a reduce task to get the map output locations for finished maps.
   * Returns an update centered around the map-task-completion-events. 
   * The update also piggybacks the information whether the events copy at the 
   * task-tracker has changed or not. This will trigger some action at the 
   * child-process.
   *
   * @param jobId the reducer job id
   * @param fromIndex the index starting from which the locations should be 
   * fetched
   * @param maxLocs the max number of locations to fetch
   * @param id The attempt id of the task that is trying to communicate
   * @return A {@link MapTaskCompletionEventsUpdate} 
   */
  MapTaskCompletionEventsUpdate getMapCompletionEvents(JobID jobId, 
                                                       int fromIndex, 
                                                       int maxLocs,
                                                       TaskAttemptID id,
                                                       JvmContext jvmContext) 
  throws IOException;

  /**
   * The job initializer needs to report the sizes of the archive
   * objects and directories in the private distributed cache.
   * @param jobId the job to update
   * @param sizes the array of sizes that were computed
   * @throws IOException
   */
  void updatePrivateDistributedCacheSizes(org.apache.hadoop.mapreduce.JobID jobId,
                                          long[] sizes) throws IOException;
}

 

 

 

 

 

 
