
Classic MapReduce (MapReduce 1) - Job initialization

2013-07-01 


When the JobTracker receives a call to its submitJob(...) method, it first checks whether it is in safe mode:

  private void checkSafeMode() throws SafeModeException {
    if (isInSafeMode()) {
      try {
        throw new SafeModeException((
            isInAdminSafeMode()) ? adminSafeModeUser : null);
      } catch (SafeModeException sfe) {
        LOG.info("JobTracker in safe-mode, aborting operation", sfe);
        throw sfe;
      }
    }
  }
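Note the slightly odd shape of that method: the SafeModeException is constructed, thrown, caught immediately, logged, and then rethrown, so every safe-mode rejection leaves a trace in the JobTracker log before it propagates to the caller. A standalone sketch of the same log-and-rethrow idiom (plain Java, illustrative names):

  public class SafeModeGuardDemo {
    private static boolean inSafeMode = true;

    private static void checkSafeMode() {
      if (inSafeMode) {
        IllegalStateException e =
            new IllegalStateException("in safe-mode, aborting operation");
        System.err.println("rejecting request: " + e.getMessage()); // log first...
        throw e;                                                    // ...then abort the caller
      }
    }

    public static void main(String[] args) {
      try {
        checkSafeMode();
      } catch (IllegalStateException expected) {
        System.out.println("submission rejected: " + expected.getMessage());
      }
    }
  }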


If the JobTracker is not in safe mode, it puts the job into an internal queue from which the job scheduler will pick it up and initialize it. The JobTracker tracks all known jobs in a map keyed by job ID:

  // All the known jobs.  (jobid->JobInProgress)
  Map<JobID, JobInProgress> jobs =
      Collections.synchronizedMap(new TreeMap<JobID, JobInProgress>());
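Two details of that declaration are easy to miss: the TreeMap keeps jobs ordered by JobID, and Collections.synchronizedMap only makes individual calls thread-safe, so iteration over the map still needs an explicit lock on the wrapper. A small self-contained demo of the same construction (String stand-ins instead of the real JobID/JobInProgress types):

  import java.util.Collections;
  import java.util.Map;
  import java.util.TreeMap;

  public class SynchronizedJobMapDemo {
    public static void main(String[] args) {
      Map<String, String> jobs =
          Collections.synchronizedMap(new TreeMap<String, String>());
      jobs.put("job_201307010000_0002", "wordcount");
      jobs.put("job_201307010000_0001", "grep");

      // Iteration is not covered by the wrapper's locking; lock explicitly.
      synchronized (jobs) {
        for (Map.Entry<String, String> e : jobs.entrySet()) {
          System.out.println(e.getKey() + " -> " + e.getValue()); // sorted by key
        }
      }
    }
  }

With the safe-mode check passed, the body of submitJob() proceeds as follows: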


    // Create the JobInProgress, do not lock the JobTracker since
    // we are about to copy job.xml from HDFS
    JobInProgress job = null;
    try {
      job = new JobInProgress(this, this.conf, jobInfo, 0, ts);
    } catch (Exception e) {
      throw new IOException(e);
    }

    synchronized (this) {
      // check if queue is RUNNING
      String queue = job.getProfile().getQueueName();
      if (!queueManager.isRunning(queue)) {
        throw new IOException("Queue \"" + queue + "\" is not running");
      }
      try {
        aclsManager.checkAccess(job, ugi, Operation.SUBMIT_JOB);
      } catch (IOException ioe) {
        LOG.warn("Access denied for user " + job.getJobConf().getUser()
            + ". Ignoring job " + jobId, ioe);
        job.fail();
        throw ioe;
      }

      // Check the job if it cannot run in the cluster because of invalid memory
      // requirements.
      try {
        checkMemoryRequirements(job);
      } catch (IOException ioe) {
        throw ioe;
      }

      if (!recovered) {
        // Store the information in a file so that the job can be recovered
        // later (if at all)
        Path jobDir = getSystemDirectoryForJob(jobId);
        FileSystem.mkdirs(fs, jobDir, new FsPermission(SYSTEM_DIR_PERMISSION));
        FSDataOutputStream out = fs.create(getSystemFileForJob(jobId));
        jobInfo.write(out);
        out.close();
      }

      try {
        this.taskScheduler.checkJobSubmission(job);
      } catch (IOException ioe) {
        LOG.error("Problem in submitting job " + jobId, ioe);
        throw ioe;
      }

      // Submit the job
      JobStatus status;
      try {
        status = addJob(jobId, job);
      } catch (IOException ioe) {
        LOG.info("Job " + jobId + " submission failed!", ioe);
        status = job.getStatus();
        status.setFailureInfo(StringUtils.stringifyException(ioe));
        failJob(job);
        throw ioe;
      }
      return status;
    }
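Note the addJob(jobId, job) call near the end: it registers the job in the jobs map shown earlier and notifies the registered JobInProgressListeners. With the stock schedulers, an EagerTaskInitializationListener then places the job on an initialization queue that dedicated background threads drain, so the expensive split reading never blocks the submitting client. A stripped-down model of that hand-off (hypothetical names, not the real JobTracker types):

  import java.util.concurrent.BlockingQueue;
  import java.util.concurrent.LinkedBlockingQueue;

  public class JobInitQueueDemo {
    interface Job {
      String id();
      void initTasks();
    }

    private final BlockingQueue<Job> jobInitQueue = new LinkedBlockingQueue<Job>();

    // called from the submission path; cheap and non-blocking
    void jobAdded(Job job) {
      jobInitQueue.offer(job);
    }

    // runs on a dedicated thread so split computation blocks no one
    void initLoop() throws InterruptedException {
      while (true) {
        jobInitQueue.take().initTasks();
      }
    }

    public static void main(String[] args) throws Exception {
      final JobInitQueueDemo demo = new JobInitQueueDemo();
      Thread initThread = new Thread(new Runnable() {
        public void run() {
          try {
            demo.initLoop();
          } catch (InterruptedException ignored) {
          }
        }
      });
      initThread.setDaemon(true);
      initThread.start();

      demo.jobAdded(new Job() {
        public String id() { return "job_201307010000_0001"; }
        public void initTasks() { System.out.println("initializing " + id()); }
      });
      Thread.sleep(100); // give the init thread a moment to run
    }
  }

That asynchronous step is where job initialization proper begins.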


Initialization involves creating an object to represent the job being run (the JobInProgress constructed in submitJob() above), which encapsulates its tasks as well as the bookkeeping information used to track the status and progress of those tasks:

    // Create the JobInProgress, do not lock the JobTracker since
    // we are about to copy job.xml from HDFS
    JobInProgress job = null;
    try {
      job = new JobInProgress(this, this.conf, jobInfo, 0, ts);
    } catch (Exception e) {
      throw new IOException(e);
    }


To create the list of tasks to run, the job scheduler first retrieves the input splits computed by the client from the shared filesystem. It then creates one map task for each split. The number of reduce tasks to create is determined by the mapred.reduce.tasks property of the job, which is set by the setNumReduceTasks() method, and the scheduler simply creates this number of reduce tasks to be run. Tasks are given IDs at this point. The place to look is JobInProgress#initTasks(), a callback method driven by the TaskScheduler inside the JobTracker: the TaskScheduler maintains a job queue, and once a job has been pushed onto the queue its initTasks() callback is invoked.
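As an aside, the reduce count read here is ordinary client-side configuration. A minimal old-API sketch (assumes hadoop-core on the classpath; the class name is illustrative):

  import org.apache.hadoop.mapred.JobConf;

  public class ReduceCountDemo {
    public static void main(String[] args) {
      JobConf conf = new JobConf();
      conf.setNumReduceTasks(4);                    // same as mapred.reduce.tasks=4
      System.out.println(conf.getNumReduceTasks()); // prints 4
    }
  }

Here is JobInProgress#initTasks() in full: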

  /**
   * Construct the splits, etc.  This is invoked from an async
   * thread so that split-computation doesn't block anyone.
   */
  public synchronized void initTasks()
      throws IOException, KillInterruptedException, UnknownHostException {
    if (tasksInited || isComplete()) {
      return;
    }
    synchronized (jobInitKillStatus) {
      if (jobInitKillStatus.killed || jobInitKillStatus.initStarted) {
        return;
      }
      jobInitKillStatus.initStarted = true;
    }

    LOG.info("Initializing " + jobId);
    final long startTimeFinal = this.startTime;
    // log job info as the user running the job
    try {
      userUGI.doAs(new PrivilegedExceptionAction<Object>() {
        @Override
        public Object run() throws Exception {
          JobHistory.JobInfo.logSubmitted(getJobID(), conf, jobFile,
              startTimeFinal, hasRestarted());
          return null;
        }
      });
    } catch (InterruptedException ie) {
      throw new IOException(ie);
    }

    // log the job priority
    setPriority(this.priority);

    //
    // generate security keys needed by Tasks
    //
    generateAndStoreTokens();

    //
    // read input splits and create a map per a split
    //
    TaskSplitMetaInfo[] splits = createSplits(jobId);
    if (numMapTasks != splits.length) {
      throw new IOException("Number of maps in JobConf doesn't match number of " +
          "recieved splits for job " + jobId + "! " +
          "numMapTasks=" + numMapTasks + ", #splits=" + splits.length);
    }
    numMapTasks = splits.length;

    // Sanity check the locations so we don't create/initialize unnecessary tasks
    for (TaskSplitMetaInfo split : splits) {
      NetUtils.verifyHostnames(split.getLocations());
    }

    jobtracker.getInstrumentation().addWaitingMaps(getJobID(), numMapTasks);
    jobtracker.getInstrumentation().addWaitingReduces(getJobID(), numReduceTasks);
    this.queueMetrics.addWaitingMaps(getJobID(), numMapTasks);
    this.queueMetrics.addWaitingReduces(getJobID(), numReduceTasks);

    maps = new TaskInProgress[numMapTasks];
    for (int i = 0; i < numMapTasks; ++i) {
      inputLength += splits[i].getInputDataLength();
      maps[i] = new TaskInProgress(jobId, jobFile,
                                   splits[i],
                                   jobtracker, conf, this, i, numSlotsPerMap);
    }
    LOG.info("Input size for job " + jobId + " = " + inputLength
        + ". Number of splits = " + splits.length);

    // Set localityWaitFactor before creating cache
    localityWaitFactor =
        conf.getFloat(LOCALITY_WAIT_FACTOR, DEFAULT_LOCALITY_WAIT_FACTOR);
    if (numMapTasks > 0) {
      nonRunningMapCache = createCache(splits, maxLevel);
    }

    // set the launch time
    this.launchTime = jobtracker.getClock().getTime();

    //
    // Create reduce tasks
    //
    this.reduces = new TaskInProgress[numReduceTasks];
    for (int i = 0; i < numReduceTasks; i++) {
      reduces[i] = new TaskInProgress(jobId, jobFile,
                                      numMapTasks, i,
                                      jobtracker, conf, this, numSlotsPerReduce);
      nonRunningReduces.add(reduces[i]);
    }

    // Calculate the minimum number of maps to be complete before
    // we should start scheduling reduces
    completedMapsForReduceSlowstart =
        (int) Math.ceil(
            (conf.getFloat("mapred.reduce.slowstart.completed.maps",
                           DEFAULT_COMPLETED_MAPS_PERCENT_FOR_REDUCE_SLOWSTART) *
             numMapTasks));

    // ... use the same for estimating the total output of all maps
    resourceEstimator.setThreshhold(completedMapsForReduceSlowstart);

    // create cleanup two cleanup tips, one map and one reduce.
    cleanup = new TaskInProgress[2];

    // cleanup map tip. This map doesn't use any splits. Just assign an empty
    // split.
    TaskSplitMetaInfo emptySplit = JobSplit.EMPTY_TASK_SPLIT;
    cleanup[0] = new TaskInProgress(jobId, jobFile, emptySplit,
            jobtracker, conf, this, numMapTasks, 1);
    cleanup[0].setJobCleanupTask();

    // cleanup reduce tip.
    cleanup[1] = new TaskInProgress(jobId, jobFile, numMapTasks,
                      numReduceTasks, jobtracker, conf, this, 1);
    cleanup[1].setJobCleanupTask();

    // create two setup tips, one map and one reduce.
    setup = new TaskInProgress[2];

    // setup map tip. This map doesn't use any split. Just assign an empty
    // split.
    setup[0] = new TaskInProgress(jobId, jobFile, emptySplit,
            jobtracker, conf, this, numMapTasks + 1, 1);
    setup[0].setJobSetupTask();

    // setup reduce tip.
    setup[1] = new TaskInProgress(jobId, jobFile, numMapTasks,
                      numReduceTasks + 1, jobtracker, conf, this, 1);
    setup[1].setJobSetupTask();

    synchronized (jobInitKillStatus) {
      jobInitKillStatus.initDone = true;
      // set this before the throw to make sure cleanup works properly
      tasksInited = true;
      if (jobInitKillStatus.killed) {
        throw new KillInterruptedException("Job " + jobId + " killed in init");
      }
    }

    JobHistory.JobInfo.logInited(profile.getJobID(), this.launchTime,
                                 numMapTasks, numReduceTasks);

    // Log the number of map and reduce tasks
    LOG.info("Job " + jobId + " initialized successfully with " + numMapTasks
        + " map tasks and " + numReduceTasks + " reduce tasks.");
  }
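The slowstart arithmetic near the end of initTasks() is worth a second look: with the Hadoop 1.x default of 0.05 for mapred.reduce.slowstart.completed.maps, reduces become eligible for scheduling once 5% of the maps have completed. A tiny standalone check of that calculation:

  public class SlowstartDemo {
    public static void main(String[] args) {
      // mirrors: completedMapsForReduceSlowstart = (int) Math.ceil(slowstart * numMapTasks)
      float slowstart = 0.05f; // default for mapred.reduce.slowstart.completed.maps
      int numMapTasks = 1000;
      int threshold = (int) Math.ceil(slowstart * numMapTasks);
      System.out.println(threshold); // 50
    }
  }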


As you can see in initTasks(), in addition to the map and reduce tasks, two further tasks are created: a job setup task and a job cleanup task. These are run by tasktrackers and are used to run code that sets up the job before any map tasks run and cleans up after all the reduce tasks are complete.


The code to be run is determined by the OutputCommitter that is configured for the job, which by default is a FileOutputCommitter. For the job setup task, it will create the final output directory for the job and the temporary working space for the task output; commitTask() then moves each completed task's output from that working space into the final output directory:

  /**
   * Move the files from the work directory to the job output directory
   * @param context the task context
   */
  public void commitTask(TaskAttemptContext context)
      throws IOException {
    TaskAttemptID attemptId = context.getTaskAttemptID();
    if (workPath != null) {
      context.progress();
      if (outputFileSystem.exists(workPath)) {
        // Move the task outputs to their final place
        moveTaskOutputs(context, outputFileSystem, outputPath, workPath);
        // Delete the temporary task-specific output directory
        if (!outputFileSystem.delete(workPath, true)) {
          LOG.warn("Failed to delete the temporary output" +
              " directory of task: " + attemptId + " - " + workPath);
        }
        LOG.info("Saved output of task '" + attemptId + "' to " +
            outputPath);
      }
    }
  }

For the job cleanup task, it will delete the temporary working space for the task output:

  @Override
  @Deprecated
  public void cleanupJob(JobContext context) throws IOException {
    if (outputPath != null) {
      Path tmpDir = new Path(outputPath, FileOutputCommitter.TEMP_DIR_NAME);
      FileSystem fileSys = tmpDir.getFileSystem(context.getConfiguration());
      if (fileSys.exists(tmpDir)) {
        fileSys.delete(tmpDir, true);
      }
    } else {
      LOG.warn("Output path is null in cleanup");
    }
  }
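To see the whole lifecycle in one place, here is a sketch of a do-nothing committer against the old org.apache.hadoop.mapred API; it only marks which hook runs where (a sketch, not production code):

  import java.io.IOException;
  import org.apache.hadoop.mapred.JobContext;
  import org.apache.hadoop.mapred.OutputCommitter;
  import org.apache.hadoop.mapred.TaskAttemptContext;

  public class NoOpOutputCommitter extends OutputCommitter {
    @Override
    public void setupJob(JobContext context) throws IOException {
      // run by the job setup task, before any map task starts
    }

    @Override
    @Deprecated
    public void cleanupJob(JobContext context) throws IOException {
      // run by the job cleanup task, after all reduces complete
      // (deprecated in favour of commitJob/abortJob in later releases)
    }

    @Override
    public void setupTask(TaskAttemptContext context) throws IOException {
      // per task attempt, before any records are written
    }

    @Override
    public boolean needsTaskCommit(TaskAttemptContext context) throws IOException {
      return false; // nothing to promote, so the commit phase is skipped
    }

    @Override
    public void commitTask(TaskAttemptContext context) throws IOException {
      // would move this attempt's output to its final location
    }

    @Override
    public void abortTask(TaskAttemptContext context) throws IOException {
      // would discard this attempt's partial output
    }
  }

A custom committer like this can be wired in with JobConf#setOutputCommitter(); by default the job uses the FileOutputCommitter whose commitTask() and cleanupJob() are quoted above.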
