
Research On Pedestrian Head Protection Based On Headform Impactor And Discussion For The Research Limitation

Posted on: 2013-12-06
Degree: Master
Type: Thesis
Country: China
Candidate: J Zhou
Full Text: PDF
GTID: 2232330374490947
Subject: Automotive Engineering
Abstract/Summary:
With the rapid development of information technology, the Internet has become an indispensable part of people's lives. Against the background of the network information explosion, massive data processing has become a new challenge in computer science. MapReduce is a distributed data processing programming model; its advantage lies in simplifying traditional distributed program development, so that developers need only focus on the business logic without considering the details of the distributed implementation. Hadoop is an open-source implementation of MapReduce, and it provides a foundational data processing platform for enterprises and research institutions handling massive data. Research on MapReduce scheduling algorithms mainly addresses cluster sharing, resource utilization, and job response time. Meanwhile, as users' real-time requirements grow, research on real-time MapReduce scheduling is increasing. The difficulty of real-time MapReduce scheduling lies in the real-time scheduling model, which must account for cluster heterogeneity and data locality. Predicting a task's remaining time is a major part of real-time scheduling, and the prediction is strongly influenced by cluster heterogeneity.

By studying the job runtime mechanism of Hadoop, this thesis proposes a Self-Adaptive Reduce Scheduling (SARS) algorithm. In current research on MapReduce scheduling, the choice of when to launch Reduce tasks is overly simplistic, yet this launch time directly influences job completion time and cluster utilization. The SARS algorithm decides the Reduce tasks' launch time from the job's own properties. Experimental results show that SARS reduces the completion time of jobs' Reduce tasks and the mean response time of cluster jobs, and it also improves the utilization of cluster resources.

Given the heterogeneity of the cluster, this thesis proposes a node classification algorithm based on computing capacity, which classifies the cluster's nodes by their distinct computing capacities. Building on this classification, it proposes MTSD (MapReduce Task Scheduling for Deadline constraints), a scheduling algorithm that includes a model for evaluating tasks' remaining time and derives a resource-requirement model for real-time job scheduling. Experimental results show that the MTSD algorithm improves data locality and also performs well in meeting jobs' real-time demands.
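The abstract does not spell out the remaining-time and resource-requirement models; as a rough illustration only, a minimal Java sketch of this kind of deadline-oriented estimate might look like the following. All names (RemainingTimeSketch, remainingTimeMs, slotsNeeded), the constant-progress-rate assumption, and the capacity scaling are hypothetical simplifications, not code from the thesis.

/**
 * Minimal sketch of a deadline-oriented remaining-time estimate in the
 * spirit of MTSD. Names and formulas are illustrative assumptions.
 */
public final class RemainingTimeSketch {

    /**
     * Estimate a running task's remaining time from its progress score,
     * assuming a roughly constant progress rate.
     *
     * @param progress         fraction of the task completed, in (0, 1]
     * @param elapsedMs        wall-clock time the task has been running
     * @param relativeCapacity computing capacity of the target node class
     *                         relative to the node observed so far
     *                         (1.0 = same class), as a node-classification
     *                         step might produce
     */
    static double remainingTimeMs(double progress, long elapsedMs,
                                  double relativeCapacity) {
        // Progress rate observed on the node that has run the task so far.
        double observedRate = progress / elapsedMs;
        // Assume a class with 2x capacity advances progress twice as fast.
        double rate = observedRate * relativeCapacity;
        return (1.0 - progress) / rate;
    }

    /**
     * Lower bound on the number of concurrent task slots needed to finish
     * the given per-task remaining times before the deadline.
     */
    static int slotsNeeded(double[] remainingMs, double msToDeadline) {
        double total = 0;
        for (double r : remainingMs) total += r;
        return (int) Math.ceil(total / msToDeadline);
    }

    public static void main(String[] args) {
        // A task 40% done after 20 s, finishing on a node with 1.5x capacity.
        double rem = remainingTimeMs(0.4, 20_000, 1.5);
        System.out.printf("estimated remaining: %.0f ms%n", rem);
        System.out.println("slots needed: "
                + slotsNeeded(new double[]{rem, 30_000, 45_000}, 60_000));
    }
}

The thesis's actual models would additionally weigh data locality when placing tasks; this sketch only shows why heterogeneity must enter the remaining-time estimate, as the abstract argues.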
Keywords/Search Tags: Large-scale processing, MapReduce, Hadoop, Scheduling algorithm, Reduce scheduling, heterogeneity, data locality