
A Modified Algorithm For Linear Programming Problem

Posted on: 2007-08-25
Degree: Master
Type: Thesis
Country: China
Candidate: X. L. Yu
Full Text: PDF
GTID: 2120360182496369
Subject: Applied Mathematics

Abstract/Summary:
Linear programming, an important branch of operations research, has many applications in social production and scientific research, and it has attracted wide attention.

In 1947, G. B. Dantzig presented the simplex method for the first time, which marked the foundation of linear programming. In the 1950s the theory of linear programming was developed and enriched, and successful applications were reported. The field drew even more attention in 1975, when the Royal Swedish Academy of Sciences awarded the Nobel Prize in Economics to L. V. Kantorovich and T. C. Koopmans for their contributions to the theory of optimal resource allocation. In 1979, Khachiyan first presented a polynomial-time algorithm for the linear programming problem. The algorithm was important in theory but not effective in practice, so the simplex method remained the principal method for solving linear programming problems because of its practical efficiency. In the following forty years, many researchers worked on pivoting rules, numerical stability, computing time and storage, finding an initial feasible basis, and reducing or avoiding cycling, and they further extended and applied the method.

In 1984, Karmarkar presented the projective scaling method, the most important breakthrough in linear programming since its founding. The algorithm was not only proved to run in polynomial time in theory, but was also competitive with the simplex method in practice. It broke the monopoly of the simplex method and set off a surge of research on Karmarkar's algorithm. The basic idea of Karmarkar's algorithm is to begin with an interior point and move through the middle of the feasible region, approaching an optimal solution from the interior.
For large-scale linear programming problems, the number of iterations of Karmarkar's algorithm grows only slightly as the numbers of constraints and variables increase, which gives it an advantage over the simplex method. Through further study by Todd, Burrel, Anstreicher, Gay, Ye, Gonzaga, Megiddo, Gill, Monteiro, Adler, Barnes, Ross and others, Karmarkar's algorithm has developed into the following three classes of methods.

The first is the projective transformation (potential function) method, Karmarkar's original algorithm. At each iteration a projective transformation is applied that maps the current iterate to the center of the feasible region, and a steepest-descent step for the potential function is then taken from that center.

The second is the affine scaling method. At each iteration an affine transformation is applied first, and then a steepest-descent step is taken. The basic affine scaling method was first presented by the Soviet mathematician I. I. Dikin as early as 1967, and in 1985 it was rediscovered independently by E. Barnes and by R. Vanderbei, M. Meketon and B. Freedman, who showed how to apply affine scaling to the standard form of the linear programming problem and proved its convergence. In 1987, R. Monteiro, I. Adler and M. G. C. Resende, and at the same time M. Kojima, S. Mizuno and A. Yoshise, presented the primal-dual affine scaling method, and its polynomial-time complexity was proved.

The third is the tracking centre locus (path-following) method. The centre locus, or central path, was defined by Huard and Sonnevend. In 1985, Gill and Murray found that the interior-point barrier function method of nonlinear programming could be used to solve the linear programming problem and was equivalent to Karmarkar's algorithm under certain conditions. This result attracted great interest in academia and was rapidly extended, forming the class of tracking centre locus methods.
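The affine scaling iteration described above can be sketched concretely for a problem in standard form, min c^T x subject to Ax = b, x > 0. The following is a minimal illustration, not the thesis's implementation; the step parameter `alpha`, the tolerance `tol`, and the iteration cap are assumed values chosen for the example.

```python
import numpy as np

def affine_scaling(A, b, c, x, alpha=0.5, tol=1e-8, max_iter=200):
    """Primal affine-scaling sketch:  min c^T x  s.t.  A x = b, x > 0.

    x must be a strictly positive interior feasible point (A x = b).
    alpha < 1 keeps every iterate strictly inside the positive orthant.
    """
    for _ in range(max_iter):
        D = np.diag(x)                             # scale by the current iterate
        AD = A @ D
        # dual estimate from the scaled normal equations
        w = np.linalg.solve(AD @ AD.T, AD @ (D @ c))
        r = c - A.T @ w                            # reduced costs
        dy = -D @ r                                # descent direction in scaled space
        if np.linalg.norm(dy) < tol:               # near-optimal: stop
            break
        if np.any(dy < 0):
            gamma = alpha / np.max(-dy)            # largest step keeping x > 0
        else:
            gamma = alpha                          # problem unbounded below in exact arithmetic
        x = x * (1.0 + gamma * dy)                 # move; A x = b is preserved by construction
    return x

# tiny example:  min x1  s.t.  x1 + x2 = 1, x >= 0  (optimum at x = (0, 1))
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 0.0])
x = affine_scaling(A, b, c, np.array([0.5, 0.5]))
```

The direction `D @ dy` lies in the null space of A, so feasibility is maintained exactly at every step while the objective decreases.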
The tracking centre locus method combines the logarithmic barrier function with Newton iterations to solve the linear programming problem.

The algorithms above iterate inside the feasible region of the linear programming problem, so they are called interior-point algorithms. All interior-point algorithms share one requirement: during the iterative process every iterate must be kept in the interior of the feasible region and must not reach the boundary. When an iterate does reach the boundary, one or more components of the current solution become zero. According to the sensitivity-analysis theory of linear programming, the optimal solution does not change if a small disturbance is applied to some components of the current solution. Therefore we can replace the zero components of the current solution with small positive numbers and continue the computation until an optimal solution of the linear programming problem is reached. This improved interior-point method is called the feasible point algorithm: it moves the iterates within the interior of the feasible region while keeping every iterate a feasible point.

Finally, we carried out numerical experiments with the feasible point algorithm to gain deeper insight, and we compared the feasible point algorithm with the interior-point algorithm; the experimental results show that the former is clearly better than the latter.
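The perturbation step at the heart of the feasible point algorithm can be sketched as follows. The tolerance `eps` is an assumed illustrative value, and restoring exact feasibility after the bump is done here by a minimum-norm least-squares correction, which is one possible choice for illustration rather than the thesis's actual procedure.

```python
import numpy as np

def nudge_to_feasible_point(A, b, x, eps=1e-6):
    """Replace (near-)zero components of an iterate with a small positive
    value, so the interior-point iteration can continue, then restore
    A x = b with a minimum-norm correction (an assumed repair step)."""
    x = np.asarray(x, dtype=float).copy()
    x[x < eps] = eps                   # perturb components stuck at the boundary
    resid = A @ x - b
    # project back onto the affine subspace {x : A x = b}
    x -= A.T @ np.linalg.solve(A @ A.T, resid)
    return x

# example: an iterate that has hit the boundary of {x1 + x2 = 1, x >= 0}
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x = nudge_to_feasible_point(A, b, np.array([0.0, 1.0]))
```

For a small enough `eps` the corrected point stays strictly positive, so the interior-point iteration can resume from it.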
Keywords/Search Tags: linear programming, Karmarkar's algorithm, affine scaling algorithm, centre locus method, interior point, feasible point algorithm, numerical experiment