
Quantifying shared information value in a supply chain using decentralized Markov decision processes with restricted observations

Posted on: 2006-12-11
Degree: Ph.D
Type: Dissertation
University: North Carolina State University
Candidate: Wei, Wenbin
Full Text: PDF
GTID: 1459390005997117
Subject: Engineering
Abstract/Summary:
Information sharing in two-stage and three-stage supply chains is studied. Assuming the customer demand distribution is known throughout the supply chain, the information to be shared is each member's inventory level. To study the value of shared information, the supply chain is examined under different information sharing schemes. A Markov decision process (MDP) approach is used to model the supply chain, and the optimal policy under each scheme is determined. By comparing these schemes, the value of shared information can be quantified. Since the optimal policy maximizes the total profit of the supply chain, allocation of that profit among supply chain members, or transfer cost/price negotiation, is also discussed.

The information sharing schemes considered are full information sharing, partial information sharing, and no information sharing. Under full information sharing, the supply chain problem is modeled as a single-agent Markov decision process with complete observations (a traditional MDP), which can be solved with the policy iteration method of Howard (1960). Under partial or no information sharing, the problem is modeled as a decentralized Markov decision process with restricted observations (DEC-ROMDP), in which each agent may observe the process completely or only in a restricted way. To solve the DEC-ROMDP, an evolutionary coordination algorithm is introduced, which proves effective when coupled with policy perturbation and multiple-start strategies.
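As context for the full-information case, the following is a minimal sketch of Howard-style policy iteration on a single-agent inventory MDP. The capacity, prices, costs, discount factor, and demand distribution below are assumed for illustration and are not taken from the dissertation; the point is only the alternation between exact policy evaluation (solving a linear system) and greedy policy improvement.

```python
import numpy as np

# Illustrative single-agent inventory MDP solved by Howard-style policy
# iteration. All parameters (capacity, costs, demand distribution) are
# assumed for this example, not taken from the dissertation.

MAX_INV = 5                                  # inventory capacity
DEMAND_P = np.array([0.3, 0.4, 0.2, 0.1])    # P(demand = 0..3), assumed
PRICE, ORDER_COST, HOLD_COST = 4.0, 2.0, 0.5
GAMMA = 0.95                                 # discount factor

states = np.arange(MAX_INV + 1)

def transitions_and_reward(s, a):
    """Return (distribution over next stock levels, expected one-step profit)."""
    probs = np.zeros(MAX_INV + 1)
    exp_reward = -ORDER_COST * a
    stock = min(s + a, MAX_INV)
    for d, p in enumerate(DEMAND_P):
        sold = min(stock, d)
        nxt = stock - sold
        probs[nxt] += p
        exp_reward += p * (PRICE * sold - HOLD_COST * nxt)
    return probs, exp_reward

def policy_iteration():
    policy = np.zeros(len(states), dtype=int)          # start: order nothing
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P = np.zeros((len(states), len(states)))
        r = np.zeros(len(states))
        for s in states:
            P[s], r[s] = transitions_and_reward(s, policy[s])
        v = np.linalg.solve(np.eye(len(states)) - GAMMA * P, r)
        # Policy improvement: greedy one-step lookahead against v.
        new_policy = policy.copy()
        for s in states:
            best_a, best_q = policy[s], -np.inf
            for a in range(MAX_INV - s + 1):
                probs, rew = transitions_and_reward(s, a)
                q = rew + GAMMA * probs @ v
                if q > best_q + 1e-9:
                    best_a, best_q = a, q
            new_policy[s] = best_a
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy

if __name__ == "__main__":
    pi, v = policy_iteration()
    for s in states:
        print(f"inventory {s}: order {pi[s]}, value {v[s]:.2f}")
```

Because each evaluation step solves the linear system exactly and each improvement step is a strict improvement or leaves the policy unchanged, the loop terminates in finitely many iterations with an optimal stationary policy.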
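The abstract does not specify the dissertation's evolutionary coordination algorithm, so the sketch below only illustrates the general idea of multiple-start local search over decentralized policies with random policy perturbation, on an assumed toy two-stage supply chain in which each agent observes only its own inventory. All names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-stage supply chain: a retailer and a supplier, each observing only
# its own inventory (restricted observations). Every number here is assumed
# for illustration; the dissertation's evolutionary coordination algorithm is
# only sketched as restart-based local search with random policy perturbation.

MAX_INV = 4
DEMAND_P = np.array([0.3, 0.4, 0.3])     # retailer demand of 0, 1, 2 units
PRICE, HOLD = 3.0, 0.3                   # retail price, per-unit holding cost
HORIZON, N_SIM = 40, 200                 # simulated episode length and count

def simulate(retail_pi, supply_pi):
    """Average profit of a joint policy; each policy maps local stock -> order."""
    total = 0.0
    for _ in range(N_SIM):
        r_inv, s_inv = 0, MAX_INV
        for _ in range(HORIZON):
            # Retailer orders from the supplier based on its own stock only.
            order = min(retail_pi[r_inv], s_inv, MAX_INV - r_inv)
            r_inv += order
            s_inv -= order
            # Supplier replenishes from an outside source, also locally.
            s_inv = min(s_inv + supply_pi[s_inv], MAX_INV)
            # Customer demand realizes at the retailer.
            d = rng.choice(len(DEMAND_P), p=DEMAND_P)
            sold = min(r_inv, d)
            r_inv -= sold
            total += PRICE * sold - HOLD * (r_inv + s_inv)
    return total / N_SIM

def local_search(n_starts=5, n_steps=300):
    best_pi, best_val = None, -np.inf
    for _ in range(n_starts):                        # multiple-start strategy
        pis = [rng.integers(0, MAX_INV + 1, MAX_INV + 1) for _ in range(2)]
        val = simulate(*pis)
        for _ in range(n_steps):
            cand = [p.copy() for p in pis]
            agent = rng.integers(2)                  # perturb one agent's policy
            state = rng.integers(MAX_INV + 1)
            cand[agent][state] = rng.integers(0, MAX_INV + 1)
            cand_val = simulate(*cand)
            if cand_val > val:                       # keep improving perturbations
                pis, val = cand, cand_val
        if val > best_val:
            best_pi, best_val = pis, val
    return best_pi, best_val

if __name__ == "__main__":
    (retail_pi, supply_pi), val = local_search()
    print("retailer policy:", retail_pi)
    print("supplier policy:", supply_pi)
    print("estimated average profit per period:", round(val / HORIZON, 3))
```

Random restarts guard against poor local optima in the joint policy space, which mirrors the role the abstract attributes to the multiple-start and policy-perturbation strategies; evaluating profit by simulation is a simplification used here in place of whatever exact evaluation the dissertation employs.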
Keywords/Search Tags: Supply chain, Information, Markov decision process, Restricted, Value, Policy