
The Drop-the-Loser Random Allocation Rule and Its Complementary Nature

Posted on: 2008-10-03    Degree: Master    Type: Thesis
Country: China    Candidate: Y Q Hu    Full Text: PDF
GTID: 2190360215492179    Subject: Probability theory and mathematical statistics
Abstract:
In clinical trials, in order to compare the effects of two or more treatments, each patient is randomly allocated to one treatment. To improve the power of the statistical test and, at the same time, allocate more patients to the better treatment, response-adaptive randomization is often suggested. The advantage of this kind of randomization is that it makes good use of the data accumulated so far when deciding which treatment should be assigned to the incoming subject. The most popular designs in response-adaptive randomization are urn models. However, it is often difficult to make statistical inferences under response-adaptive randomization because of the dependence structure of the data. Usually, advanced mathematical tools, such as martingale limit theorems, have to be adopted. Another technique is to embed the discrete urn model into a family of continuous-time Markov processes; in this way, the properties of statistics in the urn model can be deduced from the theory of Markov processes.

Ivanova (2003) proposed a new urn model called the drop-the-loser rule, which has better properties than the play-the-winner rule. The main tool for studying it is the embedding of the urn model into a family of linear death processes with immigration. In this thesis we also use this tool: by working with probability generating functions, we derive central limit theorems for the maximum likelihood estimator of p_i. For treatment i, we first obtain the joint probability generating function of (X_i(t), Y_i(t)), from which we get the characteristic function of t^{1/2}(X_i(t)/t − a·p_i/q_i, Y_i(t)/t − a). As t → ∞, this characteristic function converges to that of a normal distribution. Since p̂_i(t) = X_i(t)/(X_i(t) + Y_i(t)) is a function of (X_i(t)/t, Y_i(t)/t), the normalized p̂_i(t) also converges to a normal distribution as t → ∞. The central limit theorem for the vector p̂(t) = (p̂_1(t), ..., p̂_K(t)) is obtained in a similar way, by first deriving the joint probability generating function of the corresponding statistics; the main task is then to verify that the characteristic function of the random vector converges to that of a multivariate normal vector.

In order to connect the properties of statistics in continuous time t with those of the corresponding statistics in the discrete urn model, Ivanova (2006) introduced the stopping time T_m, the moment at which a type-0 ball is drawn for the m-th time. In this thesis we show that, at this stopping time, the empirical estimator of the success probability p_i, namely p̂_i(T_m) = X_i(T_m)/(X_i(T_m) + Y_i(T_m)), is still the maximum likelihood estimator of p_i.

Since the asymptotic variance of the allocation proportion is an important aspect of response-adaptive randomization, we compare these variances for two randomization rules: the drop-the-loser rule and a similar rule. This comparison shows why the drop-the-loser rule is designed so that one ball of each treatment type is added at the same time after an immigration ball is drawn.
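The drop-the-loser allocation and the empirical estimator p̂_i can be illustrated with a short simulation. The following is a minimal sketch, not taken from the thesis: the function name drop_the_loser, the parameter a (number of immigration balls), and the structure of the loop are illustrative assumptions; only the rule itself (return the ball after a success, drop it after a failure, and add one ball of each treatment type when an immigration ball is drawn) follows the description above.

```python
import random

def drop_the_loser(p, n_subjects, a=1, seed=0):
    """Simulate the drop-the-loser urn and return (successes, failures, p_hat).

    p          : list of true success probabilities for the K treatments
    n_subjects : number of patients to allocate
    a          : number of immigration (type-0) balls in the urn
    """
    rng = random.Random(seed)
    K = len(p)
    urn = [a] + [1] * K        # index 0: immigration balls; 1..K: treatment balls
    successes = [0] * K        # X_i: successes observed on treatment i
    failures = [0] * K         # Y_i: failures observed on treatment i
    allocated = 0
    while allocated < n_subjects:
        # draw one ball with probability proportional to the urn composition
        draw = rng.choices(range(K + 1), weights=urn)[0]
        if draw == 0:
            # immigration ball: return it and add one ball of each treatment type
            for j in range(1, K + 1):
                urn[j] += 1
            continue
        i = draw - 1
        allocated += 1
        if rng.random() < p[i]:
            successes[i] += 1  # success: the ball is returned, urn unchanged
        else:
            failures[i] += 1
            urn[draw] -= 1     # failure: the drawn ball is dropped from the urn
    p_hat = [s / (s + f) if s + f > 0 else float("nan")
             for s, f in zip(successes, failures)]
    return successes, failures, p_hat

# illustrative run with two treatments, true success probabilities 0.7 and 0.5
print(drop_the_loser([0.7, 0.5], n_subjects=200))
```

Repeated runs with larger n_subjects show p̂_i concentrating around p_i, which is consistent with the central limit theorems discussed above.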
Keywords/Search Tags: response-adaptive randomization, drop-the-loser rule, urn, ball, immigration process, response, allocation, probability generating function