
Collapsibility Of Chain Graphs

Posted on: 2016-08-21    Degree: Master    Type: Thesis
Country: China    Candidate: Q Xi    Full Text: PDF
GTID: 2180330470951421    Subject: Statistics
Abstract/Summary:
Graphical models are widely used to represent and analyze causal relationships and conditional independencies among random variables. The two best-known classes of graphical models are Markov networks (undirected graphs) and Bayesian networks (directed acyclic graphs). Wermuth and Lauritzen (1990) introduced a wider class of block-recursive graphical models, the chain graph models, which contains both of the above classes. Chain graph models are most suitable when there are both symmetric associations and response-explanatory relations among the random variables, whereas undirected graph models (UG models) mainly deal with the former and directed acyclic graph models (DAG models) concentrate on the latter. Owing to this versatility, chain graph models have recently appeared more and more often in statistical applications as a modeling tool. For instance, [11] settled a financial case study on credit scoring by constructing a chain graph model, [3] utilized chain graphs to categorize proteins in bioinformatics, and [21] employed them to predict protein structures. Moreover, for hypothesis testing, model selection, and data reduction with large data sets, collapsibility of chain graphs proves essential.

Collapsibility means that we obtain the same inference result after marginalizing over some variables. Clearly, by collapsing a large set of variables onto a small subset, the reduction of variables improves the efficiency of statistical analysis, and the results can be interpreted more directly and more compactly. In general, however, collapsing over some variables may lead to different or even opposite conclusions; this phenomenon is known as the Yule-Simpson paradox [14, 22]. Depending on the condition imposed, the collapsibility of chain graph models can be divided into conditional independence collapsibility, estimate collapsibility, and model collapsibility. Conditional independence collapsibility onto a subset A of the vertex set V means that the set of conditional independences among the variables in A induced by the whole graph is the same as the set induced by the subgraph on A, that is, $\mathcal{I}_A(\mathcal{G}_V) = \mathcal{I}(\mathcal{G}_A)$. Estimate collapsibility onto V\{a} means that the marginal $\hat P(x_{V\setminus\{a\}})$, obtained by marginalizing the MLE $\hat P(x_V)$ in the chain graph model $(\mathcal{G}_V, \mathcal{F})$ over $x_a$, coincides with the MLE in the induced chain graph model, that is, $\sum_{x_a}\hat P_{\mathcal{G}_V}(x_V) = \hat P_{\mathcal{G}_{V\setminus\{a\}}}(x_{V\setminus\{a\}})$. Model collapsibility means that the set of marginal distributions over A of the probability distributions compatible with the whole graph is the same as the set of distributions compatible with the induced subgraph, that is, $\{P_A : P \in \mathcal{P}(\mathcal{G}_V)\} = \mathcal{P}(\mathcal{G}_A)$. Estimate collapsibility requires the two MLEs to be exactly equal, whereas model collapsibility only needs them to be asymptotically equal.

In this paper, we mainly discuss estimate collapsibility and conditional independence collapsibility for CG models. Unlike the case of DAG models, the collapsibility of CG models is more complicated, because of the coexistence of undirected and directed edges and the more general Markov properties of chain graphs. In Part 2, we give the notions of estimate collapsibility and c-removability in CG models and present several results clarifying the relations between them. In Part 3, we introduce conditional independence collapsibility and the corresponding removability in CGs. We briefly discuss model collapsibility for CG models in Part 4.
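As a small numerical illustration of the Yule-Simpson reversal mentioned above (the counts below are a standard textbook-style construction, not data from this thesis): within each level of a background variable Z the treatment has the higher recovery rate, yet after collapsing over Z the comparison reverses.

\[
\begin{array}{lcc}
 & \text{Treatment} & \text{Control}\\
Z=1 & 81/87 \approx 0.93 & 234/270 \approx 0.87\\
Z=2 & 192/263 \approx 0.73 & 55/80 \approx 0.69\\
\text{collapsed over } Z & 273/350 = 0.78 & 289/350 \approx 0.83
\end{array}
\]

Inference drawn from the collapsed table thus contradicts the stratified analysis, which illustrates why collapsibility conditions must be checked before marginalizing over variables.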
Keywords/Search Tags: Causal Inference, Chain Graphical Models, Collapsibility, Removability