
Semi-parametric Inference For A Class Of Model Generated From FBSDE

Posted on: 2010-12-02    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Y X Su
GTID: 1100360278474189    Subject: Probability theory and mathematical statistics
Abstract
The theory of Backward Stochastic Differential Equations (BSDEs for short) has attracted great interest in recent years, not only because of its connections with nonlinear partial differential equations and, more generally, the theory of nonlinear semigroups, but also because of its role in stochastic control problems. At the same time, in mathematical finance, the theory of hedging and pricing a contingent claim is typically expressed in terms of a linear BSDE: the dynamic value of the replicating portfolio Yt is given by a BSDE with a generator f, with Zt corresponding to the hedging portfolio. In particular, when the generator function is coupled with another stochastic process characterized by a diffusion, the equation is called a Forward Backward Stochastic Differential Equation (FBSDE for short).

Based on the form of the FBSDE, we propose a class of models generated from FBSDEs, of the form (1), where Xt is the solution of a stochastic differential equation. Without confusion, we still call the function f the generator function. The processes Yt and Zt are assumed to be connected with Xt.

Note that our model differs from the ordinary stochastic differential equation (OSDE for short): the drift term not only contains a diffusion term but is also connected with some diffusion process, which is not the case for an OSDE. In addition, our model has the same representation as a BSDE except for the terminal condition.

This dissertation focuses on semi-parametric inference for the model (1) generated from an FBSDE. We consider the estimation and hypothesis testing problems when the generator function is linear, and also under inequality constraints. For the estimation problem in our model, our procedure differs from that of nonparametric estimation (see Yang and Yang (2006) [97] and Chen and Lin (2009) [18] for details). When the generator function f has a parametric form, the resulting model is a semi-parametric model.
Although a nonparametric plug-in estimator is involved, the standard asymptotic normality of the parametric estimator is obtained, and the corresponding hypothesis testing problem then arises. We construct confidence regions for the coefficients of the linear generator function with two different tools: asymptotic normality and profile empirical likelihood (EL for short). In particular, for the profile EL method, even though a plug-in nonparametric estimator enters the estimating equation, the empirical log-likelihood ratio statistic still converges in distribution to a standard χ2 variable. For model (1) with constraints on f, we derive statistical inference results under inequality restrictions. The results obtained extend and improve existing results, and some of them are entirely new advances in statistical inference for the related fields. This dissertation consists of four chapters, whose main contents are as follows.

In Chapter one, we introduce the (Forward) Backward Stochastic Differential Equation and the proposed model. The differences between our model and the FBSDE and the OSDE are illustrated, which shows that our results are meaningful and constructive. Some fundamental estimation methods for ordinary stochastic differential equations (OSDEs) are introduced, including an overview of estimation in parametric, nonparametric and semi-parametric models and the optimal convergence rates of the estimators. We then give the definitions of stationary and mixing processes, which are necessary for the asymptotic analysis of our model.

In Chapter two, we consider the proposed model (2), where Xt is a simple Geometric Brownian motion with u and σ the unknown parameters. Our aim is to give the semi-parametric estimation and asymptotic properties of the parameter β = (c, μ) with the nonparametric plug-in estimator Ẑt.
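As a toy illustration of the forward process in model (2), the following sketch simulates a Geometric Brownian motion on a grid with step Δ by the Euler scheme. This is a minimal sketch, not the dissertation's code; the parameter values and function name are hypothetical.

```python
import math
import random

def simulate_gbm(x0, u, sigma, delta, n, seed=0):
    """Euler scheme for the GBM dX_t = u*X_t dt + sigma*X_t dW_t,
    sampled at t_i = i*delta, i = 0, ..., n-1 (illustrative sketch)."""
    rng = random.Random(seed)
    path = [x0]
    for _ in range(n - 1):
        x = path[-1]
        dw = math.sqrt(delta) * rng.gauss(0.0, 1.0)  # Brownian increment
        path.append(x + u * x * delta + sigma * x * dw)
    return path

# hypothetical parameter values, delta small as in the sampling scheme
path = simulate_gbm(x0=1.0, u=0.05, sigma=0.2, delta=0.01, n=500)
```

The discretized path plays the role of the observations {XiΔ} from which the synthetic data pairs are later formed.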
Both the nonparametric and parametric estimators are computationally feasible, and their asymptotic properties are standard in the sense of normality. Although a plug-in nonparametric estimator enters the parametric estimation, higher-order kernels, under-smoothing and bias correction are not required. Discretizing model (2) with a sampling interval Δ tending to zero, given the initial calendar time point t0, we obtain the observations, and then form n pairs of synthetic data, from which we can construct the N-W kernel estimator of Z2(x0).

Theorem 2.3.1 Let {XiΔ*, i = 0, ..., n-1} be a sequence of observations on a stationary ρ-mixing Markov process with mixing coefficient satisfying ρ(l) = ρ^l, 0 < ρ < 1. Assume that {XiΔ*, i = 0, ..., n-1} have a common bounded density function p(x). For any given x0 in the interior of the support of p(·), suppose p(x0) > 0, Z2(x0) > 0, and that p(·) and Z(·) are twice continuously differentiable in a neighborhood of x0. Under condition (A2), as n→∞ with nh→∞ and nhΔ2→0:
(a) the asymptotic bias and variance of Ẑ2(x0) are as given;
(b) if in addition nh5→0, then Ẑ2(x0) is asymptotically normal, where μ2 = ∫_{-1}^{1} u2 K(u) du and v0 = ∫_{-1}^{1} K2(u) du.

The theorem gives the bias, variance and asymptotic normality of the nonparametric estimator Ẑt. Next is the asymptotic normality of the parametric estimator β̂.

Theorem 2.3.2 In addition to the conditions of Theorem 2.3.1, under conditions (A1)-(A2) in the Appendix and nΔ→∞ as n→∞, β̂ is asymptotically normal with asymptotic variance V = E[Zt2] Σ^{-1}.

Chapter three includes two main parts. First, we consider confidence intervals based on asymptotic normality. However, the estimated variance contains several unknown quantities, which slows down the coverage accuracy of the confidence intervals. We therefore turn to another method, the profile empirical likelihood method, which avoids estimating these unknown terms.
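The N-W (Nadaraya-Watson) estimator itself is standard. A minimal sketch, using an Epanechnikov kernel supported on [-1, 1] (consistent with the integrals μ2 and v0 above), might look as follows; the synthetic responses vs stand in for the squared increments used in the dissertation, and the names are hypothetical.

```python
def epanechnikov(u):
    """Second-order kernel supported on [-1, 1]."""
    return 0.75 * (1.0 - u * u) if abs(u) < 1.0 else 0.0

def nw_estimate(x0, xs, vs, h):
    """Nadaraya-Watson estimate of E[V | X = x0] with bandwidth h:
    a locally weighted average of the responses vs."""
    weights = [epanechnikov((x - x0) / h) for x in xs]
    total = sum(weights)
    if total == 0.0:
        raise ValueError("no observations within bandwidth of x0")
    return sum(w * v for w, v in zip(weights, vs)) / total

# toy check on noise-free data V = X^2: the estimate at x0 = 1.0
# recovers 1.0 up to the O(h^2) smoothing bias
xs = [i / 100.0 for i in range(200)]
vs = [x * x for x in xs]
est = nw_estimate(1.0, xs, vs, h=0.1)
```

The bandwidth conditions of Theorem 2.3.1 (nh→∞, nh5→0) govern how h should shrink with n for this estimator.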
In both cases, we give the construction of confidence intervals and compare the two methods in terms of coverage accuracy and average length of the confidence intervals via simulations. The following two theorems concern confidence intervals based on asymptotic normality.

Theorem 3.2.1 Assume that conditions (A0)-(A3) in the Appendix hold. If μ0 is the true value of μ, then as n→∞, nhΔ2→∞, nh5→0 and nΔ→∞, the pivotal quantity has an asymptotic standard normal distribution; that is, with P(U < cα) = 1-α, where U ~ N(0,1).

Theorem 3.2.2 Assume that conditions (A0)-(A3) in the Appendix hold. If c0 is the true value of c, then as n→∞, nhΔ2→∞, nh5→0 and nΔ→∞, the pivotal quantity has an asymptotic standard normal distribution; that is, with P(U < cα) = 1-α, where U ~ N(0,1).

To avoid estimating the unknown terms in the variance, we employ the empirical likelihood introduced by Owen (1988) [69]. Many advantages of the empirical likelihood over normal-approximation-based methods have been shown in the literature. In particular, it imposes no prior constraints on the shape of the region; it does not require the construction of a pivotal quantity; and the region is range-preserving and transformation-respecting; see, for example, Hall and La Scala (1990) [44]. Applying the classical EL procedure, we obtain the following theorem for the parameter β.

Theorem 3.3.1 Assume that conditions (A0)-(A5) in the Appendix hold. If β0 is the true value of β, then the empirical log-likelihood ratio ℓ(β0) has an asymptotic χ22 distribution; that is, with P(χ22 ≤ cα) = 1-α.

Rewriting the empirical log-likelihood function ℓ(β) as ℓ(c, μ), we have the following two theorems based on the profile EL method.

Theorem 3.3.2 Assume that conditions (A0)-(A5) in the Appendix hold. If μ0 is the true value of μ, then the profile statistic has an asymptotic χ12 distribution; that is, with P(χ12 ≤ cα) = 1-α, where ĉ(μ0) is
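For intuition, here is a hedged sketch of Owen's empirical log-likelihood ratio in the simplest scalar case, with the Lagrange multiplier found by bisection; the dissertation applies the same machinery to its estimating equation with the plug-in nonparametric estimator, so this is an illustration of the mechanism only.

```python
import math

def el_log_ratio(g):
    """Empirical log-likelihood ratio statistic for the scalar
    estimating equation E[g] = 0 (Owen, 1988): solve
    sum g_i / (1 + lam*g_i) = 0 for lam by bisection, then return
    2 * sum log(1 + lam*g_i), which is asymptotically chi^2_1."""
    g_min, g_max = min(g), max(g)
    if not (g_min < 0.0 < g_max):
        raise ValueError("0 must lie inside the convex hull of g")
    eps = 1e-10
    lo = -1.0 / g_max + eps          # keep all 1 + lam*g_i > 0
    hi = -1.0 / g_min - eps

    def score(lam):
        return sum(gi / (1.0 + lam * gi) for gi in g)

    for _ in range(200):             # score is strictly decreasing in lam
        mid = 0.5 * (lo + hi)
        if score(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * sum(math.log(1.0 + lam * gi) for gi in g)
```

When the g_i are centered symmetrically around zero the multiplier is zero and the statistic vanishes; as the hypothesized value moves away from the truth the statistic grows, which is what drives the χ2 calibration of the confidence region.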
the optimal value minimizing ℓ(c, μ0) for fixed μ0.

Theorem 3.3.3 Assume that conditions (A0)-(A5) in the Appendix hold. If c0 is the true value of c, then T(c0) has an asymptotic χ12 distribution; that is, with P(χ12 ≤ cα) = 1-α, where μ̂(c) is the optimal μ minimizing ℓ(c, μ).

In Chapter four, we consider statistical inference for the proposed model under inequality constraints, which covers two cases. The first is that of a linear generator function under an inequality constraint. The constraint is based on the viability property of the BSDE and is denoted by Z(y,z)β ≤ Χ(y,z), where Z(y,z) and Χ(y,z) are functions of y and z as stated in Section 4.2. In the second case, we impose no functional form on the generator function, and the corresponding restriction is a nonlinear one.

For the first case, we propose an estimation procedure for the unknown Zt and the parameter β under inequality constraints. The problem reduces to a quadratic programming problem, so the parallel results hold.

Theorem 4.2.1 If the prior belief in the inequality constraint is correct, then there exists a sufficiently large sample size n ≥ n0 such that the inequality-constrained least squares (ICLS) estimator on such samples reduces to the unrestricted LS estimator.

In addition, if we pretend that Zt is known, we obtain the likelihood ratio test for the parameter β.

Theorem 4.2.2 Under the null hypothesis H0: Z(y,z)β ≤ Χ(y,z), the distribution of LR, the likelihood ratio statistic, has the stated property for all c > 0, with ω(·) the weights and Fm,n the F-distribution with degrees of freedom m and n.

However, in our problem Zt is unobservable. We can choose an unbiased estimator Ẑt to replace Zt, from which we obtain results similar to Theorem 4.2.2. When the generator function has no specific form, the corresponding problem is one under nonlinear inequality constraints.
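To illustrate the quadratic-programming reduction, the sketch below solves ICLS in the two-parameter case with a single linear constraint a'β ≤ b, where the KKT conditions give a closed form. This is an illustration under simplifying assumptions, not the estimator of Section 4.2; all names are hypothetical.

```python
def icls_single_constraint(xtx, xty, a, b):
    """Inequality-constrained least squares, 2-parameter case with one
    linear constraint a'beta <= b. If the unrestricted LS estimate
    already satisfies the constraint it is returned unchanged (as in
    Theorem 4.2.1); otherwise the KKT solution projects it onto the
    constraint boundary in the (X'X) metric. Illustrative sketch."""
    # invert the 2x2 matrix X'X and form the unrestricted estimator
    det = xtx[0][0] * xtx[1][1] - xtx[0][1] * xtx[1][0]
    inv = [[ xtx[1][1] / det, -xtx[0][1] / det],
           [-xtx[1][0] / det,  xtx[0][0] / det]]
    beta = [inv[0][0] * xty[0] + inv[0][1] * xty[1],
            inv[1][0] * xty[0] + inv[1][1] * xty[1]]
    slack = a[0] * beta[0] + a[1] * beta[1] - b
    if slack <= 0.0:
        return beta                  # constraint inactive: ICLS = LS
    # active constraint: beta - (X'X)^{-1} a * slack / (a'(X'X)^{-1} a)
    v = [inv[0][0] * a[0] + inv[0][1] * a[1],
         inv[1][0] * a[0] + inv[1][1] * a[1]]
    step = slack / (a[0] * v[0] + a[1] * v[1])
    return [beta[0] - step * v[0], beta[1] - step * v[1]]
```

The inactive-constraint branch is exactly the phenomenon of Theorem 4.2.1: once the data make the unconstrained estimator satisfy the (correct) constraint, the restricted and unrestricted estimators coincide.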
We combine inequality restrictions with nonparametric regression to estimate the generator function via a local polynomial estimation procedure. We implement the method in such a way that the local polynomial estimator always produces estimates satisfying the constraints. This is also possible with some other methods, but in our case it requires no modification of the estimator, only its application to suitably transformed data. Assume that the transformed data {mi}, i = 1, ..., n, result from applying the constrained least squares algorithm to the original data. Then the local linear estimator obtained from the transformed data and a log-concave kernel function satisfies the required constraints in sample.
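A minimal sketch of the local linear smoother that would be applied to such transformed data: a weighted least squares fit of y on (x - x0), returning the intercept as the fitted value at x0. The transformation step itself is as in the text; the function names here are hypothetical.

```python
def _kern(u):
    """Epanechnikov kernel, log-concave on its support [-1, 1]."""
    return 0.75 * (1.0 - u * u) if abs(u) < 1.0 else 0.0

def local_linear(x0, xs, ys, h):
    """Local linear estimate at x0: weighted LS fit of y on (x - x0)
    with kernel weights, returning the intercept (fitted value at x0).
    Applied to constraint-transformed data {m_i}, the fit inherits
    the in-sample constraints, as claimed in the text."""
    w = [_kern((x - x0) / h) for x in xs]
    s0 = sum(w)
    s1 = sum(wi * (x - x0) for wi, x in zip(w, xs))
    s2 = sum(wi * (x - x0) ** 2 for wi, x in zip(w, xs))
    t0 = sum(wi * y for wi, y in zip(w, ys))
    t1 = sum(wi * (x - x0) * y for wi, x, y in zip(w, xs, ys))
    det = s0 * s2 - s1 * s1
    if det == 0.0:
        raise ValueError("not enough distinct points in the window")
    return (s2 * t0 - s1 * t1) / det
```

A useful sanity check on any local linear implementation is that it reproduces exactly linear data with zero bias, unlike the N-W estimator.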
Keywords/Search Tags: (Forward) backward stochastic differential equation, Nonparametric estimator, Semi-parametric estimator, Asymptotic normality, Hypothesis test, Profile empirical likelihood, Asymptotic χ2 distribution, Inequality constraints, Local linear estimator