
Approximation of stochastic processes by hidden Markov models

Posted on: 1993-04-20
Degree: Ph.D.
Type: Thesis
University: Brown University
Candidate: Kehagias, Athanasios
Full Text: PDF
GTID: 2478390014996982
Subject: Mathematics
Abstract/Summary:
In this thesis we restrict ourselves to stationary, discrete-valued stochastic processes.

A pair of stochastic processes (X, Y) is a Hidden Markov Model (HMM) if X (the state process) is a Markov process and Y (the observable process) is an incomplete observation of X. The observation can be deterministic or noisy, and the observable can be a state or a state transition; hence there are four possible types of HMMs (a simulation sketch of one type follows this abstract).

First, we establish that all four types of HMMs are equivalent, in the sense that given an HMM of any type we can construct an HMM of any other type such that the two models have identical observable processes. Therefore all types of HMMs have the same modelling power.

Second, we consider the problem of Representation: what kinds of stochastic processes can we approximate with Hidden Markov Models? To make the question meaningful we define two types of stochastic process approximation: (a) weak approximation, based on the weak convergence of probability measures, and (b) cross-entropy approximation, based on the Kullback-Leibler informational divergence (its standard definition is reproduced below). We then prove that for every ergodic stochastic process there is a sequence of HMMs (of increasing size) that approximates it in both the weak and the cross-entropy sense.

Third, we consider the problem of Consistent Estimation. To approximate an ergodic process we need a sequence of HMMs of increasing size. For a Hidden Markov Model of fixed size we can use the very efficient Baum algorithm to find the Maximum Likelihood parameter estimates (its forward recursion is sketched below). But will the sequence of estimates be consistent, i.e. will it converge to the true process? The answer: the sequence of Maximum Likelihood estimates is consistent if the original process is ergodic, has strictly positive probabilities, and has conditional probabilities bounded away from zero.

Fourth, we develop HMMs of the raw speech signal and demonstrate numerically the consistency of Maximum Likelihood estimation.

Finally, we develop Hidden Gibbs Models, an analogue of HMMs, and use them to model one-dimensional speech signals and two-dimensional images.
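To make the definition above concrete, the following is a minimal simulation sketch of one of the four HMM types (noisy observation of the state) over a finite state space. All dimensions and matrix values are illustrative assumptions, not parameters from the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative parameters (not from the thesis): 3 hidden states, 2 symbols.
    A = np.array([[0.8, 0.1, 0.1],    # state transitions P(X_{t+1}=j | X_t=i)
                  [0.2, 0.6, 0.2],
                  [0.3, 0.3, 0.4]])
    B = np.array([[0.9, 0.1],         # noisy observation P(Y_t=k | X_t=i)
                  [0.5, 0.5],
                  [0.1, 0.9]])
    pi = np.array([0.5, 0.25, 0.25])  # initial state distribution

    def sample_hmm(T):
        """Sample (X, Y): X is a Markov chain, Y a noisy observation of X."""
        x = rng.choice(3, p=pi)
        xs, ys = [], []
        for _ in range(T):
            xs.append(x)
            ys.append(rng.choice(2, p=B[x]))
            x = rng.choice(3, p=A[x])
        return np.array(xs), np.array(ys)

    xs, ys = sample_hmm(100)  # an observer sees only ys, never the states xs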
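The abstract does not reproduce the formal cross-entropy criterion; the standard definition of the Kullback-Leibler informational divergence for discrete distributions, extended to stationary processes by a per-symbol limit, is given below. The per-symbol normalization is an assumption about the thesis's exact convention.

    % Kullback-Leibler divergence between discrete distributions P and Q
    D(P \| Q) = \sum_{x} P(x) \log \frac{P(x)}{Q(x)}

    % For stationary processes, compare the length-n marginals P_n, Q_n
    % and normalize per symbol (divergence rate):
    \bar{D}(P \| Q) = \lim_{n \to \infty} \frac{1}{n}
        \sum_{x_1^n} P_n(x_1^n) \log \frac{P_n(x_1^n)}{Q_n(x_1^n)}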
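The Baum algorithm mentioned in the abstract is the forward-backward (EM) procedure for Maximum Likelihood estimation of HMM parameters. The sketch below shows its core ingredient, the scaled forward recursion that computes the log-likelihood of an observation sequence; it reuses the illustrative A, B, pi from the sampler above and is the standard textbook recursion, not code from the thesis.

    def log_likelihood(ys):
        """Scaled forward recursion: log P(y_1, ..., y_T) under (pi, A, B)."""
        alpha = pi * B[:, ys[0]]           # alpha_1(i) = pi(i) * P(y_1 | X_1=i)
        c = alpha.sum()
        log_l = np.log(c)
        alpha = alpha / c                  # rescale to prevent underflow
        for y in ys[1:]:
            alpha = (alpha @ A) * B[:, y]  # one Markov step, weighted by P(y | i)
            c = alpha.sum()
            log_l += np.log(c)
            alpha = alpha / c
        return log_l

Baum's algorithm alternates this forward pass with a backward pass to re-estimate (pi, A, B); each iteration is guaranteed not to decrease the likelihood.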
Keywords/Search Tags: Process, Hidden, Models, HMM, Approximation