Nonlinear stochastic filtering refers to problems in which a stochastic process, the so-called state, is partially observed through another stochastic process, the observations, and the objective is to optimally estimate a function of the state based on causal (past and present) observations. Adopting the minimum mean squared error (MMSE) criterion, the nonlinear filtering problem in most cases results in a dynamical system on an infinite-dimensional space of probability measures, making approximate solutions imperative.
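To make the measure-valued dynamics concrete, they can be written as the standard two-step Bayes recursion; the notation below is generic and introduced here only for illustration, not taken from the work summarized on this page:

```latex
% \pi_k: conditional law of the state X_k given observations Y_1, ..., Y_k
% K: state transition kernel, g: observation likelihood
\pi_{k|k-1}(\mathrm{d}x) = \int K(x', \mathrm{d}x)\, \pi_{k-1}(\mathrm{d}x')
\quad \text{(prediction)}, \qquad
\pi_k(\mathrm{d}x) =
  \frac{g(Y_k \mid x)\, \pi_{k|k-1}(\mathrm{d}x)}
       {\int g(Y_k \mid x')\, \pi_{k|k-1}(\mathrm{d}x')}
\quad \text{(update)}
```

The MMSE estimate of any function $f$ of the state is then $\mathbb{E}[f(X_k) \mid Y_1, \ldots, Y_k] = \int f \, \mathrm{d}\pi_k$; the recursion evolves the entire conditional measure $\pi_k$, which is why the problem is infinite-dimensional.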
We have studied the fundamental problem of globally approximating general MMSE-optimal nonlinear filters in discrete time. Under a potentially non-Markovian, conditionally Gaussian problem setting, we have shown convergence of appropriately defined approximate filters to the true optimal filter in a strong and well-defined sense. In particular, convergence is compact in time and uniform on a completely characterized event of probability arbitrarily close to 1, providing a useful quantitative analog of Egorov's Theorem for the filtering problem at hand. The purpose of such a result is to enable the analysis of various approximate filtering techniques under a common, canonical framework.
In particular, this research led to a new convergence analysis of grid-based, recursive, approximate filters of Markov processes observed in conditionally Gaussian noise. More specifically, for grid-based filters built on so-called marginal state quantization, we introduced the notion of conditional regularity of stochastic kernels which, to the best of our knowledge, constitutes the most easily verifiable and relaxed condition proposed so far under which strong asymptotic optimality of the respective grid-based filters is guaranteed, in the sense briefly described above. To the best of our knowledge, no such results exist for competing global techniques for approximate filtering (for instance, particle filters), indicating a potential theoretical advantage of the grid-based approach.
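The basic structure of such a grid-based filter can be sketched as follows. This is a minimal, generic illustration assuming a scalar Markov chain already quantized to a fixed grid and observed in additive Gaussian noise; the function name and interface are hypothetical, and the sketch does not reproduce the marginal-quantization construction or conditional-regularity conditions analyzed in the work itself:

```python
import numpy as np

def grid_filter(centers, P, ys, sigma, prior):
    """Grid-based filter for a Markov chain on a fixed grid, observed in
    additive Gaussian noise: y_k = x_k + sigma * w_k, w_k ~ N(0, 1).

    centers : (N,) grid points approximating the state space
    P       : (N, N) row-stochastic transition matrix on the grid
    ys      : (T,) observation sequence
    sigma   : observation noise standard deviation
    prior   : (N,) initial probabilities on the grid
    Returns the (T, N) sequence of filtered (posterior) distributions.
    """
    pi = prior / prior.sum()
    out = []
    for y in ys:
        pred = pi @ P                                      # prediction step
        lik = np.exp(-0.5 * ((y - centers) / sigma) ** 2)  # Gaussian likelihoods
        pi = pred * lik                                    # Bayes update ...
        pi /= pi.sum()                                     # ... and normalization
        out.append(pi)
    return np.array(out)
```

Each step is a finite-dimensional surrogate of the measure-valued Bayes recursion: the transition matrix plays the role of the state kernel restricted to the grid, and normalization implements the conditioning on the new observation.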
Along the lines of the above works, several related studies have been pursued. These include, in particular, a detailed stability analysis of distributed nonlinear state estimation in Gaussian-Finite hidden Markov models, implemented via the Alternating Direction Method of Multipliers (ADMM), a well-known parallel procedure in mathematical optimization.
Our main result shows that uniform stability of the distributed filtering process depends only loglinearly on both the operation horizon and the size of the network, and only logarithmically on (the inverse of) the desired filtering consensus accuracy. Once this total loglinear bound is fulfilled, any additional consensus iterations incur a fully quantified, further exponential decay of the consensus error. Our bounds are universal, in the sense that they are independent of the structure of the Gaussian Hidden Markov Model (HMM) under consideration.
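The exponential decay of the consensus error in the number of extra iterations can be illustrated with a plain linear average-consensus step; this is a generic gossip/averaging sketch under a doubly stochastic weight matrix, not the ADMM recursion analyzed in the work itself, and the function name is hypothetical:

```python
import numpy as np

def consensus_errors(W, x0, iters):
    """Run linear average-consensus x <- W x on a network and record the
    maximum deviation from the network-wide average after each round.

    W  : (n, n) doubly stochastic weight matrix of a connected network
    x0 : (n,) initial local values (e.g., local filter statistics)
    Returns a list of consensus errors, one per iteration.
    """
    avg = x0.mean()                         # preserved by doubly stochastic W
    x = x0.copy()
    errs = []
    for _ in range(iters):
        x = W @ x                           # one round of neighbor averaging
        errs.append(np.abs(x - avg).max())  # consensus error after this round
    return errs
```

For a connected network, the error contracts geometrically at a rate governed by the second-largest eigenvalue modulus of W, so each additional round multiplies the error by roughly a constant factor below 1; reaching a target accuracy therefore requires a number of rounds only logarithmic in its inverse, consistent with the dependence described above.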
- NSF Grant CNS-1239188 (PI: Dr. Athina Petropulu)
- NSF Grant CCF-1526908 (PI: Dr. Athina Petropulu, co-PI: Dr. Wade Trappe)