
Approximate Nonlinear Filtering: Consistency and Distributed Efficiency


Nonlinear stochastic filtering refers to problems in which a stochastic process, the so-called state, is partially observed by measuring another stochastic process, the observations, and the objective is to estimate a function of the state as accurately as possible on the basis of causal observations. Adopting the Minimum Mean Squared Error (MMSE) as the standard criterion, the nonlinear filtering problem in most cases results in a dynamical system evolving in the infinite-dimensional space of measures, making the need for approximate solutions imperative.
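
In symbols, and only as a minimal sketch using generic notation (the state process X_t, the observation process Y_t, and the function of interest f are illustrative and not tied to the notation of the publications listed below), the MMSE-optimal filter is the conditional expectation

    \[
    \widehat{f}_t \;\triangleq\; \mathbb{E}\!\left[\, f(X_t) \;\middle|\; Y_1, \ldots, Y_t \,\right],
    \]

which minimizes the mean squared error \(\mathbb{E}\big[(f(X_t) - \widehat{f}_t)^2\big]\) over all estimators that are measurable functions of the causal observations Y_1, ..., Y_t; tracking this conditional expectation (or the full conditional distribution behind it) recursively is what generally leads to a measure-valued dynamical system.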

We have studied the fundamental problem of globally approximating general MMSE-optimal nonlinear filters in discrete time. Under a potentially non-Markovian, conditionally Gaussian problem setting, we have shown convergence of appropriately defined approximate filters to the true optimal filter in a strong and well-defined sense. In particular, convergence is compact in time and uniform on a completely characterized event of probability arbitrarily close to 1, providing a useful quantitative analogue of Egorov’s Theorem for the filtering problem at hand. The purpose of such a result is to enable the analysis of various approximate filtering techniques under a common, canonical framework.
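
As a purely schematic rendering of this mode of convergence (all symbols are illustrative assumptions, not the notation of the cited papers: \(\pi_t\) denotes the optimal filter, \(\pi^{(L)}_t\) its approximation at resolution L, T a finite horizon, and \(\Omega_\delta\) the characterized event), the statement has the flavor of: for every \(T < \infty\) and every \(\delta > 0\), there exists an explicitly characterized event \(\Omega_\delta\) with \(\mathcal{P}(\Omega_\delta) \ge 1 - \delta\) such that

    \[
    \sup_{\omega \,\in\, \Omega_\delta} \; \max_{0 \,\le\, t \,\le\, T} \; \big\| \pi^{(L)}_t(\omega) - \pi_t(\omega) \big\| \;\xrightarrow[L \to \infty]{}\; 0,
    \]

that is, convergence holds uniformly both over compact time intervals and over the event itself.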

In particular, this research led to a new convergence analysis of grid-based, recursive, approximate filters of Markov processes observed in conditionally Gaussian noise. More specifically, for grid-based filters relying on so-called marginal state quantization, we introduced the notion of conditional regularity of stochastic kernels which, to the best of our knowledge, constitutes the most easily verifiable and most relaxed condition proposed so far under which strong asymptotic optimality of the respective grid-based filters is guaranteed, in the sense briefly described above. Again to the best of our knowledge, no such results exist for competing global approximate filtering techniques (for instance, particle filters), indicating a potential theoretical advantage of the grid-based approach.
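
To make the recursion concrete, here is a minimal illustrative sketch of a generic grid-based (point-mass) filter for a scalar Markov process observed in additive Gaussian noise. It is an assumption-laden toy, not the marginal-quantization construction analyzed in the paper below; all names (grid_based_filter, h, sigma, and so on) and the specific model are hypothetical.

    import numpy as np

    def grid_based_filter(grid, P, y_seq, h, sigma, p0):
        """Point-mass (grid-based) approximate filter; illustrative sketch only.

        grid  : (L,) quantization points covering the (scalar) state space
        P     : (L, L) approximate transition matrix on the grid, with P[i, j]
                standing for Prob(next state near grid[j] | state near grid[i])
        y_seq : (T,) observations, modeled as y_t = h(x_t) + sigma * Gaussian noise
        h     : observation function, vectorized over the grid
        sigma : observation noise standard deviation
        p0    : (L,) initial point-mass distribution on the grid
        Returns the sequence of approximate MMSE estimates of the state.
        """
        p = p0.astype(float).copy()
        hx = h(grid)                                        # precompute h on the grid
        estimates = []
        for y in y_seq:
            p = P.T @ p                                     # prediction step on the grid
            p = p * np.exp(-0.5 * ((y - hx) / sigma) ** 2)  # Gaussian likelihood update
            p = p / p.sum()                                 # normalization (Bayes' rule)
            estimates.append(grid @ p)                      # approximate MMSE estimate
        return np.array(estimates)

For instance, a uniform grid on [-5, 5], h = np.tanh, and a transition matrix obtained by discretizing an AR(1) kernel can be plugged in directly; refining the grid is what the asymptotic optimality results above are about.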

Along the lines of the above works, several related studies have been pursued. These include, in particular, a detailed stability analysis of distributed nonlinear state estimation in Gaussian-Finite hidden Markov models, implemented via the Alternating Direction Method of Multipliers (ADMM), a well-known method for parallel and distributed optimization.
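
The sketch below conveys the general structure of such a distributed filter under illustrative assumptions: each of N nodes holds local observation log-likelihoods for a common finite-state Markov chain, runs a fixed number of consensus iterations per time step to approximate the network-wide likelihood, and then performs a local HMM filter update. Plain consensus averaging with a hypothetical doubly stochastic weight matrix W stands in here for the ADMM-based consensus analyzed in the paper below, and all names are made up for the example.

    import numpy as np

    def distributed_hmm_filter(A, obs_loglik, W, n_consensus, p0):
        """Schematic distributed filter for a finite-state HMM; illustrative only.

        A           : (S, S) transition matrix of the hidden Markov chain
        obs_loglik  : (T, N, S) local statistics; obs_loglik[t, k, s] is the
                      log-likelihood of node k's observation at time t given state s
        W           : (N, N) doubly stochastic consensus weight matrix of the network
        n_consensus : consensus iterations per time step (plain averaging here,
                      standing in for ADMM-based consensus)
        p0          : (S,) common initial distribution
        Returns the (T, N, S) array of local posterior approximations.
        """
        T, N, S = obs_loglik.shape
        posteriors = np.zeros((T, N, S))
        p_local = np.tile(p0.astype(float), (N, 1))   # each node's local posterior
        for t in range(T):
            # Consensus phase: approximate the network-wide average log-likelihood.
            z = obs_loglik[t].copy()
            for _ in range(n_consensus):
                z = W @ z
            # Filtering phase: local prediction + Bayes update with the agreed statistic.
            pred = p_local @ A                        # (N, S) predicted distributions
            unnorm = pred * np.exp(N * z)             # N * average ~ sum of local log-likelihoods
            p_local = unnorm / unnorm.sum(axis=1, keepdims=True)
            posteriors[t] = p_local
        return posteriors

With exact consensus, every node recovers the centralized filter; the stability question is how few consensus iterations per step suffice to keep all local posteriors uniformly close to it over the whole operation horizon.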

Our main result shows that uniform stability of the distributed filtering process depends only loglinearly on both the operation horizon and the size of the network, and only logarithmically on the inverse of the desired consensus error. If this total loglinear bound is met, any additional consensus iterations incur a fully quantified, further exponential decay of the consensus error. Our bounds are universal, in the sense that they are independent of the structure of the Gaussian-Finite Hidden Markov Model (HMM) under consideration.

Support:

  1. NSF Grant CNS-1239188 (PI: Dr. Athina Petropulu)
  2. NSF Grant CCF-1526908 (PI: Dr. Athina Petropulu, co-PI: Dr. Wade Trappe)

Selected Publications:

  1. D. S. Kalogerias and A. P. Petropulu, “Uniform ε-Stability of Distributed Nonlinear Filtering over DNAs: Gaussian-Finite HMMs,” IEEE Transactions on Signal and Information Processing over Networks (Special Issue on Inference and Learning over Networks), vol. 2, no. 4, pp. 461–476, December 2016.
  2. D. S. Kalogerias and A. P. Petropulu, “Grid Based Nonlinear Filtering Revisited: Recursive Estimation and Asymptotic Optimality,” IEEE Transactions on Signal Processing, vol. 64, no. 16, pp. 4244–4259, July 2016.
