Optimal Estimation in the Presence of Unknown Parameters

Abstract
An adaptive approach is presented for optimal estimation of a sampled stochastic process with finite-state unknown parameters. It is shown that, for processes with an implicit generalized Markov property, the optimal (conditional mean) state estimates can be formed from 1) a set of optimal estimates, each conditioned on a known parameter value, and 2) a set of "learning" statistics which are recursively updated. The formulation thus provides a separation technique which simplifies the optimal solution of this class of nonlinear estimation problems. Examples of the separation technique are given for prediction of a non-Gaussian Markov process with unknown parameters and for filtering the state of a Gauss-Markov process with unknown parameters. General results are given on the convergence of optimal estimation systems operating in the presence of unknown parameters. Conditions are given under which a Bayes optimal (conditional mean) adaptive estimation system will converge in performance to an optimal system which is "told" the values of the unknown parameters.
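
As a concrete illustration of the separation technique for the Gauss-Markov filtering example, the sketch below (in Python; the scalar model, parameter values, and variable names are illustrative assumptions, not taken from the paper) runs one Kalman filter per candidate value of the unknown parameter and forms the adaptive estimate as the posterior-probability-weighted sum of the per-model estimates. The posterior probabilities play the role of the recursively updated "learning" statistics.

```python
# Minimal sketch of the separation technique for a scalar Gauss-Markov process
# with an unknown transition parameter "a" restricted to a finite candidate set.
# A Kalman filter is run for each candidate; Bayes' rule updates the posterior
# probability of each candidate; the adaptive (conditional-mean) estimate is the
# probability-weighted sum of the per-candidate estimates.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model: x[k+1] = a*x[k] + w,  z[k] = x[k] + v
a_true, Q, R = 0.8, 0.1, 0.2
candidates = np.array([0.2, 0.5, 0.8])      # finite set of hypothesized parameter values
n_models = len(candidates)

x_hat = np.zeros(n_models)                  # per-model state estimates
P = np.ones(n_models)                       # per-model error variances
prob = np.full(n_models, 1.0 / n_models)    # "learning" statistics: P(a_i | z_1..z_k)

x = 0.0
for k in range(200):
    # Simulate the true process and a noisy measurement
    x = a_true * x + rng.normal(scale=np.sqrt(Q))
    z = x + rng.normal(scale=np.sqrt(R))

    for i, a in enumerate(candidates):
        # Time update under hypothesis a_i
        x_pred = a * x_hat[i]
        P_pred = a * P[i] * a + Q

        # Measurement update and likelihood of z under hypothesis a_i
        S = P_pred + R                      # innovation variance
        innov = z - x_pred
        K = P_pred / S
        x_hat[i] = x_pred + K * innov
        P[i] = (1.0 - K) * P_pred
        like = np.exp(-0.5 * innov**2 / S) / np.sqrt(2.0 * np.pi * S)

        # Recursive Bayes update of the learning statistic
        prob[i] *= like

    prob /= prob.sum()                      # normalize the posterior probabilities

# Adaptive conditional-mean estimate: weighted sum of the per-model estimates
x_adaptive = np.dot(prob, x_hat)
print("posterior over candidates:", np.round(prob, 3))
print("adaptive state estimate:", round(x_adaptive, 3))
```

As the posterior concentrates on the true candidate, the adaptive estimate approaches the estimate of the single Kalman filter that is "told" the correct parameter value, which is the sense of convergence discussed in the abstract.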
