Abstract: The hidden Markov model (HMM) provides a powerful framework for inference in time-varying environments, where the underlying state evolves according to a Markov chain. To address the optimal filtering problem in general dynamic settings, we propose the $\alpha\beta$-HMM algorithm, which simplifies the state transition model to a Markov chain with equal exit probabilities and introduces a step-size parameter to balance the influence of observational data and the model. By analyzing the algorithm's dynamics in stationary environments, we uncover a fundamental trade-off between inference accuracy and adaptation capability, highlighting how key parameters and observation quality impact performance. A comprehensive theoretical analysis of the nonlinear dynamical system governing the evolution of the log-belief ratio, along with supporting numerical experiments, demonstrates that the proposed approach effectively balances adaptability and inference performance in dynamic environments.
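The abstract does not spell out the recursion itself, but the sketch below illustrates the kind of log-belief-ratio filtering it refers to: a two-state chain with equal exit probabilities, Gaussian observations, and a step-size parameter that scales the observation's log-likelihood ratio against the model-driven prediction. The parameter names (`p_exit`, `delta`) and the specific weighting are illustrative assumptions, not the paper's exact $\alpha\beta$-HMM update.

```python
# Minimal sketch (not the paper's exact alpha-beta-HMM recursion) of filtering
# a two-state HMM in log-belief-ratio form, with a hypothetical step-size
# parameter `delta` balancing the observations against the transition model.

import numpy as np

rng = np.random.default_rng(0)

T = 500              # number of time steps
p_exit = 0.02        # equal exit probability of the two-state Markov chain
mu, sigma = 1.0, 1.5 # observation means are +/- mu, Gaussian noise std sigma
delta = 0.7          # hypothetical step size weighting data vs. model

# Simulate the hidden state (values in {-1, +1}) and noisy observations.
x = np.empty(T, dtype=int)
x[0] = 1
for t in range(1, T):
    x[t] = -x[t - 1] if rng.random() < p_exit else x[t - 1]
y = mu * x + sigma * rng.standard_normal(T)

# Filter: lam = log P(x_t = +1 | y_1..t) - log P(x_t = -1 | y_1..t).
lam = 0.0
errors = 0
for t in range(T):
    # Prediction step through the symmetric (equal exit probability) kernel.
    num = (1 - p_exit) * np.exp(lam) + p_exit
    den = p_exit * np.exp(lam) + (1 - p_exit)
    prior = np.log(num / den)
    # Observation log-likelihood ratio for y_t under the two Gaussian models.
    llr = 2 * mu * y[t] / sigma**2
    # Step-size-weighted correction (illustrative; the exact weighting in the
    # alpha-beta-HMM algorithm may differ).
    lam = prior + delta * llr
    errors += (np.sign(lam) != x[t])

print(f"MAP state error rate: {errors / T:.3f}")
```

In this sketch, `delta = 1` recovers the exact Bayesian filter for the assumed model, while smaller values downweight the observations relative to the transition model, which is one way to visualize the accuracy-versus-adaptation trade-off discussed in the abstract.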