Optimal Estimation of Nonlinear Systems

Why are linear systems so important? The answer is simple: because we can solve them. — Feynman

For linear systems, the optimal estimate can be computed quickly and exactly by the Kalman filter (which is recursive); nonlinear systems are much harder.

For nonlinear systems, we first present the exact optimal estimate (i.e. the Bayesian estimate, which in general cannot be used directly in practice) for discrete-time and continuous-time systems separately, and then introduce three practical approaches: the extended Kalman filter, the unscented Kalman filter, and the particle filter.

=== Optimal Estimation for Discrete Nonlinear Systems ===

==== Problem Formulation ====

===== State and Measurement Equations =====

Assume the state and measurement equations of the discrete nonlinear system are

\[ \begin{align} \vec{x}_{k+1} & = \vec{f}(\vec{x}_{k},\vec{u}_k,k) + \vec{w}_{k+1} \\ \vec{z}_k & = \vec{h}(\vec{x}_k,k) + \vec{v}_k \end{align} \]


Here $$\vec{x}_k$$ is the $$n$$-dimensional state vector, $$\vec{u}_k$$ the $$r$$-dimensional control vector, and $$\vec{z}_k$$ the $$m$$-dimensional measurement vector. $$\vec{w}_k,\vec{v}_k,\vec{x}_0$$ are mutually independent random variables. $$\vec{w}_k$$ and $$\vec{v}_k$$ are $$n$$- and $$m$$-dimensional noises (usually taken as zero-mean Gaussian white noise) with probability densities $$p_w(\vec{w}_k)$$ and $$p_v(\vec{v}_k)$$, respectively. The initial state $$\vec{x}_0$$ has probability density function $$p_x(\vec{x}_0)$$.

Assume that at time $$k$$ the available real-time information is $$\mathrm{Z}^*(k)=\{\mathrm{Z}_1^k,\mathrm{U}_1^{k-1}\}$$, where $$\mathrm{Z}_1^k=\{\vec{z}_1,\cdots,\vec{z}_k\}$$ and $$\mathrm{U}_1^{k-1}=\{\vec{u}_1,\cdots,\vec{u}_{k-1}\}$$.
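For concreteness, here is a minimal Python sketch that simulates one such discrete nonlinear system; the particular $$\vec{f}$$, $$\vec{h}$$, noise covariances and control input are illustrative assumptions, not taken from the text.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def f(x, u, k):
    # Hypothetical nonlinear dynamics, standing in for f(x_k, u_k, k) above.
    return np.array([x[0] + 0.1 * np.sin(x[1]) + u[0], 0.95 * x[1]])

def h(x, k):
    # Hypothetical nonlinear measurement, standing in for h(x_k, k).
    return np.array([x[0] ** 2])

Q = 0.01 * np.eye(2)   # covariance of w_k
R = 0.04 * np.eye(1)   # covariance of v_k

x = rng.multivariate_normal(np.zeros(2), np.eye(2))  # draw x_0 from p_x
xs, zs = [], []
for k in range(50):
    u = np.array([0.1])                                        # known control u_k
    x = f(x, u, k) + rng.multivariate_normal(np.zeros(2), Q)   # state equation
    z = h(x, k) + rng.multivariate_normal(np.zeros(1), R)      # measurement equation
    xs.append(x)
    zs.append(z)
</syntaxhighlight>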

===== Bayesian Estimation =====

Bayesian estimation means determining, given the real-time information $$\mathrm{Z}^*(k)$$ at time $$k$$, the conditional probability density of the state $$\vec{x}_k$$, \[ p(\vec{x}_k|\mathrm{Z}^*(k)) \]

The conditional expectation of $$\vec{x}_k$$ is its minimum-variance estimate given the real-time information, i.e. $$\newcommand{\d}{\mathrm{d}}$$ \[ \hat{\vec{x}}_{k|k} = E[\vec{x}_k|\mathrm{Z}^*(k)] = \int_{-\infty}^\infty \vec{x}_k p(\vec{x}_k|\mathrm{Z}^*(k)) \d\vec{x}_k \] and the conditional covariance matrix of the estimation error $$ \tilde{\vec{x}}_k \equiv \vec{x}_k - \hat{\vec{x}}_{k|k} $$ is

\[ \begin{align} \mathrm{P}_{k|k} = \mathrm{Cov}[\tilde{\vec{x}}_k,\tilde{\vec{x}}_k|\mathrm{Z}^*(k)] = \int_{-\infty}^\infty (\vec{x}_k - \hat{\vec{x}}_{k|k})(\vec{x}_k - \hat{\vec{x}}_{k|k})^T p(\vec{x}_k|\mathrm{Z}^*(k)) \d\vec{x}_k \end{align} \]

The problem thus reduces to computing the conditional probability density $$p(\vec{x}_k|\mathrm{Z}^*(k))$$ recursively.


==== Solution ====

First, from the state and measurement equations we have:

\[ \begin{align} &p(\vec{x}_{k+1}|\vec{x}_{k},\vec{u}_{k}) = p_w(\vec{x}_{k+1} - \vec{f}(k,\vec{x}_k,\vec{u}_k)) \\ &p(\vec{z}_{k+1}|\vec{x}_{k+1}) = p_v(\vec{z}_{k+1} - \vec{h}(k+1,\vec{x}_{k+1})) \end{align} \] At step $$k$$, once $$ \mathrm{Z}^{*}(k) $$ and $$ \vec{u}_{k} $$ are available, the conditional (predictive) densities of $$\vec{x}_{k+1},\vec{z}_{k+1}$$ follow: \[ \begin{align} &p(\vec{x}_{k+1}|\mathrm{Z}^{*}(k),\vec{u}_{k}) = \int_{-\infty}^\infty p_w(\vec{x}_{k+1} - \vec{f}(k,\vec{x}_k,\vec{u}_k)) p(\vec{x}_{k}|\mathrm{Z}^{*}(k)) \d\vec{x}_{k} \\ &p(\vec{z}_{k+1}|\mathrm{Z}^{*}(k),\vec{u}_k) = \int_{-\infty}^\infty p_v(\vec{z}_{k+1}-\vec{h}(k+1,\vec{x}_{k+1}))p(\vec{x}_{k+1}|\mathrm{Z}^{*}(k),\vec{u}_{k}) \d\vec{x}_{k+1} \end{align} \]

After the new measurement $$\vec{z}_{k+1}$$ is obtained, the posterior density of $$\vec{x}_{k+1}$$ follows: \[ \begin{align} \label{eq:BayesFilter} { \begin{aligned} p(\vec{x}_{k+1}|\mathrm{Z}^{*}(k+1)) &= \frac{p(\vec{x}_{k+1},\vec{z}_{k+1}|\mathrm{Z}^{*}(k),\vec{u}_k)} {p(\vec{z}_{k+1}|\mathrm{Z}^{*}(k),\vec{u}_k)} \\ &=\frac{p_v(\vec{z}_{k+1} - \vec{h}(k+1,\vec{x}_{k+1})) p(\vec{x}_{k+1}|\mathrm{Z}^{*}(k),\vec{u}_{k})} {\int_{-\infty}^\infty p_v(\vec{z}_{k+1}-\vec{h}(k+1,\vec{x}_{k+1}))p(\vec{x}_{k+1}|\mathrm{Z}^{*}(k),\vec{u}_{k}) \d\vec{x}_{k+1} } \end{aligned} } \end{align} \]

The expression above is known as the Bayes filter. Although it is exact and intuitive, every step requires integrals that are generally intractable and rarely admit closed-form solutions, so the recursion has little direct practical value.
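To make the recursion concrete, here is a minimal grid-based (point-mass) sketch of one prediction-update cycle for a scalar state; the functions, noise densities and grid below are illustrative assumptions, not taken from the text.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

# 1-D grid approximation of the Bayes filter recursion above.
grid = np.linspace(-5.0, 5.0, 401)
dx = grid[1] - grid[0]

f = lambda x, u: 0.9 * x + np.sin(x) + u         # hypothetical dynamics
h = lambda x: x ** 2 / 5.0                       # hypothetical measurement
p_w = lambda w: norm.pdf(w, scale=0.3)           # process-noise density p_w
p_v = lambda v: norm.pdf(v, scale=0.5)           # measurement-noise density p_v

p = norm.pdf(grid, scale=1.0)                    # prior p(x_0)
p /= p.sum() * dx

def bayes_step(p, u, z_next):
    # Prediction: p(x_{k+1}|Z*(k),u_k) = integral of p_w(x_{k+1}-f(x_k,u_k)) p(x_k|Z*(k)) dx_k
    pred = p_w(grid[:, None] - f(grid[None, :], u)) @ p * dx
    # Update: multiply by the likelihood p_v(z_{k+1}-h(x_{k+1})) and normalise
    post = p_v(z_next - h(grid)) * pred
    return post / (post.sum() * dx)

p = bayes_step(p, u=0.1, z_next=0.8)             # one filter step
x_hat = np.sum(grid * p) * dx                    # conditional-mean estimate
</syntaxhighlight>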

=== Optimal Estimation for Continuous Nonlinear Systems ===

Assume the state and measurement equations of the system are: \[ \begin{align} & \dot{\vec{x}}(t) = \vec{f}(\vec{x}(t),t)+\mathrm{g}(\vec{x}(t),t) \vec{w}(t) \\ & \vec{z}(t) = \vec{h}(\vec{x}(t),t) + \vec{v}(t) \end{align} \] where $$\vec{w}(t)$$ and $$\vec{v}(t)$$ are zero-mean Gaussian white noises, uncorrelated with the initial state $$\vec{x}(t_0)$$, i.e.

\[ \begin{align} & E[\vec{w}(t)] = E[\vec{v}(t)] = 0 \\ & \mathrm{Cov}[\vec{w}(t),\vec{w}(\tau)] = \mathrm{Q}(t)\delta(t-\tau)\\ & \mathrm{Cov}[\vec{v}(t),\vec{v}(\tau)] = \mathrm{R}(t)\delta(t-\tau) \\ & \mathrm{Cov}[\vec{w}(t),\vec{v}(\tau)] = \mathrm{Cov}[\vec{w}(t),\vec{x}(t_0)] = \mathrm{Cov}[\vec{v}(t),\vec{x}(t_0)] = 0 \end{align} \]

The state and measurement equations can also be rewritten in differential form: \[ \begin{align} \label{dynamic} & \d\vec{x}(t) = \vec{f}(\vec{x}(t),t)\d t+\mathrm{g}(\vec{x}(t),t) \d \vec{N}(t) \\ \label{measure} & \d\vec{y}(t) = \vec{h}(\vec{x}(t),t)\d t + \d\vec{M}(t) \end{align} \]

where $$\vec{N}(t),\vec{M}(t),\vec{y}(t)$$ satisfy: \[ \begin{align} & E[\d\vec{M}(t)] = 0, \quad \mathrm{Var}[\d\vec{M}(t)] = \mathrm{R}(t)\d t \\ & E[\d\vec{N}(t)] = 0, \quad \mathrm{Var}[\d\vec{N}(t)] = \mathrm{Q}(t)\d t \\ & \d\vec{y}(t) = \vec{z}(t)\d t \end{align} \]
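For intuition, the differential form can be simulated directly with an Euler-Maruyama discretization, replacing $$\d\vec{N}(t)$$ by a Gaussian increment of variance $$\mathrm{Q}\d t$$; the drift, diffusion and $$\mathrm{Q}$$ below are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

# Euler-Maruyama discretisation of dx = f(x,t) dt + g(x,t) dN(t), Var[dN] = Q dt.
f = lambda x, t: -x + np.sin(t)   # hypothetical drift
g = lambda x, t: 0.5              # hypothetical diffusion coefficient
Q = 0.2

dt, T = 1e-3, 5.0
x = 0.0
path = [x]
for i in range(int(T / dt)):
    t = i * dt
    dN = rng.normal(0.0, np.sqrt(Q * dt))   # increment with variance Q dt
    x = x + f(x, t) * dt + g(x, t) * dN
    path.append(x)
</syntaxhighlight>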

Equation \ref{dynamic} is a standard stochastic differential equation; the conditional probability density of its solution evolves according to the Fokker-Planck equation. Combining this with the information carried by the new measurements gives

\[ \begin{align} p&(\vec{x},t+\d t|\mathrm{Y}^*(t+\d t))- p(\vec{x},t|\mathrm{Y}^*(t)) \\ =& p(\vec{x},t+\d t|\mathrm{Y}^*(t+\d t))-p(\vec{x},t+\d t|\mathrm{Y}^*(t))+ p(\vec{x},t+\d t|\mathrm{Y}^*(t))- p(\vec{x},t|\mathrm{Y}^*(t)) \\ \label{FP_term} =&{\color{blue}{ - \sum_i \frac{\partial}{\partial x_i} [f_i(\vec{x},t)p(\vec{x},t|\mathrm{Y}^*(t))] \d t + \frac{1}{2} \sum_{ij} \frac{\partial^2}{\partial x_i\partial x_j}\{[\mathrm{g}(\vec{x},t)\mathrm{Q}(t)\mathrm{g}^T(\vec{x},t)]_{ij}\, p(\vec{x},t|\mathrm{Y}^*(t)) \} \d t }} \\ \label{Bayes_term} & \color{green}{+ \{\vec{h}(\vec{x},t) - E[\vec{h}(\vec{x},t) | \mathrm{Y}^*(t)] \}^T \mathrm{R}^{-1}(t)\{\d\vec{y}(t) - E[\vec{h}(\vec{x},t) | \mathrm{Y}^*(t)]\d t \}\, p(\vec{x},t|\mathrm{Y}^*(t)) } \end{align} \]

The term \ref{FP_term} corresponds exactly to the Fokker-Planck equation, while the term \ref{Bayes_term} is contributed by the newly observed information $$ \d\vec{y}(t) $$; it can be obtained from the Bayes rule for conditional probabilities in the limit $$ \d t\to0 $$. A proof of this term is given in (Kushner H J, 1964).

If $$p(\vec{x}(t),t|\mathrm{Y}^*(t))$$ can be obtained from the equation above, the minimum-variance estimate of the state $$\vec{x}(t)$$, namely its conditional mean given $$\mathrm{Y}^*(t)$$, follows immediately.

=== Extended Kalman Filter (EKF) ===

==== Continuous-Discrete Systems ====

In practical problems the system dynamics are usually continuous in time while the measurements are taken at discrete times, i.e. \[ \begin{align} \dot{\vec{x}}(t) = \vec{f}(\vec{x}(t),t) + \vec{w}(t) \\ \vec{z}_k = \vec{h}_k(\vec{x}(t_k)) + \vec{v}_k \end{align} \]

Because of the nonlinearity of the equations, the conditional probability density is very complicated, and computing $$p(x,t)$$ is usually impractical. To obtain a usable estimation algorithm, one should bypass $$p(x,t)$$ and compute the conditional mean and error covariance of $$\vec{x}(t)$$ directly. The extended Kalman filter does this by Taylor-expanding $$\vec{f},\vec{h}$$ about suitably chosen points.

First expand $$ \vec{f}(\vec{x}(t),t) $$ in a Taylor series about the current conditional mean $$\hat{\vec{x}}(t)$$: \[ \begin{align} \vec{f}(\vec{x},t) = \vec{f}(\hat{\vec{x}},t) +\frac{\partial f}{\partial x}\bigg|_{\hat{\vec{x}}}(\vec{x} - \hat{\vec{x}})+ \cdots \end{align} \] Write $$ \mathrm{F}(\hat{\vec{x}},t) = \frac{\partial \vec{f}(\vec{x},t)}{\partial \vec{x}(t)}\bigg|_{\hat{\vec{x}}}$$, with elements $$ \mathrm{F}(\hat{\vec{x}},t)_{ij} = \frac{\partial f_i(\vec{x},t)}{\partial x_j(t)}\bigg|_{\hat{\vec{x}}}$$

The extended Kalman filter keeps the Taylor expansion to first order (retaining higher-order terms leads to more accurate higher-order filters, e.g. second-order filters). Thus, for $$t\in[t_{k-1},t_k)$$, i.e. during the propagation stage between two measurements,

\[ \begin{align} \dot{\hat{\vec{x}}}(t) &= \vec{f}(\hat{\vec{x}},t),\quad t\in[t_{k-1},t_k) \\ \mathrm{P}(t) &\equiv E\{[\hat{\vec{x}}(t)-\vec{x}(t)][\hat{\vec{x}}(t)-\vec{x}(t)]^T\}\\ \dot{\mathrm{P}}(t) &= \mathrm{F}(\hat{\vec{x}},t)\mathrm{P}(t) + \mathrm{P}(t)\mathrm{F}^T(\hat{\vec{x}},t) + \mathrm{Q}(t), \quad t\in[t_{k-1},t_k) \end{align} \]
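Between measurements the two ODEs above can simply be integrated numerically. Below is a minimal Euler-integration sketch in Python; the names `f`, `F_jac`, `Q` and the step size are user-supplied assumptions, not a fixed interface.

<syntaxhighlight lang="python">
import numpy as np

def ekf_time_update(x_hat, P, f, F_jac, Q, dt, n_steps):
    """Euler integration of the mean and covariance ODEs between t_{k-1} and t_k.

    f(x, t) is the drift, F_jac(x, t) its Jacobian, Q the process-noise intensity
    (all supplied by the user of this hypothetical helper)."""
    t = 0.0
    for _ in range(n_steps):
        F = F_jac(x_hat, t)                       # F(x_hat, t) evaluated before the mean moves
        x_hat = x_hat + f(x_hat, t) * dt          # d x_hat/dt = f(x_hat, t)
        P = P + (F @ P + P @ F.T + Q) * dt        # dP/dt = F P + P F^T + Q
        t += dt
    return x_hat, P
</syntaxhighlight>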

We now set up the correction applied when a measurement arrives. Require the updated estimate to be a linear unbiased function of the prediction and the new measurement, i.e. $$\hat{\vec{x}}_{k|k} = \hat{\vec{x}}_{k|k-1} + \mathrm{K}_{k}[\vec{z}_k - \hat{\vec{h}}_k(\vec{x}_k)] $$, where $$\mathrm{K}_k$$ is the gain matrix and $$\hat{\vec{h}}_k(\vec{x}_k)$$ denotes the conditional expectation of $$\vec{h}_k(\vec{x}_k)$$ given the measurements up to $$k-1$$. We choose $$\mathrm{K}_k$$ so that the estimation-error covariance matrix $$\mathrm{P}_{k|k}$$ is minimized. First define \[ \begin{align} \mathrm{P}_{k|k} &= E[\tilde{\vec{x}}_{k|k}\tilde{\vec{x}}_{k|k}^T]\\ \mathrm{P}_{k|k-1} &= E[\tilde{\vec{x}}_{k|k-1}\tilde{\vec{x}}_{k|k-1}^T]\\ \mathrm{R}_k &= E[\vec{v}_k\vec{v}_k^T] \end{align} \]

which gives \[ \begin{align} \mathrm{P}_{k|k} = & \mathrm{P}_{k|k-1} + \mathrm{K}_k E\{[\vec{h}_k(\vec{x}_k)-\hat{\vec{h}}_k(\vec{x}_k)][\vec{h}_k(\vec{x}_k)-\hat{\vec{h}}_k(\vec{x}_k)]^T\}\mathrm{K}_k^T\\ & - E\{\tilde{\vec{x}}_{k|k-1}[\vec{h}_k(\vec{x}_k)-\hat{\vec{h}}_k(\vec{x}_k)]^T\}\mathrm{K}_k^T \\ & - \mathrm{K}_k E\{[\vec{h}_k(\vec{x}_k)-\hat{\vec{h}}_k(\vec{x}_k)]\tilde{\vec{x}}_{k|k-1}^T\} \\ & + \mathrm{K}_k \mathrm{R}_k \mathrm{K}_k^T \end{align} \]

Choosing $$\mathrm{K}_k$$ to minimize $$Tr[\mathrm{P}_{k|k}]$$, i.e. setting $$ \frac{\partial Tr[\mathrm{P}_{k|k}] }{\partial \mathrm{K}_k } =0 $$, gives:

\[ \begin{align} \mathrm{K}_k = & E\{\tilde{\vec{x}}_{k|k-1}[\vec{h}_k(\vec{x}_k)-\hat{\vec{h}}_k(\vec{x}_k)]^T\}\\ & \times \{E\{[\vec{h}_k(\vec{x}_k)-\hat{\vec{h}}_k(\vec{x}_k)][\vec{h}_k(\vec{x}_k)-\hat{\vec{h}}_k(\vec{x}_k)]^T\} + \mathrm{R}_k \}^{-1} \end{align} \]


Note: \[ \begin{align} &\frac{\partial }{\partial \mathrm{A}} Tr(\mathrm{AB}) = \mathrm{B}^T \\ &\frac{\partial }{\partial \mathrm{A}} Tr(\mathrm{BA^T}) = \mathrm{B} \\ &\frac{\partial }{\partial \mathrm{A}} Tr(\mathrm{ABA^T}) = 2\mathrm{AB} \quad (\mathrm{B}\ \text{symmetric}) \end{align} \]
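Applying these identities with the shorthand $$\mathrm{C} \equiv E\{\tilde{\vec{x}}_{k|k-1}[\vec{h}_k(\vec{x}_k)-\hat{\vec{h}}_k(\vec{x}_k)]^T\}$$ and $$\mathrm{S} \equiv E\{[\vec{h}_k(\vec{x}_k)-\hat{\vec{h}}_k(\vec{x}_k)][\vec{h}_k(\vec{x}_k)-\hat{\vec{h}}_k(\vec{x}_k)]^T\}$$, the minimization becomes a one-line calculation:

\[ \begin{align} Tr[\mathrm{P}_{k|k}] &= Tr[\mathrm{P}_{k|k-1}] - 2Tr[\mathrm{C}\mathrm{K}_k^T] + Tr[\mathrm{K}_k(\mathrm{S}+\mathrm{R}_k)\mathrm{K}_k^T] \\ \frac{\partial Tr[\mathrm{P}_{k|k}]}{\partial \mathrm{K}_k} &= -2\mathrm{C} + 2\mathrm{K}_k(\mathrm{S}+\mathrm{R}_k) = 0 \quad\Rightarrow\quad \mathrm{K}_k = \mathrm{C}(\mathrm{S}+\mathrm{R}_k)^{-1} \end{align} \]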


Hence \[ \begin{align} \mathrm{P}_{k|k} = \mathrm{P}_{k|k-1} - \mathrm{K}_k E\{[\vec{h}_k(\vec{x}_k)-\hat{\vec{h}}_k(\vec{x}_k)]\tilde{\vec{x}}_{k|k-1}^T\} \end{align} \]


The equations above constitute the measurement-update algorithm. However, evaluating $$\hat{\vec{h}}_k(\vec{x}_k)$$ requires the probability density of $$\vec{x}_k$$, so they are not directly usable. We therefore Taylor-expand $$\vec{h}_k(\vec{x}_k)$$ about $$\hat{\vec{x}}_{k|k-1}$$: \[ \begin{align} \vec{h}_k(\vec{x}_k) = \vec{h}_k(\hat{\vec{x}}_{k|k-1}) +\frac{\partial \vec{h}_k(\vec{x})}{\partial \vec{x}}\bigg|_{\hat{\vec{x}}_{k|k-1}}(\vec{x}_k - \hat{\vec{x}}_{k|k-1})+ \cdots \end{align} \] and write \[ \begin{align} \mathrm{H}_k(\hat{\vec{x}}_{k|k-1}) \equiv \frac{\partial \vec{h}_k(\vec{x})}{\partial \vec{x}}\bigg|_{\hat{\vec{x}}_{k|k-1}} \end{align} \]

Keeping the expansion to first order yields the measurement-update (correction) equations of the extended Kalman filter:

\[ \begin{align} \hat{\vec{x}}_{k|k} & = \hat{\vec{x}}_{k|k-1} + \mathrm{K}_{k}[\vec{z}_k - \vec{h}_k(\hat{\vec{x}}_{k|k-1})] \\ \mathrm{K}_k &= \mathrm{P}_{k|k-1} \mathrm{H}_k^T(\hat{\vec{x}}_{k|k-1}) [\mathrm{H}_k(\hat{\vec{x}}_{k|k-1}) \mathrm{P}_{k|k-1} \mathrm{H}_k^T(\hat{\vec{x}}_{k|k-1}) +\mathrm{R}_k ]^{-1} \\ \mathrm{P}_{k|k} &= [\mathrm{I} - \mathrm{K}_k \mathrm{H}_k(\hat{\vec{x}}_{k|k-1}) ]\mathrm{P}_{k|k-1} \end{align} \]
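A compact Python sketch of this correction step; the names `h`, `H_jac` and the call signature are illustrative choices, not a prescribed API.

<syntaxhighlight lang="python">
import numpy as np

def ekf_measurement_update(x_pred, P_pred, z, h, H_jac, R):
    """First-order EKF correction at a measurement time t_k.

    h(x) and its Jacobian H_jac(x) play the roles of h_k and H_k above."""
    H = H_jac(x_pred)                                   # H_k evaluated at x_hat_{k|k-1}
    S = H @ P_pred @ H.T + R                            # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)                 # K_k = P H^T (H P H^T + R)^{-1}
    x_upd = x_pred + K @ (z - h(x_pred))                # x_hat_{k|k}
    P_upd = (np.eye(len(x_pred)) - K @ H) @ P_pred      # P_{k|k} = (I - K H) P_{k|k-1}
    return x_upd, P_upd
</syntaxhighlight>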


=== Unscented Kalman Filter (UKF) ===

==== Overview ====

The unscented Kalman filter assumes that the state distribution is Gaussian and approximates the mean and covariance of the state after a nonlinear transformation using a set of deterministically chosen sample points; this is known as the unscented transform.

The unscented Kalman filter is formulated for discrete-time systems. Consider the following state and measurement equations:

\[ \begin{align} \vec{x} (k+1) &= \vec{f}[\vec{x}(k)] + \vec{w}(k) \\ \vec{z} (k) &= \vec{h}[\vec{x}(k)] + \vec{v}(k) \end{align} \]

==== Algorithm ====

The algorithm proceeds roughly as follows (a Python sketch of one complete cycle is given after the list):

  1. Set the initial values $$\hat{\vec{x}}(0),\mathrm{P}(0)$$; then for each $$k\ge 1$$ carry out the following steps:
  2. Construct $$2n+1$$ sigma points and the corresponding weights around $$\hat{\vec{x}}(k-1)$$:\[ \begin{align} \vec{\chi}(k-1) &= \{ \hat{\vec{x}}(k-1),\hat{\vec{x}}(k-1)+[\alpha\sqrt{(n+\kappa)\mathrm{P}_x(k-1)}]_i, \hat{\vec{x}}(k-1)-[\alpha\sqrt{(n+\kappa)\mathrm{P}_x(k-1)}]_i \}(i=1,2,\cdots,n) \\ W_i^{(m)} &= W_i^{(c)} = \left\{ \begin{aligned}&1-n/(\alpha^2(n+\kappa)),i=0\\ &1/[2\alpha^2(n+\kappa)],i=1,\cdots,2n \end{aligned} \right. \end{align} \]where $$\alpha\in (0,1]$$ is a scaling factor and $$\kappa$$ is a tuning parameter; for Gaussian distributions, $$\kappa=3-n$$ when the state is multivariate and $$\kappa=2$$ when it is scalar; $$[\alpha\sqrt{(n+\kappa)\mathrm{P}_x(k-1)}]_i$$ denotes the $$i$$-th column of the matrix $$\alpha\sqrt{(n+\kappa)\mathrm{P}_x(k-1)}$$
  3. Propagate the sigma points through the dynamics\[ \begin{align} \vec{\chi}_i^-(k) = \vec{f}[\vec{\chi}_i(k-1)](i=0,1,2,\cdots,2n) \end{align} \]
  4. Compute the predicted mean and covariance from the sigma points\[ \begin{align} &\hat{\vec{x}}^-(k) = \sum_{i=0}^{2n} W_i^{(m)} \vec{\chi}_i^-(k) \\ &\mathrm{P}_x^-(k) = \sum_{i=0}^{2n} W_i^{(c)} [\vec{\chi}_i^-(k) - \hat{\vec{x}}^-(k)][\vec{\chi}_i^-(k) - \hat{\vec{x}}^-(k)]^T + \mathrm{Q}(k) \end{align} \]
  5. Measurement update\[ \begin{align} &\hat{\vec{x}}(k) = \hat{\vec{x}}^-(k) + \mathrm{K}[\vec{z}(k)- \hat{\vec{z}}^-(k)] \\ &\mathrm{P}_x(k) = \mathrm{P}_x^-(k) -\mathrm{KP}_z(k)\mathrm{K^T} \\ &\mathrm{K} = \mathrm{P}_{xz}(k)\mathrm{P}_{z}^{-1}(k) \end{align} \]where\[ \begin{align} &\hat{\vec{z}}^-(k) = \sum_{i=0}^{2n} W_i^{(m)}\vec{h}[\vec{\chi}_i^-(k)] \\ &\mathrm{P}_z(k) = \sum_{i=0}^{2n} W_i^{(c)}\{\vec{h}[\vec{\chi}_i^-(k)] - \hat{\vec{z}}^-(k) \} \{\vec{h}[\vec{\chi}_i^-(k)] - \hat{\vec{z}}^-(k) \}^T + \mathrm{R}(k) \\ &\mathrm{P}_{xz}(k) = \sum_{i=0}^{2n} W_i^{(c)}\{\vec{\chi}_i^-(k) - \hat{\vec{x}}^-(k) \} \{\vec{h}[\vec{\chi}_i^-(k)] - \hat{\vec{z}}^-(k) \}^T \end{align} \]
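The following Python sketch implements one cycle of steps 2-5 under the weight convention above. The function names, the use of a symmetric matrix square root (instead of, say, a Cholesky factor), and the default $$\kappa = 3-n$$ are implementation assumptions, not requirements of the algorithm.

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import sqrtm

def ukf_step(x_hat, P, z, f, h, Q, R, alpha=1.0, kappa=None):
    """One UKF cycle (steps 2-5 above); f, h, Q, R are problem-specific inputs."""
    n = len(x_hat)
    if kappa is None:
        kappa = 3.0 - n                                # heuristic for multivariate Gaussian priors
    c = alpha ** 2 * (n + kappa)

    # Step 2: sigma points and weights around the previous estimate
    S = alpha * np.real(sqrtm((n + kappa) * P))        # columns of alpha*sqrt((n+kappa) P)
    chi = np.column_stack([x_hat, x_hat[:, None] + S, x_hat[:, None] - S])
    Wm = np.full(2 * n + 1, 1.0 / (2.0 * c))
    Wm[0] = 1.0 - n / c
    Wc = Wm.copy()

    # Step 3: propagate sigma points through the dynamics
    chi_pred = np.column_stack([f(chi[:, i]) for i in range(2 * n + 1)])

    # Step 4: predicted mean and covariance
    x_pred = chi_pred @ Wm
    dX = chi_pred - x_pred[:, None]
    P_pred = dX @ np.diag(Wc) @ dX.T + Q

    # Step 5: measurement update
    Z_sig = np.column_stack([h(chi_pred[:, i]) for i in range(2 * n + 1)])
    z_pred = Z_sig @ Wm
    dZ = Z_sig - z_pred[:, None]
    P_zz = dZ @ np.diag(Wc) @ dZ.T + R                 # P_z(k)
    P_xz = dX @ np.diag(Wc) @ dZ.T                     # P_xz(k)
    K = P_xz @ np.linalg.inv(P_zz)
    x_new = x_pred + K @ (z - z_pred)
    P_new = P_pred - K @ P_zz @ K.T
    return x_new, P_new
</syntaxhighlight>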


In summary, compared with the EKF, the UKF has the following advantages:

  1. No derivative (Jacobian) matrices of the nonlinear functions need to be computed
  2. It can handle non-differentiable nonlinear functions
  3. Its computational cost is comparable to that of the EKF
  4. For a nonlinear function with Gaussian inputs, the UKF captures the mean accurately to third order and the covariance to second order

=== Particle Filter (PF) ===

Bayesian estimation requires integrals over the entire state space, which is essentially infeasible for nonlinear systems. The EKF is simple to implement, but its accuracy degrades when the nonlinearity is strong; the UKF improves on the EKF in accuracy and robustness, but still relies on the assumption that the state is Gaussian. Many practical systems are strongly nonlinear and their states need not be Gaussian, i.e. the problem is nonlinear and non-Gaussian. For such cases, particle filtering (PF) was developed from Monte Carlo sampling and Bayesian optimal estimation theory: it approximates the posterior density of the state by a set of weighted random samples (particles), thereby approaching the optimal Bayesian estimate and handling estimation problems for arbitrary nonlinear, non-Gaussian systems.


Below we start from Bayesian importance sampling, then introduce sequential importance sampling and resampling, and finally arrive at the sampling importance resampling method.

==== Bayesian Importance Sampling ====

Bayesian importance sampling is a common sampling technique in Monte Carlo methods. The idea is that, if the target probability distribution $$p(t)$$ is unknown or difficult to sample from, one may instead draw samples from a similar, easy-to-sample distribution $$\pi(t)$$.

Consider the following general nonlinear system \[ \begin{align} \vec{x}_{k+1} &= \vec{f}(\vec{x}_k,\vec{w}_k) \\ \vec{z}_k &= \vec{h}(\vec{x}_k,\vec{v}_k) \end{align} \]

Let $$\vec{x}_1^k=[\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_k]$$ denote the collection of all state vectors from time $$1$$ to $$k$$ and $$\vec{z}_1^k=[\vec{z}_1,\vec{z}_2,\cdots,\vec{z}_k]$$ the collection of all measurement vectors from time $$1$$ to $$k$$. After $$\vec{z}_1^k$$ is obtained, the optimal estimate of any function $$\vec{f}(\vec{x}_k)$$ of the state is its conditional expectation

\[ \begin{align} E[\vec{f}(\vec{x}_k)|\vec{z}_1^k] = \int\vec{f}(\vec{x}_k)p[\vec{x}_k|\vec{z}_1^k] \d \vec{x}_k \end{align} \]

In general $$p[\vec{x}_k|\vec{z}_1^k]$$ is a non-Gaussian multivariate density that is difficult to sample from directly; we can instead sample from an approximating distribution $$\pi[\vec{x}_k|\vec{z}_1^k]$$, which gives

\[ \begin{align} E[\vec{f}(\vec{x}_k)|\vec{z}_1^k] &= \int\vec{f}(\vec{x}_k)p[\vec{x}_k|\vec{z}_1^k] \d \vec{x}_k \\ &= \int\vec{f}(\vec{x}_k)\frac{p[\vec{x}_k|\vec{z}_1^k]}{\pi[\vec{x}_k|\vec{z}_1^k]} \pi[\vec{x}_k|\vec{z}_1^k] \d \vec{x}_k \\ &= \frac{1}{p[\vec{z}_1^k]}\int\vec{f}(\vec{x}_k) \frac{p[\vec{x}_k,\vec{z}_1^k]}{\pi[\vec{x}_k|\vec{z}_1^k]} \pi[\vec{x}_k|\vec{z}_1^k] \d \vec{x}_k \\ &= \frac{1}{p[\vec{z}_1^k]}\int\vec{f}(\vec{x}_k) w(\vec{x}_k) \pi[\vec{x}_k|\vec{z}_1^k] \d \vec{x}_k \\ &= \frac{\int\vec{f}(\vec{x}_k)w(\vec{x}_k) \pi[\vec{x}_k|\vec{z}_1^k] \d \vec{x}_k } {\int w(\vec{x}_k) \pi[\vec{x}_k|\vec{z}_1^k] \d \vec{x}_k } \\ &= \frac{E_{\pi[\vec{x}_k|\vec{z}_1^k]}[\vec{f}(\vec{x}_k) w(\vec{x}_k)]} {E_{\pi[\vec{x}_k|\vec{z}_1^k]}[ w(\vec{x}_k)] } \\ \end{align} \] where $$w(\vec{x}_k)$$ is called the importance weight, \[ \begin{align} w(\vec{x}_k) \equiv \frac{p[\vec{x}_k,\vec{z}_1^k]}{\pi[\vec{x}_k|\vec{z}_1^k]} = \frac{p[\vec{z}_1^k|\vec{x}_k]p[\vec{x}_k]}{\pi[\vec{x}_k|\vec{z}_1^k]} = \frac{p[\vec{x}_k|\vec{z}_1^k]p[\vec{z}_1^k]}{\pi[\vec{x}_k|\vec{z}_1^k]} \end{align} \]

The denominator $$ E_{\pi[\vec{x}_k|\vec{z}_1^k]}[ w(\vec{x}_k)] $$ can be viewed as a normalization of the weights.
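A minimal numerical sketch of this self-normalized importance-sampling estimate for a one-dimensional toy problem; the target, the proposal and the function $$\vec{f}$$ are illustrative assumptions, not taken from the text.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Self-normalised importance sampling for E[f(x)|z]:
# illustrative target p(x|z) proportional to N(x; 1, 0.5^2), proposal pi(x|z) = N(x; 0, 1).
def f(x):
    return x                                              # estimate the posterior mean

target_unnorm = lambda x: norm.pdf(x, loc=1.0, scale=0.5) # stands in for p[x_k, z_1^k] up to a constant
xs = rng.normal(0.0, 1.0, size=5000)                      # draw from the proposal pi
w = target_unnorm(xs) / norm.pdf(xs, loc=0.0, scale=1.0)  # unnormalised weights w(x_k)
estimate = np.sum(w * f(xs)) / np.sum(w)                  # E_pi[f w] / E_pi[w]
</syntaxhighlight>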

==== Sequential Importance Sampling (SIS) ====

We require the importance function to factorize sequentially, i.e.

\[ \begin{align} \pi[\vec{x}_1^k|\vec{z}_1^n] = \pi[\vec{x}_k|\vec{x}_1^{k-1},\vec{z}_1^n]\, \pi[\vec{x}_1^{k-1}|\vec{z}_1^n] \end{align} \]


Consider $$ w(\vec{x}_1^k)=\frac{p[\vec{x}_1^k|\vec{z}_1^n]}{\pi[\vec{x}_1^k|\vec{z}_1^n]}$$; then

\[ \begin{align} w(\vec{x}_1^k) &=\frac{p[\vec{x}_1^k|\vec{z}_1^n]}{\pi[\vec{x}_1^k|\vec{z}_1^n]} = \frac{p[\vec{x}_1^k|\vec{z}_1^n]} {\pi[\vec{x}_k|\vec{x}_1^{k-1},\vec{z}_1^n]\, \pi[\vec{x}_1^{k-1}|\vec{z}_1^n]} \\ & = \frac{p[\vec{x}_1^{k-1}|\vec{z}_1^n]}{\pi[\vec{x}_1^{k-1}|\vec{z}_1^n]} \cdot \frac{p[\vec{x}_1^k|\vec{z}_1^n]} {\pi[\vec{x}_k|\vec{x}_1^{k-1},\vec{z}_1^n]\, p[\vec{x}_1^{k-1}|\vec{z}_1^n]} \\ & = w(\vec{x}_1^{k-1})\, \frac{p[\vec{x}_1^k|\vec{z}_1^n] } {\pi[\vec{x}_k|\vec{x}_1^{k-1},\vec{z}_1^n]\, p[\vec{x}_1^{k-1}|\vec{z}_1^n]} \\ & = w(\vec{x}_1^{k-1})\, \frac{p[\vec{x}_k|\vec{x}_1^{k-1},\vec{z}_1^n] } {\pi[\vec{x}_k|\vec{x}_1^{k-1},\vec{z}_1^n]} \end{align} \]
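This recursion updates each particle's weight from its previous weight and the ratio of the transition density to the proposal. Below is a minimal one-step sketch that assumes the common bootstrap choice, i.e. the transition prior $$p[\vec{x}_k|\vec{x}_{k-1}]$$ as the importance function, so the incremental weight reduces to the measurement likelihood; the scalar model and noise levels are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# One SIS step with the bootstrap proposal pi(x_k | x_{k-1}, z) = p(x_k | x_{k-1}),
# so the weight update reduces to w_k = w_{k-1} * p(z_k | x_k).
f = lambda x: 0.9 * x + np.sin(x)                # hypothetical dynamics
h = lambda x: x ** 2 / 5.0                       # hypothetical measurement
sigma_w, sigma_v = 0.3, 0.5                      # process / measurement noise std

N = 1000
particles = rng.normal(0.0, 1.0, size=N)         # samples of x_{k-1}
weights = np.full(N, 1.0 / N)                    # normalised weights w(x_1^{k-1})

def sis_step(particles, weights, z_k):
    particles = f(particles) + rng.normal(0.0, sigma_w, size=len(particles))  # sample from p(x_k|x_{k-1})
    weights = weights * norm.pdf(z_k - h(particles), scale=sigma_v)           # multiply by the likelihood
    return particles, weights / np.sum(weights)                               # renormalise

particles, weights = sis_step(particles, weights, z_k=0.4)
x_hat = np.sum(weights * particles)              # weighted-mean estimate of x_k
</syntaxhighlight>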

==== Resampling ====

==== Sampling Importance Resampling (SIR) ====

===== Algorithm =====