Derived from an idealized, perfect-information differential game, the acclaimed DGL1 guidance law was shown to guarantee hit-to-kill performance in the linearized, deterministic case, for an interceptor with superior maneuverability and agility. In real-life scenarios, where the perfect-information assumption never holds, this advanced guidance law must be augmented with a separately designed estimator that supplies an estimate of the missing information. However, the inherent estimation error leads to erroneous decisions by the guidance law, resulting in severe degradation of the interception performance.
To alleviate this performance degradation, we use Bayesian decision theory to make optimal decisions on the game's state: decisions that properly account for the uncertainty induced by the estimation error while also weighing the cost (the final miss distance) associated with each possible decision. We then modify DGL1 to use these optimal decisions, which yields a new, estimation-aware guidance law.
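For concreteness, the Bayes decision rule alluded to above can be sketched with illustrative (not the paper's) notation: let x denote the game state, Z the measurement history, p(x | Z) the posterior density, and C(d, x) the cost (miss distance) incurred by acting on decision d when the true state is x. The optimal decision minimizes the posterior expected cost,

\[ \hat{d} \;=\; \arg\min_{d} \int C(d,x)\, p(x \mid Z)\, \mathrm{d}x , \]

in contrast with the certainty-equivalence practice of simply substituting a point estimate of x into the guidance law, which ignores the cost of possible estimation errors.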
Implementing the Bayesian decision rule requires knowledge of the game's posterior probability density function. To provide this density we employ an interacting multiple-model particle filter, which can handle nonlinear, non-Gaussian, and even non-Markovian mode-switching problems. The target's acceleration-command mode is governed by a non-homogeneous Markov model, to cope with sophisticated targets that optimally time their evasive maneuvers. The performance of the new guidance law is demonstrated in an extensive Monte Carlo simulation study, in which it is compared with the classical DGL1.
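As a rough illustration of the estimation machinery described above, the following is a minimal Python sketch of one cycle of a multiple-model particle filter whose mode transitions follow a non-homogeneous (time-varying) Markov chain. The kinematic model, noise levels, and the transition_matrix, propagate, and pf_step functions are illustrative assumptions only, and the IMM-specific mode-interaction step is omitted for brevity.

```python
# Minimal sketch (not the paper's implementation) of one cycle of a
# multiple-model particle filter with a non-homogeneous (time-varying)
# Markov chain governing the target's acceleration-command mode.
# All model forms, dimensions, and noise levels below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

N_PARTICLES = 1000
N_MODES = 2          # e.g. mode 0: no evasion, mode 1: max-lateral evasion
DT = 0.01            # integration step [s]

def transition_matrix(t, t_go):
    """Time-varying mode transition probabilities: the probability of
    switching to the evasive mode grows as time-to-go shrinks, mimicking
    a sophisticated target that times its maneuver (illustrative form)."""
    p_switch = min(0.9, 0.05 + 0.5 * np.exp(-t_go))
    return np.array([[1.0 - p_switch, p_switch],
                     [0.05,           0.95]])

def propagate(state, mode):
    """Propagate relative kinematics one step; a placeholder
    double-integrator model with mode-dependent target acceleration."""
    y, v = state
    a_target = 0.0 if mode == 0 else 100.0      # [m/s^2], illustrative
    y_new = y + v * DT
    v_new = v + a_target * DT + rng.normal(0.0, 1.0)   # process noise
    return np.array([y_new, v_new])

def pf_step(particles, modes, weights, z, t, t_go, meas_std=5.0):
    """One predict/update cycle: sample each particle's mode from the
    non-homogeneous chain, propagate its state, and reweight by the
    likelihood of the (assumed) relative-position measurement z."""
    Pi = transition_matrix(t, t_go)
    for i in range(len(weights)):
        modes[i] = rng.choice(N_MODES, p=Pi[modes[i]])
        particles[i] = propagate(particles[i], modes[i])
    # Gaussian measurement likelihood on relative position (assumption).
    lik = np.exp(-0.5 * ((z - particles[:, 0]) / meas_std) ** 2)
    weights = weights * lik
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles, modes = particles[idx], modes[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, modes, weights

# Illustrative initialization around a prior guess of the relative state.
particles = rng.normal([0.0, 0.0], [50.0, 10.0], size=(N_PARTICLES, 2))
modes = np.zeros(N_PARTICLES, dtype=int)
weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)
```

The weighted particle cloud returned by such a filter is a discrete approximation of the posterior density that the Bayes decision rule requires; the decision can then be computed by averaging the cost over the particles.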