Belief space planning (BSP) and perception are fundamental problems in robotics and artificial intelligence, with applications including autonomous navigation and active SLAM. State-of-the-art BSP approaches assume that data association (DA), i.e. determining the correct correspondence between observations and landmarks, is given and perfect. However, real-world environments are often ambiguous, which, in the presence of different sources of uncertainty, makes perception a challenging task. For example, an object might appear similar to another object from the current viewpoint, while successfully matching images of two different yet similar-looking places (e.g. buildings that look alike) would incorrectly indicate that the two places are one and the same. An incorrect DA can lead to catastrophic results, e.g. a robot believing it is located in the wrong, aliased corridor. Consequently, more advanced approaches, known as robust perception, are required. Yet, existing robust perception approaches focus on the passive case, where robot actions are externally determined, while existing BSP methods assume data association to be given and perfect.
In this research we relax the above assumption and incorporate reasoning about DA within BSP, while accounting for different sources of uncertainty (imperfect sensing, stochastic control, uncertain environment). We develop a data association aware belief space planning (DA-BSP) approach that explicitly reasons about DA within belief evolution, while considering non-myopic planning and multi-modal beliefs represented by Gaussian mixture models (GMMs). We envision such a framework providing robust active perception and active disambiguation capabilities, in particular when operating in ambiguous and perceptually aliased environments. The approach is studied and shown to be effective in synthetic simulations and real-world experiments carried out at the Autonomous Navigation and Perception Lab at the Technion.
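To make the multi-modal belief representation concrete, the following is a minimal sketch of a DA-aware belief; the notation (state $x_k$, observation and control histories $Z_{0:k}$ and $u_{0:k-1}$, hypothesis weights $w_i$, and number of hypotheses $M$) is illustrative and not necessarily the exact formulation used in our publications. Each mixture component conditions on one possible data association hypothesis $\beta_i$:
\begin{equation*}
  b[x_k] \doteq p\left(x_k \mid Z_{0:k}, u_{0:k-1}\right)
  = \sum_{i=1}^{M} \underbrace{p\left(\beta_i \mid Z_{0:k}, u_{0:k-1}\right)}_{w_i}\;
    \underbrace{p\left(x_k \mid \beta_i, Z_{0:k}, u_{0:k-1}\right)}_{\approx\,\mathcal{N}\left(\mu_i, \Sigma_i\right)},
  \qquad \sum_{i=1}^{M} w_i = 1 .
\end{equation*}
Under this sketch, planning can favor actions expected to reduce the weights of incorrect hypotheses, ideally collapsing the mixture towards a single mode, which is one way to view the active disambiguation capability mentioned above.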