We investigate the problem of autonomous navigation in unknown or uncertain environments, which is of interest in numerous robotics applications, such as navigation in GPS-deprived environments, mapping and 3D reconstruction, and target tracking. In the absence of absolute information sources (e.g. GPS), the robot must infer its own state and build a model of the environment from sensor observations, a problem known as simultaneous localization and mapping (SLAM). Moreover, it must plan actions to accomplish given goals while relying on the information provided by this inference (estimation) process. The inferred state, e.g. robot poses and 3D landmarks, cannot be assumed perfectly known because the observations and dynamics are stochastic; hence, planning future actions has to account for the different sources of uncertainty. The corresponding problem is known as belief space planning (BSP).
An essential ingredient in SLAM and BSP problems is correct association of landmarks observed by the robot's sensors (e.g. a camera), as incorrect association can lead to erroneous estimation and catastrophic results. In particular, re-identifying a previously observed object can be challenging, especially for images taken from the air or from the ground at shallow viewpoints: an object may look completely different when observed from different angles. Yet, state-of-the-art BSP approaches typically assume a perfect ability to re-identify an object. In this work we develop a viewpoint-aware BSP approach by modeling re-identification aspects within the planning phase. We study our approach in simulation, considering the problem of autonomously reaching a goal with maximal estimation accuracy in a GPS-deprived unknown environment.
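To make the viewpoint-aware idea concrete, the following toy sketch (not the paper's actual formulation) scores a candidate pose by weighting the expected information gain of each landmark observation by a viewpoint-dependent re-identification probability. The exponential decay model `reid_probability`, the unit `base_gain`, and all function names are illustrative assumptions, not part of the original work.

```python
import math

def reid_probability(view_angle_change_rad, decay=2.0):
    # Hypothetical model: probability of re-identifying a landmark
    # decays exponentially with the change in viewing angle.
    return math.exp(-decay * view_angle_change_rad)

def viewing_angle(pose, landmark):
    # Bearing from a 2D pose (x, y) to a 2D landmark (x, y).
    return math.atan2(landmark[1] - pose[1], landmark[0] - pose[0])

def expected_info_gain(current_pose, candidate_pose, landmarks, base_gain=1.0):
    # Expected gain from re-observing landmarks at the candidate pose,
    # discounted by the chance each landmark is actually re-identified
    # from the new viewpoint.
    gain = 0.0
    for lm in landmarks:
        d = abs(viewing_angle(candidate_pose, lm) - viewing_angle(current_pose, lm))
        d = min(d, 2 * math.pi - d)  # wrap angle difference to [0, pi]
        gain += reid_probability(d) * base_gain
    return gain
```

Under this toy model, a candidate pose that preserves the viewpoint toward previously mapped landmarks scores higher than one that forces a drastic viewpoint change, which is the behavior a viewpoint-aware planner should prefer when seeking loop closures.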