Classification and object recognition are fundamental problems in many robotics and aerospace applications, such as autonomous driving, vision-based navigation, and search and rescue.
The field has advanced considerably in recent years with the introduction of deep-learning-based approaches, yet reliable classification remains a significant challenge. Classification results may be
affected by varying viewpoints, changing lighting conditions, occlusions, and localization uncertainty, and are limited by the classifier's training set. In this work, we propose several sequential classification approaches that address some of these sources of uncertainty within a semantic simultaneous localization and mapping (SLAM) framework.
First, we propose a viewpoint-dependent classifier model, which exploits the coupling between object class and pose to address classification under perceptual aliasing. We do so by maintaining a hybrid belief over continuous and discrete random variables. Since a single robot may be insufficient for classifying all objects in the environment, we then propose a formulation that uses the viewpoint-dependent model in a distributed multi-robot setting, while keeping the estimation consistent for both continuous and discrete random variables. Furthermore,
the classifier's training set is limited, and during deployment the robot may encounter scenarios it was not trained on, inducing epistemic uncertainty. We propose a sequential classification approach that accounts for posterior epistemic uncertainty over a sequence of images. Finally, we incorporate posterior epistemic uncertainty within a belief space planning (BSP) framework, considering in particular autonomous classification and active semantic SLAM. We evaluate our approaches both in simulation and on real data.
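The sequential classification idea can be illustrated with a minimal toy sketch: classifier outputs from a sequence of images are fused into a class posterior via recursive Bayesian updates, while the spread of stochastic forward passes (e.g. MC-dropout-style samples) serves as a simple proxy for epistemic uncertainty. All function names, numbers, and the uncertainty proxy below are illustrative assumptions, not the actual models developed in this work:

```python
import numpy as np

def fuse_sequence(prior, sample_likelihoods):
    """Fuse a sequence of classifier outputs into a class posterior,
    with a simple epistemic-uncertainty proxy.

    prior              -- P(c) over K candidate classes, shape (K,)
    sample_likelihoods -- list of (S, K) arrays: S stochastic forward
                          passes per image, giving samples of the
                          measurement likelihood p(z_t | c).
    """
    log_post = np.log(prior)
    epistemic = []
    for samples in sample_likelihoods:
        mean_lik = samples.mean(axis=0)               # expected likelihood
        epistemic.append(samples.var(axis=0).mean())  # sample spread proxy
        log_post += np.log(mean_lik)                  # Bayesian update
    post = np.exp(log_post - log_post.max())          # normalize stably
    return post / post.sum(), float(np.mean(epistemic))

# Toy sequence: two images, each with two (identical) stochastic samples
# favoring class 0; zero sample spread here means no epistemic uncertainty.
prior = np.array([0.5, 0.5])
seq = [np.array([[0.7, 0.3], [0.7, 0.3]])] * 2
post, unc = fuse_sequence(prior, seq)
```

In this toy run the posterior concentrates on class 0 after two consistent measurements, while disagreement among the stochastic samples would raise the uncertainty proxy; the approaches in this work maintain such quantities jointly with the continuous SLAM variables rather than in isolation.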