Autonomous, Scalable and Robust Fusion for Collaborative Robotic Bayesian Inference
The idea of a robotic team cooperating on a joint task can be likened to a group of people working together. People often have different capabilities, different knowledge, and different worldviews, yet when collaborating they naturally know how to share only the information relevant to the joint goal. For a team of robots, this human capability is far from trivial: a robot's ability to make sense of, and act in, a constantly changing environment is far less effective than the human brain's.
My goal is to enable teams of robots to collaborate in a robust, autonomous, and scalable manner on a variety of complementary tasks. Toward this goal I take a probabilistic approach to robotics, where a robot models the uncertainty in how it perceives the world using a probability density function (pdf). Bayesian decentralized data fusion (DDF) leverages this approach to allow any two robots in a network to gain new information by sharing their posterior pdfs, which represent their estimates. However, DDF methods do not scale well as the number of robots in the network increases, since they frequently require every robot to process and communicate the full global pdf. In this talk I will show how the global problem can be "broken" into smaller, locally relevant problems, significantly reducing the communication and computation requirements of each robot. I will present new scalable algorithms and demonstrate their applicability to collaborative inference problems through simulations and hardware experiments on robotic platforms.
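To make the pdf-sharing step concrete, the following is a minimal sketch of the classical exact DDF fusion rule, p_fused ∝ p_i · p_j / p_common, specialized to Gaussian pdfs in information form (as in a channel filter). The two-robot setup, variable names, and numbers are illustrative assumptions, not the specific algorithms presented in the talk.

```python
import numpy as np

def ddf_fuse_gaussian(mu_i, cov_i, mu_j, cov_j, mu_c, cov_c):
    """Fuse two Gaussian posteriors while removing shared information.

    For Gaussians, p_i * p_j / p_common reduces to adding the two
    robots' information matrices/vectors and subtracting the common
    (previously exchanged) information so it is not double-counted.
    """
    # Information form: Y = cov^-1, y = Y @ mu.
    Y_i, Y_j, Y_c = map(np.linalg.inv, (cov_i, cov_j, cov_c))
    y_i, y_j, y_c = Y_i @ mu_i, Y_j @ mu_j, Y_c @ mu_c

    # Add local information, subtract the common information.
    Y_f = Y_i + Y_j - Y_c
    y_f = y_i + y_j - y_c

    cov_f = np.linalg.inv(Y_f)
    return cov_f @ y_f, cov_f

# Illustrative example: two robots estimating a 2D target position,
# starting from a shared (common) prior.
mu_i, cov_i = np.array([1.0, 2.0]), np.diag([0.5, 0.5])
mu_j, cov_j = np.array([1.2, 1.8]), np.diag([0.4, 0.6])
mu_c, cov_c = np.array([1.1, 1.9]), np.diag([1.0, 1.0])
mu_f, cov_f = ddf_fuse_gaussian(mu_i, cov_i, mu_j, cov_j, mu_c, cov_c)
```

Note that each robot here communicates only a mean and covariance over the full state; the scalability issue described above arises because, as the team grows, that shared state (and hence each message and each local update) must cover the entire global problem.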