Algorithm for semantic labeling of objects during localization and mapping in robotic vision

Probabilistic data association technique for labeling, mapping, and sensing objects during robotic traversal. 

Technology Overview:

Simultaneous Localization and Mapping (SLAM) is a well-studied problem in robotics in which a robot or autonomous vehicle uses its sensors to localize itself relative to its environment while updating a map of what it has seen as it travels. These maps are typically point clouds that give obstacle positions without providing any semantic context.

Penn professors Konstantinos Daniilidis and George Pappas have created a new algorithm that is more robust at recognizing new objects (such as a door or chair in an office setting) and classifies them correctly regardless of object orientation or sensor pose. It is the first method to tightly couple inertial, geometric, and semantic observations in a single optimization framework. The method has been tested successfully in indoor and outdoor scenarios, demonstrating its applicability to autonomous vehicles and to home and warehouse robots.
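The idea of coupling the three observation types can be illustrated with a minimal sketch: a single cost function whose terms come from inertial (odometry), geometric (range), and semantic sources, minimized jointly. This is not the authors' implementation; all variable names, the 1-D setup, and the numbers are illustrative assumptions, with the semantic information reduced to a data-association weight on the geometric term.

```python
# Minimal sketch (NOT the patented algorithm): jointly optimize two robot
# poses and one landmark under inertial, geometric, and semantic terms.
# All names and values are illustrative assumptions.

def joint_cost(x0, x1, l, u, z, w):
    """Weighted sum of squared residuals.
    x0, x1 : robot poses (1-D for illustration)
    l      : landmark position
    u      : odometry (inertial) measurement of x1 - x0
    z      : geometric measurement of l - x1
    w      : semantic data-association weight in [0, 1], e.g. the
             probability that a detection matches this landmark's class
    """
    prior = x0 ** 2                # anchor the first pose at the origin
    odom = (x1 - x0 - u) ** 2      # inertial (odometry) term
    geom = w * (l - x1 - z) ** 2   # geometric term, weighted semantically
    return prior + odom + geom

def optimize(u=1.0, z=0.5, w=0.9, steps=2000, lr=0.1, eps=1e-6):
    """Gradient descent with finite-difference gradients, for brevity."""
    x0 = x1 = l = 0.0
    for _ in range(steps):
        g0 = (joint_cost(x0 + eps, x1, l, u, z, w)
              - joint_cost(x0 - eps, x1, l, u, z, w)) / (2 * eps)
        g1 = (joint_cost(x0, x1 + eps, l, u, z, w)
              - joint_cost(x0, x1 - eps, l, u, z, w)) / (2 * eps)
        gl = (joint_cost(x0, x1, l + eps, u, z, w)
              - joint_cost(x0, x1, l - eps, u, z, w)) / (2 * eps)
        x0, x1, l = x0 - lr * g0, x1 - lr * g1, l - lr * gl
    return x0, x1, l
```

With consistent measurements (odometry 1.0, range 0.5), the joint minimum places the second pose at 1.0 and the landmark at 1.5; in the real system the same coupling is solved over full 3-D poses and many landmarks, so a semantically confident detection can correct the pose estimate and vice versa.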

Advantages:

  • Low computational cost
  • No expensive hardware required
  • Real-time SLAM and object recognition

Stage of Development:

Prototype and software developed for integration with robotic platforms

Desired Partnerships:

  • License
  • Co-development

Contact

Terry Bray

Executive Director, SEAS/SAS Licensing Group
University of Pennsylvania

Docket # 17-8324