Available Technologies

Browse Penn-owned technologies available for licensing.


Motion and Event Detection in Images Using Self-Supervised Neural Networks

Brief Description:

A method for learning optical flow and motion from events detected in images using self-supervised convolutional neural networks.

 

Value Proposition:

A method to detect motion, or an indication of an event, from images obtained from one or more cameras using self-supervised neural networks, in contrast to conventional supervised optical flow algorithms. This technology, developed by researchers at the University of Pennsylvania, can predict events from images with potentially higher temporal resolution than currently available methods.

 

------------------------------------------------------------------------------------------------------

Technology:

Conventional cameras provide images in which the intensity of a pixel represents the amount of light detected by the sensor. Event-based vision is an emerging technology in which computer algorithms process imagery obtained from a camera and produce time-stamped images where pixel intensity corresponds to dynamic changes in the scene and how recent each change was. Such images can be used as input to a convolutional neural network to detect and predict events. Existing methods require labeled training datasets to train convolutional neural networks that can subsequently predict motion. This process is computationally intensive and thus requires the method to discard image frames in real-time applications, reducing the temporal resolution of event detection.
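To make the timestamp-image representation concrete, here is a minimal sketch (an illustration, not the patented algorithm): each event is an (x, y, t) tuple, and each pixel of the output stores the time of its most recent event, normalized so that brighter means more recent.

```python
import numpy as np

def timestamp_image(events, height, width):
    """Build a timestamp image from a stream of (x, y, t) events.

    Each pixel keeps the timestamp of the most recent event at that
    location, normalized to [0, 1] so brighter = more recent.
    """
    img = np.zeros((height, width), dtype=np.float64)
    for x, y, t in events:
        img[y, x] = max(img[y, x], t)  # keep the most recent event time
    t_max = img.max()
    return img / t_max if t_max > 0 else img

# Example: two events land on pixel (0, 0); the later one (t=0.4) wins.
events = [(0, 0, 0.1), (1, 1, 0.5), (0, 0, 0.4), (2, 2, 1.0)]
ts = timestamp_image(events, 3, 3)
```

An image like this can be fed directly to a convolutional network, since recency is encoded as intensity rather than requiring the network to process the raw asynchronous event stream.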

 

Researchers at the University of Pennsylvania have developed an algorithm that uses a self-supervised convolutional neural network to detect motion and predict an indication of an event from images obtained using one or multiple cameras. The method uses a multilayer convolutional neural network to detect motion and pixel flow at different spatial resolutions. Images obtained from multiple cameras can be used by the algorithm to more accurately associate discretized timestamps with each change, ultimately improving the temporal resolution of event detection and prediction.
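The core idea behind self-supervised flow learning is that no manual labels are needed: a predicted flow field is scored by warping one image toward another and measuring the photometric error, which the network then minimizes. The sketch below (a simplified illustration under assumed conventions, not the published method) shows such a warping loss with nearest-neighbor sampling.

```python
import numpy as np

def warp_nearest(img, flow):
    """Warp img back along flow using nearest-neighbor sampling.

    flow[y, x] = (dx, dy) gives the displacement of pixel (x, y)
    between the two frames; coordinates are clamped at the border.
    """
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            dx, dy = flow[y, x]
            xs = int(round(min(max(x + dx, 0), w - 1)))
            ys = int(round(min(max(y + dy, 0), h - 1)))
            out[y, x] = img[ys, xs]
    return out

def photometric_loss(img1, img2, flow):
    """Mean squared error between img1 and img2 warped back by flow.

    This is the self-supervision signal: a correct flow field makes the
    warped second frame match the first, so no ground-truth labels are
    needed to train the network that predicts the flow.
    """
    return float(np.mean((warp_nearest(img2, flow) - img1) ** 2))

# Example: img2 is img1 shifted right by one pixel, so the true flow
# is (dx, dy) = (1, 0) everywhere; that flow yields a lower loss than
# a zero-flow guess (boundary clamping leaves a small residual).
img1 = np.tile(np.arange(8.0), (8, 1))
img2 = np.roll(img1, 1, axis=1)
flow_true = np.zeros((8, 8, 2)); flow_true[..., 0] = 1.0
flow_zero = np.zeros((8, 8, 2))
```

In practice this loss is evaluated at several spatial resolutions of a multilayer network, as the description above notes, so that both large and small displacements are captured.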

 


Example of a timestamp image. Left: Grayscale image. Right: Timestamp image, where each pixel represents the timestamp of the most recent event. Brighter is more recent. 

 

Applications:

•       Navigation for self-driving automobiles and drones

•       Event-based cameras

•       Self-supervised learning to detect motion and events using event-based cameras

•       Motion and event prediction from images

 

Advantages:

•       Enables use of self-supervised learning for motion detection in images

•       Self-supervised learning requires minimal or no manual labeling of a training dataset

•       High dynamic range in event-based vision enables more sensitive detection of motion and events

------------------------------------------------------------------------------------------------------

 

Stage of Development:

•       Proof-of-concept demonstrated in a laboratory setting

 

Intellectual Property:

•       Provisional patent application (62/807,560)

 

 

Desired Partnerships:

License

Co-development

 

Docket Number: 19-8754


Patent Information:
For Information, Contact:
Qishui Chen
Licensing Officer, SEAS/SAS Licensing Group
University of Pennsylvania
215-898-9591
qchen1@upenn.edu
Inventors:
Konstantinos Daniilidis
Alex Zhu