Joint Exploitation of Personal and Premises Surveillance Video
Project Fact Sheet

- One goal of this project is to create a new “view” of a scene that was not explicitly captured by any camera. We will do this by capturing video of a common field of interest from moving personal video devices (body-cams) and fixed surveillance cameras, and producing one or more synthetic scenes (output videos), each of which is “more than the sum of its parts”. The output videos are renderings of the video data captured by all of the cameras, as seen from a specified synthetic viewpoint, allowing investigators to examine the scene from a chosen point of view (a minimal rendering sketch appears after this list). Missing data will be indicated, as will areas of low detail arising from low input camera resolution.
- We will also develop methods, using tools from computer vision and machine learning (in particular deep learning), to annotate the synthesized videos with persons, vehicles, events, and other possible objects of interest (see the detection sketch after this list). These methods can also be used to quickly summarize the events in a video.
- An interactive dashboard will be created for both end users and developers. The visual analytic interface will allow interactive viewpoint selection and camera-source visualization showing which cameras contributed to which portions of the reconstructed video (see the provenance sketch after this list). Highlighting of and searching for vehicles, events, and objects will be provided, as well as a visualization of the video summary. The dashboard will also allow selective visualization and highlighting of missing and low-detail video. A separate developer/analyst dashboard will visualize the performance of key portions of the video analytic workflow and its algorithms.
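The fact sheet does not specify the rendering algorithm, so the following is only a minimal sketch of the viewpoint-synthesis idea under a simple assumption: 3D points fused from the input cameras are projected through a virtual pinhole camera (intrinsics `K` and pose `R`, `t` assumed known from calibration), and output pixels that receive no points are flagged as missing data, as the fact sheet requires.

```python
# Minimal sketch (not the project's actual pipeline): render a synthetic
# viewpoint by projecting fused, colored 3D points through a hypothetical
# virtual pinhole camera, flagging uncovered pixels as missing data.
import numpy as np

def render_virtual_view(points, colors, K, R, t, height, width):
    """Project colored 3D points into a virtual camera; return image + missing mask."""
    cam = (R @ points.T + t.reshape(3, 1)).T        # world -> camera frame
    in_front = cam[:, 2] > 1e-6                     # keep points ahead of the camera
    cam, colors = cam[in_front], colors[in_front]
    pix = (K @ cam.T).T
    pix = pix[:, :2] / pix[:, 2:3]                  # perspective divide
    u, v = pix[:, 0].astype(int), pix[:, 1].astype(int)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    image = np.zeros((height, width, 3), np.uint8)
    covered = np.zeros((height, width), bool)
    image[v[ok], u[ok]] = colors[ok]
    covered[v[ok], u[ok]] = True
    return image, ~covered                          # True where no camera contributed

# Toy usage: 1000 random points seen by an identity-pose virtual camera.
pts = np.random.uniform([-1, -1, 2], [1, 1, 4], (1000, 3))
cols = np.random.randint(0, 255, (1000, 3), np.uint8)
K = np.array([[300.0, 0, 160], [0, 300.0, 120], [0, 0, 1]])
img, missing = render_virtual_view(pts, cols, K, np.eye(3), np.zeros(3), 240, 320)
print(f"{missing.mean():.0%} of pixels have no contributing camera data")
```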
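For the annotation bullet, here is a hedged sketch of one way the person/vehicle detection step could look, using an off-the-shelf pretrained detector (torchvision's Faster R-CNN) as a stand-in for whatever models the project ultimately develops; the class list and score threshold are illustrative assumptions.

```python
# Hedged sketch of per-frame annotation with a pretrained detector,
# standing in for the project's own (unspecified) deep-learning models.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# COCO classes matching the fact sheet's "persons and vehicles" (assumed subset).
INTEREST = {1: "person", 3: "car", 6: "bus", 8: "truck"}

def annotate_frame(frame_chw_float):
    """Return (label, score, box) tuples for persons/vehicles in one frame."""
    with torch.no_grad():
        out = model([frame_chw_float])[0]           # dict of boxes, labels, scores
    hits = []
    for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
        if score > 0.5 and int(label) in INTEREST:  # 0.5 threshold is an assumption
            hits.append((INTEREST[int(label)], float(score), box.tolist()))
    return hits

# Toy usage on a random frame; real input is a normalized video frame in [0, 1].
print(annotate_frame(torch.rand(3, 240, 320)))
```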
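For the dashboard's camera-source view, a small sketch under the assumption that the renderer records, per output pixel, the index of the contributing camera (a hypothetical `source_map`); matplotlib stands in here for the real interactive UI.

```python
# Sketch of the camera-provenance overlay: each contributing camera gets
# a distinct color, and pixels with no contributing camera are highlighted.
import numpy as np
import matplotlib.pyplot as plt

def provenance_overlay(source_map, n_cameras, missing_color=(255, 0, 0)):
    """source_map[y, x] = camera index, or -1 where no camera contributed."""
    palette = (plt.cm.tab10(np.arange(n_cameras))[:, :3] * 255).astype(np.uint8)
    overlay = np.zeros(source_map.shape + (3,), np.uint8)
    for cam in range(n_cameras):
        overlay[source_map == cam] = palette[cam]
    overlay[source_map < 0] = missing_color         # highlight missing data
    return overlay

# Toy usage: three cameras covering vertical strips, with an uncovered gap.
src = np.full((240, 320), -1)
src[:, :100], src[:, 100:200], src[:, 220:] = 0, 1, 2
plt.imshow(provenance_overlay(src, 3))
plt.title("per-pixel camera source")
plt.show()
```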