Title: RAPTOR technical report
Year of Publication: 2014
Authors: Kelly, S., Byers, J., Aha, D.W.
Series Title: NCARAI Technical Note
Institution: Naval Research Laboratory, Navy Center for Applied Research in Artificial Intelligence
In this technical report we present RAPTOR (Rapid Three-Dimensional Orientation Resolver), a novel pipeline for inferring the 3D position and orientation (pose) of known classes of rigid objects, solely from 2D image data, for classes where man-made 3D models are available. Many existing systems and techniques attempt to infer 3D meshes and scene information from 2D image or video data. Without domain-specific knowledge, however, it is difficult to generate a proportionally accurate 3D mesh from a single still image with no depth information. Instead, RAPTOR takes advantage of the availability of existing, highly accurate man-made 3D models for common rigid objects, such as vehicles, weapons, and street signs, to tackle the problem of 3D scene understanding in a class-specific fashion. In other words, RAPTOR leaves the problem of generating 3D meshes for terrain and for uncommon or articulated objects to other systems, and instead focuses on finding the position and orientation of known, already-detected objects with a high degree of speed and accuracy.
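The report does not specify RAPTOR's internal algorithm, but the general idea it describes, recovering the pose of a known rigid object from a 2D image given its 3D model, is commonly framed as minimizing reprojection error: find the rotation and translation under which the model's projected keypoints best match the observed 2D keypoints. The sketch below illustrates that framing under simplifying assumptions (hypothetical model keypoints, assumed pinhole intrinsics, and a one-dimensional search over yaw only); it is not the authors' method.

```python
import numpy as np

# Hypothetical 3D keypoints of a known rigid model (e.g., a vehicle), in meters.
MODEL_POINTS = np.array([
    [0.0, 0.0, 0.0],
    [2.0, 0.0, 0.0],
    [2.0, 1.0, 0.0],
    [0.0, 1.0, 1.5],
])

# Assumed pinhole camera intrinsics (focal lengths and principal point, in pixels).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def rot_y(theta):
    """Rotation matrix for a yaw of `theta` radians about the y-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def project(points, R, t):
    """Pinhole projection of 3D model points under the pose (R, t)."""
    cam = points @ R.T + t           # model frame -> camera frame
    uv = cam @ K.T                   # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]    # perspective divide

def reprojection_error(observed_2d, R, t):
    """Total 2D distance between projected model points and observed keypoints."""
    return np.linalg.norm(project(MODEL_POINTS, R, t) - observed_2d)

# Simulate an observation: the object 10 m away, yawed 30 degrees.
true_R, true_t = rot_y(np.deg2rad(30.0)), np.array([0.0, 0.0, 10.0])
observed = project(MODEL_POINTS, true_R, true_t)

# Pose search over yaw alone (position held fixed for brevity): pick the
# candidate orientation whose projection best matches the observation.
angles = np.deg2rad(np.arange(0.0, 90.0, 1.0))
errors = [reprojection_error(observed, rot_y(a), true_t) for a in angles]
best_yaw_deg = float(np.rad2deg(angles[int(np.argmin(errors))]))
print(best_yaw_deg)  # recovers the 30-degree yaw
```

In practice the full 6-DOF pose would be recovered with a correspondence-based solver (e.g., a perspective-n-point method) rather than a grid search, but the objective, agreement between the rendered model and the image, is the same.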
NRL Publication Release Number: