The researchers’ imaging system uses light reflections to detect – in real time – objects or people in a hidden scene and to measure their speed and trajectory. The system, which could be used with smartphone cameras, holds promise for applications such as self-driving cars and search and rescue, say the researchers.
The system works by analyzing video of the fuzzy shadow, called a "penumbra," that light reflected from objects around the corner casts on the ground within the camera's line of sight. From that video, the system generates a series of one-dimensional images that, when stitched together, reveal information about the object(s) around the corner.
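To make the idea concrete, here is a minimal sketch, not the authors' implementation, of how one might collapse the penumbra into a one-dimensional angular signal with NumPy. It assumes a grayscale video frame, a background frame of the empty scene, and a known corner location; the function name, parameters, and wedge-averaging details are illustrative assumptions.

```python
import numpy as np

def penumbra_to_1d(frame, background, corner_xy, radius=60, n_angles=90):
    """Collapse the penumbra around a corner into a 1-D angular signal.

    frame, background: grayscale images (2-D arrays); `background` is a frame
    of the empty scene, used to isolate subtle intensity changes.
    corner_xy: (x, y) pixel location of the wall's corner edge
    (hypothetical calibration input).
    Returns one averaged intensity value per angular wedge around the corner.
    """
    diff = frame.astype(float) - background.astype(float)
    h, w = diff.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = xs - corner_xy[0], ys - corner_xy[1]
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx)           # angle of each pixel about the corner
    mask = (r > 5) & (r < radius)        # ring of ground pixels near the corner
    bins = np.linspace(theta[mask].min(), theta[mask].max(), n_angles + 1)
    idx = np.digitize(theta[mask], bins) - 1
    signal = np.zeros(n_angles)
    for k in range(n_angles):
        sel = idx == k
        if sel.any():
            signal[k] = diff[mask][sel].mean()   # average light in each wedge
    return signal

# Stacking penumbra_to_1d(frame_t, background, corner_xy) over video frames t
# yields an "angle vs. time" image that traces a hidden object's motion.
```

Averaging over whole wedges of ground pixels is what makes the tiny intensity changes detectable: each one-dimensional sample pools many pixels, so sensor noise largely cancels out.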
“Even though those objects aren’t actually visible to the camera, we can look at how their movements affect the penumbra to determine where they are and where they’re going,” says Katherine Bouman, the lead author of a paper about the system. “In this way, we show that walls and other obstructions with edges can be exploited as naturally-occurring ‘cameras’ that reveal the hidden scenes beyond them.”
Most traditional approaches for seeing around corners rely on lasers: time-of-flight cameras fire light at points visible from both the observable and hidden scenes, then measure how long it takes the reflected light to return. However, the researchers say, these approaches are expensive to implement and easily thrown off by ambient light.
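For context, time-of-flight systems convert that round-trip travel time into distance. The toy calculation below is not from the paper; it simply illustrates the scale of the timing involved.

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_seconds):
    """One-way distance implied by a round-trip time of flight (illustrative only)."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A 10-nanosecond round trip corresponds to roughly 1.5 meters to the target,
# which is why these systems need fast, and costly, sensing hardware.
print(tof_distance(10e-9))  # ~1.5
```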
The MIT researchers’ “CornerCameras” approach, on the other hand, doesn’t require actively projecting light into the space and can be used with off-the-shelf consumer cameras. Further, it works in a wider range of environments; the researchers found that it even works in the rain.
“Given that the rain was literally changing the color of the ground, I figured that there was no way we’d be able to see subtle differences in light on the order of a tenth of a percent,” says Bouman. “But because the system integrates so much