In current photo-editing software we can only manipulate elements and objects of a photo on a 2D plane. We can outline areas and apply a wide range of manipulations; however, this only affects our direct perspective.
Take this book for instance. What’s happening to the pixels that would make up the spine?
In the physical realm it surely exists.
In our photo it should still exist in theory, but un-captured pixels can’t appear out of thin air.
This has been the primary obstacle to manipulating elements on a 3D plane in photo-editing software. A Carnegie Mellon project, which received funding from Google’s research grant program, has come up with an approach that solves the problem of the “un-captured pixel” by using manipulable 3D models.
A database of stock 3D models supplies the shape of the “un-captured pixels.”
For color, they use the symmetry of the object to copy what is on the facing side. For the parts that can’t be colored this way, like the underside of the taxi, they fall back on the texture that comes with the model.
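The symmetry trick boils down to mirroring the observed texture across the object’s symmetry plane and copying colors into texels the camera never saw. A minimal sketch of that idea (assuming a simple left-right symmetric texture layout; the function and array layout here are illustrative, not the project’s actual code):

```python
import numpy as np

def complete_by_symmetry(texture, mask):
    """Fill un-captured texels by mirroring across a vertical
    symmetry plane (modeled here as a left-right flip).

    texture: (H, W, 3) array of observed colors
    mask:    (H, W) bool array, True where a texel was captured
    """
    mirrored = texture[:, ::-1]        # reflect colors across the symmetry axis
    mirrored_mask = mask[:, ::-1]      # which mirrored texels were actually seen
    # Copy a color only where the original is missing but its
    # symmetric twin was captured.
    fill = ~mask & mirrored_mask
    completed = texture.copy()
    completed[fill] = mirrored[fill]
    return completed, mask | mirrored_mask
```

Texels that remain uncovered after this pass (no captured twin either) are exactly the cases where the approach falls back on the stock model’s own texture.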
As users we have to provide models for everything we want to manipulate. We cannot simply isolate any element of a photo and start spinning it around to see un-captured pixels come to light.
This is a big step for intuitive 3D manipulation.
Some of the current obstacles involve pixel detail and camera vantage points. A photo of a wine bottle taken top-down, for example, leaves the approach little visible symmetry to exploit when reconstructing the hidden surfaces.
You can read more about the complete Carnegie Mellon project here:
Along a similar vein of manipulating the 3D plane in photos, check out SIGGRAPH: