Make3D – Virtualizing the Real World

March 7th, 2008 by Adrian

Check out this research project from the Stanford University CS department.  It generates 3D models from a single 2D image, using visual cues like color, size, and texture differences to infer depth.  Earlier this year, I was absolutely blown away by Photosynth’s ability to stitch together multiple photos into a single virtual panorama, but I couldn’t really tell whether there was a way to extract a detailed mesh from that image space, or whether each photo essentially defines a single rectangular surface of its own.
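To get a feel for why a per-pixel depth estimate is enough to build a walkable 3D scene, here's a minimal sketch (not Make3D's actual code, and the depth values and focal length are made-up assumptions): once you have a depth for each pixel, a standard pinhole-camera back-projection turns the flat image into a point cloud you could texture and fly around.

```python
# Hypothetical sketch: back-projecting a per-pixel depth map into a 3D
# point cloud with a pinhole camera model.  Make3D's real pipeline is far
# more involved (it fits oriented planar patches with an MRF); this just
# shows the geometry that makes parallax possible.
import numpy as np

def depth_to_points(depth, focal_px):
    """Turn an HxW depth map (meters) into an (H*W, 3) point cloud."""
    h, w = depth.shape
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0        # principal point at image center
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / focal_px              # back-project horizontal coordinate
    y = (v - cy) * depth / focal_px              # back-project vertical coordinate
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Toy 2x2 "depth map": a far wall up top, near ground at the bottom.
depth = np.array([[5.0, 5.0],
                  [1.0, 1.0]])
points = depth_to_points(depth, focal_px=100.0)
print(points.shape)  # (4, 3)
```

Connecting neighboring points into triangles and mapping the photo back on as a texture is what gives you the "large room with irregular walls" effect the project site produces.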

Make3D is optimized for landscape scenes, and the Nov 2006 version of the code running on the project site tends to generate models resembling large rooms with irregular walls.  Large swaths of the photo get textured onto the walls and floor, a bit like an angular skybox.  But there are exceptions, and it’s pretty damn exciting when a detail like a support column or tree branch is accurately extracted and you experience the parallax in a walk-through.

Modders and game developers everywhere have got to be salivating over this stuff.  Someday you’ll be able to spend an afternoon walking around with a camera and come away with a detailed BSP map of a real-world space.  Not today, but someday.  And more and more it feels like someday real soon.

Posted in Games, Technobabble

