Tuesday, October 6, 2009

photosketch

Composition results, Chen et al., SIGGRAPH Asia 2009

Today's link: PhotoSketch: Internet Image Montage, published at SIGGRAPH Asia 2009, appeared on a number of mainstream blogs today. Looks like a cool interface! There is no demo available, so it is hard to say how well it works. The basic idea is summarized in this video:

PhotoSketch: Internet Image Montage from tao chen on Vimeo.


The user specifies a background type, then sketches and names a few foreground objects. The background image is chosen using Flickr, Google, or Yahoo tags, consistency clustering, and an "uncluttered region" heuristic. The foreground images and their segments are chosen using the same tag searches, a saliency heuristic, GrabCut, and, importantly, the sketched outline. They seem to have put a lot of effort into the stitching and blending step. In any case, implementation aside, it is a pretty impressive end-to-end system, even with a limited set of object types. They even tested it with novice users; maybe that's standard for SIGGRAPH, but it certainly isn't for vision conferences. I'm guessing the system is still far from being deployable to the masses at this point, so "nutbastard" will have to wait to get his Death Star-zombie-Monty Python-nerd epic rendered. But I look forward to it when it is ready for the mainstream!
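
To make the pipeline concrete, here is a minimal sketch of the control flow as I understand it from the video and abstract. This is not the authors' code: every function name is made up, and the consistency clustering, saliency detection, GrabCut segmentation, shape matching, and blending steps are all replaced by toy placeholder scores. Treat it as pseudocode in Python form.

import random
from dataclasses import dataclass

random.seed(0)  # toy scores only, so keep the demo deterministic

@dataclass
class SketchItem:
    label: str        # user-typed tag, e.g. "sailboat"
    outline: list     # rough user-drawn contour, as (x, y) points
    position: tuple   # where the user placed it on the canvas

def search_by_tag(tag, n=20):
    """Stand-in for keyword search against Flickr/Google/Yahoo images."""
    return [f"{tag}_{i:02d}.jpg" for i in range(n)]

def pick_background(scene_tag):
    """Choose a background from tag search results."""
    candidates = search_by_tag(scene_tag)
    # Real system: cluster candidates for mutual consistency, then prefer
    # images with a large uncluttered region to paste objects into.
    return max(candidates, key=lambda c: random.random())

def pick_foreground(item):
    """Choose a cut-out segment for one sketched object."""
    candidates = search_by_tag(item.label)
    # Real system: saliency to localize the object, GrabCut to segment it,
    # then rank segments by similarity to the user's sketched outline.
    best = max(candidates, key=lambda c: random.random())
    return {"source": best, "segment": f"grabcut({best})"}

def compose(scene_tag, items):
    """End-to-end montage: one background plus one segment per sketch item."""
    montage = {"background": pick_background(scene_tag), "layers": []}
    for item in items:
        fg = pick_foreground(item)
        # Real system: seamless stitching/blending at item.position;
        # here we just record the composition plan.
        montage["layers"].append({**fg, "at": item.position})
    return montage

if __name__ == "__main__":
    sketch = [SketchItem("sailboat", outline=[(0, 0), (40, 0), (20, 30)],
                         position=(320, 200)),
              SketchItem("seagull", outline=[(0, 0), (10, 5), (20, 0)],
                         position=(100, 60))]
    print(compose("beach sunset", sketch))

The interesting engineering is, of course, in everything the placeholders skip, especially the outline matching and the blending, but the overall flow really is that simple: search, filter, cut, paste.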
