This section describes the Collaborative Research Program's human computation projects.
Project 1: Human Computation
Members: Mario Rodriguez, James Davis, Reid Porter
Humans can be used as processors for certain tasks, in the same way that CPUs and GPUs are now used. Rather than thinking of humans as the primary directors of computation, with computers as their subordinate tools, we explicitly advocate treating these computational platforms as equals and characterizing the performance of Human Processing Units (HPUs). We expect to find that on some tasks HPUs outperform CPUs, and that on other tasks the reverse is true, as cartooned in Figure 1. The most efficient systems are likely to be joint systems that use each type of processor on the subroutines for which it is best suited. Importantly, we believe that HPUs are likely to have a long-term impact on the way real-world applications that require "computer vision" are developed.
The spread of networked computing has made HPUs a viable computational platform. Tools like Amazon Mechanical Turk allow small jobs to be micro-outsourced to humans for completion. The intermediation that the network provides lets the requester abstract away the complexities of who is accomplishing the task, and from where. Nearly all current use of micro-outsourcing resembles the traditional way we might use an employed assistant in our office: outsourcing from human to human. This proposal explicitly suggests that we should quantify performance, treat this as a new computational platform, and build real systems that use HPU co-processors for tasks that are too computationally expensive, or insufficiently robust, when computed on CPUs.
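The routing idea above can be sketched in code. This is a minimal illustration, not an implementation from the proposal: the `Task` class, the `dispatch` function, and the stubbed `hpu_worker` (which stands in for posting a job to a service such as Amazon Mechanical Turk and collecting the answer) are all hypothetical names introduced here.

```python
# Hypothetical sketch of a joint CPU/HPU system: each subroutine is
# routed to the processor type best suited to it. "Perceptual" tasks
# go to the HPU; exact, repetitive computation stays on the CPU.

class Task:
    def __init__(self, name, kind, payload):
        self.name = name
        self.kind = kind        # "numeric" -> CPU, "perceptual" -> HPU
        self.payload = payload

def cpu_worker(task):
    # CPUs excel at precise, high-throughput arithmetic.
    return sum(task.payload)

def hpu_worker(task):
    # Stand-in for micro-outsourcing: in a real system this would post
    # the task to a human-work marketplace and block on the response.
    return f"human label for {task.payload!r}"

def dispatch(task):
    worker = hpu_worker if task.kind == "perceptual" else cpu_worker
    return worker(task)

results = [dispatch(t) for t in (
    Task("total", "numeric", [1, 2, 3]),
    Task("caption", "perceptual", "photo_042.jpg"),
)]
```

In a real deployment the dispatch decision itself would rest on the measured per-task performance of each processor type, which is exactly what the proposal argues should be quantified.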
Project 2: Eye Tracking for Personalized Photography
Members: Steve Scher, James Davis, Sriram Swaminarayan
Photography and videography are the premier means of sharing an individual's experience of 'being somewhere' and 'seeing something.' Excellent photos communicate to the viewer what parts of the scene the photographer especially noticed, reflecting not only the scene in front of the camera, but the experience of the photographer behind the camera. Two photographers holding cameras in the same place at the same time have different experiences, and this should be captured in the photos they share.
We use eye-tracking technology to record which parts of a scene the photographer finds most interesting, and automatically apply Photoshop-like manipulations to emphasize those areas and de-emphasize the areas that the photographer's eye ignored. In so doing, we draw the photo viewer's eye to the same parts of the scene to which the photographer's eye was drawn, bringing the viewer's experience closer to that of the photographer. We focus on consumer photography scenarios; related surveillance and cinematic use cases are also under consideration.
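One simple emphasis manipulation of the kind described above can be sketched as follows. This is an illustrative toy, not the project's actual pipeline: the image is a 2D list of grayscale values in [0, 1], `fixations` holds hypothetical (row, col) gaze samples from an eye tracker, and the brighten/dim blend is just one of many possible Photoshop-like manipulations.

```python
import math

def emphasis_weight(r, c, fixations, sigma=2.0):
    # Weight is highest at a fixation point and falls off with a
    # Gaussian profile, so emphasis is spatially smooth.
    return max(math.exp(-((r - fr) ** 2 + (c - fc) ** 2) / (2 * sigma ** 2))
               for fr, fc in fixations)

def apply_gaze_emphasis(image, fixations, lift=0.2, dim=0.4):
    # Blend each pixel between a brightened version (near fixations)
    # and a dimmed version (far from them).
    out = []
    for r, row in enumerate(image):
        new_row = []
        for c, v in enumerate(row):
            w = emphasis_weight(r, c, fixations)
            bright = min(1.0, v + lift)   # emphasized pixel
            dark = v * (1 - dim)          # de-emphasized pixel
            new_row.append(w * bright + (1 - w) * dark)
        out.append(new_row)
    return out

# A flat gray image with a single fixation at its center: pixels near
# the fixation come out brighter than pixels near the corners.
gray = [[0.5] * 5 for _ in range(5)]
result = apply_gaze_emphasis(gray, fixations=[(2, 2)])
```

Real manipulations would likely operate on saliency rather than raw brightness (e.g. local contrast, sharpness, or color saturation), but the structure is the same: a gaze-derived weight map modulates an edit applied across the image.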