Project Puts Real-Time Virtual Emotions On Your Stoic Face

In a demonstration of what is perhaps the freakiest technology ever, Stanford researchers have managed to take the facial expressions from one person's face and transplant them onto another person's face...in real time through video. Imagine that you're channeling your inner stoic, sitting without expression in a chair, but a live video feed of your face shows you smiling, singing, sticking out your tongue, or any number of other things.

Researchers from Stanford University, the Max Planck Institute for Informatics, and the University of Erlangen-Nuremberg have detailed in a report how they transfer expressions from one person's face to another in real time. The process involves tracking the expressions of a source "actor" (the person making the expressions) and a target actor (the person whose face will be transformed).
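The report's pipeline isn't spelled out here, but the general idea behind this kind of expression transfer can be illustrated with a toy blendshape-style model: a face is represented as a neutral shape plus weighted expression offsets, and reenactment amounts to applying the source actor's expression weights to the target's face. The sketch below is purely illustrative — the tiny mesh, the `pose_face` helper, and the made-up weights are hypothetical stand-ins, not the team's actual code or method.

```python
# Toy illustration of expression transfer with a blendshape-style face model.
# NOT the researchers' method: just a minimal sketch of the general idea that
# a face = neutral shape + weighted "expression" offsets, and reenactment
# copies the expression weights from the source actor to the target.

import numpy as np

rng = np.random.default_rng(0)

N_VERTICES = 5      # tiny toy mesh (real face models use thousands of vertices)
N_EXPRESSIONS = 3   # e.g. smile, jaw-open, brow-raise

# Per-person neutral geometry (their "identity")
source_neutral = rng.normal(size=(N_VERTICES, 3))
target_neutral = rng.normal(size=(N_VERTICES, 3))

# Shared expression basis: how each expression deforms a neutral face
expression_basis = rng.normal(scale=0.1, size=(N_EXPRESSIONS, N_VERTICES, 3))

def pose_face(neutral, weights):
    """Deform a neutral face by a weighted sum of expression offsets."""
    return neutral + np.tensordot(weights, expression_basis, axes=1)

# Suppose a tracker estimated these weights from the source actor's live frame
source_expression_weights = np.array([0.8, 0.3, 0.0])   # mostly smiling

# Reenactment: keep the target's identity, borrow the source's expression
reenacted_target = pose_face(target_neutral, source_expression_weights)

print("Target neutral (staring blankly):\n", target_neutral.round(2))
print("Target wearing the source's smile:\n", reenacted_target.round(2))
```

In a real system this swap would run on every video frame, with the tracked parameters driving a photo-realistic re-render of the target's face rather than a raw mesh.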

A consumer-level depth camera (one per person) and a consumer-level PC are all the hardware needed to pull off the facial reenactments. As you can see in the video above, the technology works well: when the source actor looks to the side, so does the target. If the source actor talks, the target actor talks, too, even though the actual target actor is simply staring at the camera. The reenactments (that is, the digital version of the target's face) are said to be both "highly stable" and photo-realistic.

One possible application of this technology is real-time video puppetry, in which a target appears on video while a source off-screen controls the facial expressions and motions. The technology has some limitations, however, one being that it requires a desktop computer; the researchers say running their system on a mobile platform is "infeasible." Low-light environments, which introduce noise into the footage, also cause issues.
