Researchers at MIT, Microsoft, and Adobe have developed an algorithm that can reconstruct an audio signal by analyzing minute vibrations of objects depicted in video. In one set of experiments, they were able to recover intelligible speech from the vibrations of a potato-chip bag photographed from 15 feet away through soundproof glass.
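To make the idea concrete, here is a minimal, hedged sketch of the general concept only, not the researchers' algorithm (which recovers sub-pixel motion far more robustly): it reads a high-speed video with OpenCV, tracks the mean intensity change of a fixed image region across frames as a crude proxy for the object's vibration, and writes the resulting 1-D signal out as audio. The file name, region of interest, and the use of OpenCV/SciPy are illustrative assumptions.

```python
# Illustrative sketch only: NOT the published method, which extracts
# sub-pixel motion much more carefully. Here, tiny frame-to-frame
# intensity changes in a region of a high-speed video are treated as
# a 1-D signal and saved as audio.
import numpy as np
import cv2                      # OpenCV, assumed available
from scipy.io import wavfile    # SciPy, assumed available

VIDEO_PATH = "chip_bag_highspeed.avi"      # hypothetical input file
ROI = (slice(100, 200), slice(150, 250))   # hypothetical region covering the object

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS)            # high-speed capture rate, e.g. 2000+ fps

samples = []
prev = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    roi = gray[ROI]
    if prev is not None:
        # Mean intensity change is a crude stand-in for the minute
        # vibration signal the real method recovers.
        samples.append(float(np.mean(roi - prev)))
    prev = roi
cap.release()

signal = np.asarray(samples)
signal -= signal.mean()                     # remove DC offset
signal /= (np.abs(signal).max() + 1e-9)     # normalize to [-1, 1]

# Write the recovered signal as a WAV file sampled at the video frame rate.
wavfile.write("recovered.wav", int(fps), (signal * 32767).astype(np.int16))
```

Because the audio sample rate equals the video frame rate, a conventional 30 or 60 fps camera would only capture very low frequencies; recovering intelligible speech this way depends on filming at thousands of frames per second, or on exploiting rolling-shutter effects as the researchers also demonstrated.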
http://feeds.sciencedaily.com/~r/sciencedaily/~3/wY4_GzaZuHA/140804100559.htm
Extracting audio from visual information: Algorithm recovers speech from vibrations of a potato-chip bag filmed through soundproof glass
August 4, 2014