Anyone worried about the ability of artificial intelligence (AI) to mimic reality is likely to be concerned by Nvidia's latest offering: an image translation AI that will almost certainly have you second-guessing everything you see online.
In October, Nvidia demonstrated the ability of one of its AIs to generate disturbingly realistic images of completely fake people. Now, the tech company has produced one that can generate fake videos.
The AI does a surprisingly decent job of changing day into night, winter into summer, and house cats into cheetahs (and vice versa).
Best (or worst?) of all, the AI does it all with much less training than existing systems.
Like Nvidia's face-generating AI, this image translation AI makes use of a type of algorithm called a generative adversarial network (GAN).
In a GAN, two neural networks improve by competing with each other: one generates an image or video, while the other critiques its work.
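That back-and-forth can be sketched in a few lines. The following is a toy illustration only, not Nvidia's system: a one-dimensional "generator" and "discriminator," each just a linear model with hand-computed gradients, take one adversarial update apiece. All names and numbers here are illustrative assumptions.

```python
# Toy sketch of adversarial (GAN-style) training on a 1-D problem.
# NOT Nvidia's architecture: both "networks" are tiny linear models.
import math
import random

random.seed(0)

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def mean(xs):
    return sum(xs) / len(xs)

# "Real" data comes from N(3, 1); the generator starts from N(0, 1) noise.
real = [random.gauss(3.0, 1.0) for _ in range(64)]
noise = [random.gauss(0.0, 1.0) for _ in range(64)]

# Discriminator D(x) = sigmoid(w*x + b); generator G(z) = a*z + c.
w, b, a, c = 0.5, 0.0, 1.0, 0.0
lr = 0.01

def d_loss(w, b, real, fake):
    # D wants real samples scored near 1 and generated samples near 0.
    return (-mean([math.log(sigmoid(w * x + b)) for x in real])
            - mean([math.log(1.0 - sigmoid(w * x + b)) for x in fake]))

def g_loss(w, b, a, c, noise):
    # G wants its outputs to fool D into scoring them near 1.
    return -mean([math.log(sigmoid(w * (a * z + c) + b)) for z in noise])

# One discriminator update (gradients worked out by hand).
fake = [a * z + c for z in noise]
before_d = d_loss(w, b, real, fake)
dr = [sigmoid(w * x + b) - 1.0 for x in real]  # real-term grad wrt score
df = [sigmoid(w * x + b) for x in fake]        # fake-term grad wrt score
w -= lr * (mean([g * x for g, x in zip(dr, real)])
           + mean([g * x for g, x in zip(df, fake)]))
b -= lr * (mean(dr) + mean(df))
after_d = d_loss(w, b, real, fake)

# One generator update against the freshly updated discriminator.
before_g = g_loss(w, b, a, c, noise)
ds = [sigmoid(w * (a * z + c) + b) - 1.0 for z in noise]
a -= lr * mean([g * w * z for g, z in zip(ds, noise)])
c -= lr * mean([g * w for g in ds])
after_g = g_loss(w, b, a, c, noise)

print(before_d, after_d, before_g, after_g)
```

Each small gradient step lowers the loss for the network that just moved, which is the competitive pressure that, at scale and with real neural networks, pushes a GAN's generated images toward realism.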
Typically, a GAN requires a significant amount of paired, labeled data to learn how to generate its own data. For example, a system would need to see pairs of images showing the same street with and without snow before it could generate either version on its own.
However, this new image translation AI developed by Nvidia researchers Ming-Yu Liu, Thomas Breuel, and Jan Kautz can imagine what a snow-covered version of a street would look like without ever actually seeing it.
Trusting Your Own Eyes
Liu told The Verge that the team's research is being shared with Nvidia's product teams and customers. While he said he couldn't comment on how quickly or to what extent the AI would be adopted, he noted that there are several interesting potential applications.
"For example, it rarely rains in California, but we'd like our self-driving cars to operate properly when it rains," he said.
"We can use our method to translate sunny California driving sequences to rainy ones to train our self-driving cars."
Beyond such practical applications, the tech could have whimsical ones as well. Imagine being able to see how your future home might look in the middle of winter when shopping for houses, or what a potential outdoor wedding location will look like in the fall when leaves blanket the ground.
That said, such technology could have nefarious uses as well. If widely adopted, our ability to trust any video or image based solely on what our eyes tell us would be greatly diminished.
Video evidence could become inadmissible in court, and fake news could become even more prevalent online as real videos become indistinguishable from those generated by AI.
Of course, right now, the capabilities of this AI are limited to just a few applications, and until it makes its way into consumer hands, we have no way of telling how it will impact society as a whole.