Deepfakes Only Require ONE Image
Deepfakes are synthesized images of real people generated by a GAN (generative adversarial network). The technology is predictably popular in pornography, where miscreants make money selling fake videos of non-pornstar celebrities engaged in explicit acts. Celebrities were the initial targets of deepfakes because the GAN required many images from multiple angles to generate convincing renderings. Thanks to Samsung, advances in deepfake technology now require only a single image. Dartmouth researcher Hany Farid, a deepfake forensics analyst, says, "Following the trend of the past year, this and related techniques require less and less data and are generating more and more sophisticated and compelling content ... these results are another step in the evolution of techniques ... leading to the creation of multimedia content that will eventually be indistinguishable from the real thing."
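The adversarial idea behind a GAN can be shown in miniature. The following is a toy sketch, not a deepfake pipeline: it assumes NumPy is available and shrinks both networks to scalar models on 1-D data, where a "generator" tries to produce samples a logistic "discriminator" cannot tell apart from real draws. All names and hyperparameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # "Real data" for this toy: draws from a Gaussian the generator must mimic
    return rng.normal(4.0, 1.25, size=n)

# Generator: x = g_w * z + g_b   (maps noise z to a sample)
# Discriminator: p = sigmoid(d_w * x + d_b)   (probability x is real)
g_w, g_b = 1.0, 0.0
d_w, d_b = 0.1, 0.0

lr, batch = 0.01, 64
for step in range(3000):
    z = rng.normal(size=batch)
    fake = g_w * z + g_b
    real = real_batch(batch)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(d_w * x + d_b)
        grad = p - label                # derivative of BCE loss w.r.t. the logit
        d_w -= lr * np.mean(grad * x)
        d_b -= lr * np.mean(grad)

    # Generator update: push D(G(z)) toward 1, i.e. fool the discriminator
    z = rng.normal(size=batch)
    fake = g_w * z + g_b
    p = sigmoid(d_w * fake + d_b)
    grad = (p - 1.0) * d_w              # chain rule through D into G's output
    g_w -= lr * np.mean(grad * z)
    g_b -= lr * np.mean(grad)
```

A real deepfake GAN replaces both scalars with deep convolutional networks and the 1-D Gaussian with face images, but the training loop keeps this same two-player shape: the discriminator sharpens its test for fakes while the generator learns to pass it.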
As technology turns fake audio and real-time fake video editing into commodity tools, the added ability to animate a person from a single image will have dramatic impacts on society. These tools will be misused in political campaigns, news reports, PSYOP campaigns, school bullying, divorce cases ... the possibilities for easily ruining lives are endless. Spotting deepfakes is already difficult, and usually only possible when the content is wildly implausible; it will only become harder as the technology advances.