Synthetic media created with malicious intent can deepen existing divisions in unprecedented ways: it can cross language barriers and, unlike textual misinformation, reach even people who cannot read. Increasingly realistic synthetic media also has an unintended side effect: it gives malicious actors a way to evade accountability by suggesting that any evidence might be fake. If the misinformation spread through deepfakes and shallowfakes is not handled properly, the reputations of many of their subjects could suffer. “By these ‘shallowfakes,’ I mean the tens of thousands of videos currently circulating around the world with malicious intent: not created with sophisticated AI, but often simply relabeled and re-uploaded, claiming that an event in one place just happened in another,” Gregory said.

Face replacement (or face swapping) is another deepfake technique, one that transfers a person's face onto another body. It has gained notoriety for its extensive use in the production of deepfake pornography. But the area that has benefited most from advances in AI research is face generation: the production of realistic faces of people who do not exist. The technique has reached phenomenal levels of realism through generative adversarial networks (GANs), essentially two deep neural networks, a generator and a discriminator, trained to fool each other.

The team also modified the model architecture and retrained it on the expanded training dataset. Through this trial and error, we raised accuracy to an above-average level and solved the problem. This is one method for detecting cheapfake media. Cheapfakes are produced in many forms across all industries, and various approaches and technologies for detecting them are being developed as well.
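The adversarial setup behind face generation can be sketched in miniature. The toy example below is an illustrative assumption, not any production face-generation model: it trains a two-parameter "generator" against a logistic-regression "discriminator" on one-dimensional data. The generator learns to shift its output distribution toward the real one, because that is the only way to keep fooling the discriminator.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator must imitate: samples from N(4, 0.5).
def sample_real(n):
    return rng.normal(4.0, 0.5, n)

a, b = 1.0, 0.0   # generator: g(z) = a*z + b, applied to N(0, 1) noise
w, c = 0.0, 0.0   # discriminator: d(x) = sigmoid(w*x + c)

lr_d, lr_g, batch = 0.1, 0.02, 64
for _ in range(5000):
    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    x_real = sample_real(batch)
    x_fake = a * rng.normal(0.0, 1.0, batch) + b
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w -= lr_d * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr_d * np.mean(-(1 - d_real) + d_fake)

    # Generator step: push d(fake) toward 1 (non-saturating loss -log d).
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    g = -(1 - sigmoid(w * x_fake + c)) * w   # gradient of the loss w.r.t. x_fake
    a -= lr_g * np.mean(g * z)
    b -= lr_g * np.mean(g)

# After training, generated samples should cluster near the real mean of 4.0.
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10_000) + b))
print(round(fake_mean, 2))
```

Real GANs replace the two affine maps with deep convolutional networks, but the training loop follows the same pattern: alternate updates, with each network's loss defined by the other's output.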

Team Nine also works to develop and improve technologies that respond to digital manipulation. In the next issue, we will look more closely at blocking image manipulation in advance. ※ This article is based on objective research and facts available at the time of writing, but it may not represent the views of the Company. Text-to-speech can be used to trick people into believing someone said something they didn't. That is deception, and it is wrong in the same way lying is. It is especially wrong to scam people or fabricate news, and voice conversion makes that kind of abuse easier. Companies in the synthetic media industry therefore have a duty to take control of their technology, educate the public about what is technically possible, and reduce the likelihood that people fall for misleading synthetic speech.

Provenance, the record of which input elements contributed to an output, is an approach that has been integrated into the deepfake detection work of institutions such as the US government's DARPA Media Forensics team and startups like Truepic. The European Commission has allocated Horizon 2020 funds to InVID, a video authentication service, and to Provenance, a project that will use blockchain technology to record trusted multimedia content. Shallowfakes and deepfakes pose real danger when the tools fall into the wrong hands. Given that we are dealing with software that is widely used and freely distributed on the Internet, it is safe to say that both pose an immediate and serious threat to society as a whole. A deepfake is a digitally falsified image or video that makes a person appear to be someone else; it is the next step in creating fake content with artificial intelligence (AI). The previous article introduced deepfakes: images, videos, and other media synthesized with AI. But how was media manipulated before deepfake technology? Before the advent of AI, traditional methods were used to falsify images and videos.
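The blockchain-backed provenance idea can be illustrated with a minimal hash chain. The sketch below is a simplified assumption of how such a ledger might work; the class and field names are hypothetical and do not describe the actual InVID or Provenance systems. Each record commits to the previous one, so tampering with any registered entry, or presenting an edited file, becomes detectable.

```python
import hashlib
import json

class ProvenanceLedger:
    """Toy append-only ledger: each record's hash covers the previous record,
    so altering any entry invalidates every record after it (a hash chain)."""

    def __init__(self):
        self.records = []

    def register(self, media_bytes, source):
        prev_hash = self.records[-1]["record_hash"] if self.records else "0" * 64
        record = {
            "media_hash": hashlib.sha256(media_bytes).hexdigest(),
            "source": source,
            "prev_hash": prev_hash,
        }
        # Hash the record body deterministically, then store the result in it.
        record["record_hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.records.append(record)
        return record["record_hash"]

    def verify_media(self, media_bytes):
        """Check that the chain is intact and this exact file was registered."""
        h = hashlib.sha256(media_bytes).hexdigest()
        prev = "0" * 64
        for rec in self.records:
            body = {k: rec[k] for k in ("media_hash", "source", "prev_hash")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if rec["prev_hash"] != prev or expected != rec["record_hash"]:
                return False  # chain broken or record tampered with
            prev = rec["record_hash"]
        return any(r["media_hash"] == h for r in self.records)

ledger = ProvenanceLedger()
ledger.register(b"original-video-bytes", "newsroom-camera-01")
print(ledger.verify_media(b"original-video-bytes"))  # True: registered copy
print(ledger.verify_media(b"edited-video-bytes"))    # False: altered copy
```

A real deployment would anchor the chain's head hash on a public blockchain so that no single party, including the ledger's operator, could rewrite history; the verification logic stays the same.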

For example, video editing was mainly used to create videos that intentionally distorted their content, and image editing programs were used to create composite images for inappropriate purposes. Going back further, people manually altered documents or images, forging a signature on a contract or drawing a mustache on a photo. Nowadays, as digital forgery methods have evolved significantly, people have started using specialized programs to edit content. This type of manipulation is called a cheapfake: media modified by humans using conventional, affordable techniques without much time or effort. It is also called a shallowfake, in contrast to a deepfake. While many have praised AB 602, some believe California's AB 730 and Texas's SB 751 will not be effective because of the strong First Amendment protections surrounding political speech, especially online. The U.S. Senate has passed a bill requiring the Department of Homeland Security to monitor and report on technology used to create deepfakes. A bill similar to California's two new laws is also under consideration in the U.S. House of Representatives; it would criminalize the knowing distribution of shallowfakes and deepfakes of politicians in the run-up to elections, and would give citizens whose images are used to create pornographic material without their consent a private cause of action.

Of course, these laws do not criminalize shallowfakes of Jim Acosta, Nancy Pelosi, or Joe Rogan, because those videos neither target politicians in the run-up to an election nor are pornographic. One example of an accelerated shallowfake is the video of CNN reporter Jim Acosta that Sarah Sanders tweeted, which made him appear more aggressive than he actually was while interacting with an intern. While deepfakes have yet to reach the mainstream, there is already a problematic flood of misinformation that remains unresolved. Today's fake news typically does not rely on AI or complex technology. On the contrary, simple tricks such as mislabeling content to discredit activists or spread false information can be devastating, and have sometimes even led to deadly violence, as happened in Myanmar.

[Image: AI-generated faces from the Generated Photos deepfake gallery]

The examples above are shallowfakes, because the source material is original and actually exists. In fact, you could argue that a Snapchat filter is a shallowfake, since it can erase all kinds of wrinkles. But I digress. Shallowfakes are not as convincing as deepfakes: lacking the same depth of realism, they are much easier to detect with the naked eye. Another shallowfake example occurred in the summer of 2019, when a video circulated on social media that appeared to show House Speaker Nancy Pelosi slurring her words in an interview, as if she were intoxicated.

In fact, the video had been slowed down and manipulated to make Speaker Pelosi appear drunk. A second shallowfake, supposedly showing Nancy Pelosi at a press conference, was selectively edited to make it look as if she were stuttering. The video was of high quality, and prominent figures, including President Trump, treated it as authentic. According to human rights activist Sam Gregory, the tech industry has a unique opportunity to tackle “deepfakes,” the problem of fake audio and video files created with artificial intelligence, before they become a widespread problem. While the fight against shallowfakes and deepfakes shows promising progress, some experts worry that it may never be enough to keep up with the latest techniques for leading the public to believe that something fake is real. New algorithms can detect deepfakes, but unfortunately, it is not a quick process.
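A uniform slowdown of the kind used in the Pelosi video can sometimes be flagged with very simple arithmetic, because stretching playback changes the relationship between a clip's declared duration and its original frame timing. The sketch below is a hypothetical illustration; the function and the numbers are assumptions for the example, not a description of how that video was actually analyzed.

```python
# If the manipulated file retains the original capture timestamps in its
# metadata while its playback is stretched, the ratio of declared duration
# to recorded duration reveals the slowdown.
def slowdown_factor(frame_timestamps, declared_duration):
    """Ratio of played duration to recorded duration (1.0 = unmodified)."""
    recorded = frame_timestamps[-1] - frame_timestamps[0]
    return declared_duration / recorded

# Hypothetical clip: 3.0 s of 30 fps footage stretched to play over 4.5 s.
original_ts = [i / 30 for i in range(91)]  # timestamps 0.0 .. 3.0 s
factor = slowdown_factor(original_ts, 4.5)
print(round(factor, 2))  # 1.5 -> plays 50% slower than it was recorded
```

In practice the comparison is messier, since re-encoding can rewrite timestamps, but mismatches between container metadata, audio pitch, and frame timing remain common clues for spotting speed-manipulated shallowfakes.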