
Deepfakes – Boon or Bane?

Applications that use the camera are getting more sophisticated by the day. Brand new filters are released all the time: animal ears, pimple removal, face slimming, and whatnot. Some applications can even create fake videos that are next to impossible to spot with the human eye. The technology behind this, which has rapidly become accessible to ordinary people, is termed "deepfakes". Deepfakes, as the name suggests, use a form of artificial intelligence called deep learning to make images of fake events.

In simple words, it can be described as a technology that manipulates digital content such as video, yielding images and sounds fabricated to such an extent that they appear almost real. This technology poses a big challenge to cybersecurity experts and governments. Though these systems are smart enough to fool the human brain, they are not human. We humans are the smartest creatures on this planet, and awareness and vigilance are our weapons against this peril.

Whether deepfakes are a boon or a bane is a big question that remains unanswered. The deep learning algorithms used in deepfakes can transform the work of filmmakers and 3D artists by reducing the effort required for editing; your favorite 3D film may already be using the same technology. Film stars sometimes become famous overnight because their videos go viral on the internet and audiences watch them over and over again. Plastic surgeons can also benefit from this technology, since they will be able to reconstruct a face virtually and understand the stages involved in the surgery, which can increase the success rates of such complex operations.

On the other hand, deepfakes can turn out to be detrimental as well. Bogus, misleading evidence may be created, leading to fake news being broadcast that can ruin an innocent person's life. The technology can easily be used for despicable purposes such as extortion.

It is said that people believe what they see. The truth, however, is often the other way round: human beings hunt for evidence that supports what they want to accept as true and overlook the rest. Malevolent actors exploit this tendency using Generative Adversarial Networks (GANs), which makes their fakes powerful. A GAN consists of two machine learning models: one trains on a data set and creates video forgeries (the forger, or generator), while the other attempts to detect the fakes (the detector, or discriminator). The forger keeps generating fakes until the detector can no longer tell them apart from real footage. The larger the training data set, the easier it is for the forger to produce a convincing deepfake. We already see the effect with hoax news, where misrepresentations spread rapidly under the guise of truth; by the time they are identified, the damage is done.
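
To make the forger-versus-detector idea concrete, here is a minimal, hypothetical sketch of the adversarial training loop, assuming the PyTorch library and random tensors standing in for real image data. It only illustrates how the two models compete; the sizes, layer choices, and learning rates are invented for the example, and it is nowhere near a working deepfake system.

```python
# Hypothetical GAN sketch (assumes the PyTorch library): a "forger"
# (generator) and a "detector" (discriminator) are trained against each other.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32   # made-up sizes for illustration

# Forger: turns random noise into a fake sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# Detector: scores how likely a sample is to be real (1) rather than fake (0).
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_batch = torch.randn(batch, data_dim)  # stand-in for real training images

for step in range(200):
    # Train the detector on real samples and detached forgeries.
    fake_batch = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_batch), torch.ones(batch, 1)) +
              loss_fn(discriminator(fake_batch), torch.zeros(batch, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the forger to make the detector label its fakes as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))),
                     torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The key point of the loop is the alternation: each side only improves because the other one does, which is why larger and better training data makes the resulting forgeries harder to spot.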

Deepfake detection is a tough problem. Unprofessional deepfakes can be spotted by the naked eye, and machines can look for other signs, such as a lack of eye blinking or shadows that look wrong. But the GANs that generate deepfakes keep improving, and soon we will have to rely on digital forensics to detect them. DARPA is pouring money into research on better techniques for validating video. However, because GANs can themselves be trained to evade such forensics, it is doubtful whether this technology can ever be fully conquered.
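
As an example of one such hand-crafted cue, the eye-blink check can be approximated with the eye aspect ratio (EAR), which drops sharply whenever an eye closes. The sketch below assumes that six landmark points per eye, per frame, are already available from some facial landmark detector (not shown); the point ordering and the 0.2 threshold are illustrative assumptions, not any specific tool's API.

```python
# Illustrative blink cue: the eye aspect ratio (EAR) drops sharply when an
# eye closes. Landmark extraction is assumed to happen elsewhere.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmarks ordered around one eye."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # first vertical distance
    v2 = np.linalg.norm(eye[2] - eye[4])   # second vertical distance
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal eye width
    return (v1 + v2) / (2.0 * h)

def blink_fraction(ear_per_frame, closed_threshold=0.2) -> float:
    """Fraction of frames in which the eye appears closed; a value near
    zero over a long clip is one reason to look more closely."""
    ears = np.asarray(ear_per_frame, dtype=float)
    return float((ears < closed_threshold).mean())
```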

Deepfakes produced by face-swapping introduce resolution irregularities in the composited image, and these can be recognized using deep learning techniques. Neural networks can also detect inconsistencies across the many frames of a video sequence, which often result from face-swapping. Methods for detecting digital manipulations such as scaling, rotation, or splicing are also commonly applied to deepfakes.
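
A hedged sketch of that frame-level idea, again assuming PyTorch: a small convolutional network scores each cropped face for blending artifacts, and the per-frame scores are averaged over the video. Real detectors are far more elaborate and need training on labeled data; the architecture, input sizes, and names here are invented for illustration.

```python
# Hypothetical frame-level detector sketch (assumes the PyTorch library):
# a small CNN scores each cropped face, and scores are averaged per video.
import torch
import torch.nn as nn

class FrameArtifactDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, frames):                        # frames: (N, 3, H, W)
        feats = self.features(frames).flatten(1)
        return torch.sigmoid(self.classifier(feats))  # per-frame fake score

detector = FrameArtifactDetector()                    # untrained, shapes only
face_crops = torch.rand(8, 3, 128, 128)               # stand-in for 8 face crops
video_score = detector(face_crops).mean().item()      # average over frames
print("video-level fake score:", video_score)
```
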
Ultimately, technological deepfake detection solutions, no matter how good they get, won't prevent every deepfake from circulating. And legal remedies, however effective, are usually applied after the fact. This means they will be of limited help in addressing the damage deepfakes can do, particularly given the short window between the creation, distribution, and consumption of digital media.

As a result, improved public awareness needs to be a further part of the strategy for battling deepfakes. When we see videos showing bizarre behavior, it is important not to immediately assume that the activities depicted are genuine. When a prominent suspected deepfake video is published, it will usually be possible to establish within a few days, or even a couple of hours, whether there is reliable evidence that it has been fabricated. That knowledge won't stop deepfakes, but it can certainly help minimize their influence.

Last Updated: 23 Mar, 2020