Threat of Deepfake Enabled Attack

Authors

  • Ateeq Ahmed HKBK College of Engineering, Bengaluru, Karnataka, India
  • Aditya Singh HKBK College of Engineering, Bengaluru, Karnataka, India
  • Azain Khan HKBK College of Engineering, Bengaluru, Karnataka, India
  • Mahfuz Ahmed HKBK College of Engineering, Bengaluru, Karnataka, India
  • Injamoul Hoque HKBK College of Engineering, Bengaluru, Karnataka, India

DOI:

https://doi.org/10.46610/JoISSCCR.2024.v01i01.004

Keywords:

Artificial Intelligence (AI), Autoencoders, DeepFake, Deep learning, GAN model (Generative Adversarial Network model)

Abstract

Deepfake is a technique for replacing a person's face in a video with that of a target, making it appear as though the target is speaking another person's words or mimicking their facial expressions. Deepfake approaches involve face swapping or facial-expression alteration, particularly in images and videos. Fake photos and videos circulating online can readily exploit people, and this has recently become a public concern. The technique is called DeepFake because it uses deep learning to manipulate the face of one individual, known as the source, in a picture or video of another person, known as the target, to produce fake content. Over the past few decades, rapid breakthroughs in AI, machine learning, and deep learning have produced new techniques and a variety of tools for manipulating multimedia. This technology has generally been used for positive purposes, such as education and entertainment, yet dishonest actors have exploited it for darker or illicit ends. For instance, realistic, high-quality phoney audio, video, or image content has been produced to spread false information and propaganda, incite hatred and political unrest, or even harass and blackmail individuals. The word "deepfake" refers to these highly realistic, recently popularized manipulated videos. A number of tactics for addressing the problems raised by deepfakes have since been described in the literature; this paper presents a current summary of research on deepfake detection.

Author Biographies

Ateeq Ahmed, HKBK College of Engineering, Bengaluru, Karnataka, India

Assistant Professor, Department of Information Science and Engineering

Aditya Singh, HKBK College of Engineering, Bengaluru, Karnataka, India

Undergraduate Student, Department of Information Science and Engineering

Azain Khan, HKBK College of Engineering, Bengaluru, Karnataka, India

Undergraduate Student, Department of Information Science and Engineering

Mahfuz Ahmed, HKBK College of Engineering, Bengaluru, Karnataka, India

Undergraduate Student, Department of Information Science and Engineering

Injamoul Hoque, HKBK College of Engineering, Bengaluru, Karnataka, India

Undergraduate Student, Department of Information Science and Engineering

Published

2024-04-26

How to Cite

Ateeq Ahmed, Aditya Singh, Azain Khan, Mahfuz Ahmed, & Injamoul Hoque. (2024). Threat of Deepfake Enabled Attack. Journal of Information Security System and Cyber Criminology Research, 1(1), 24–31. https://doi.org/10.46610/JoISSCCR.2024.v01i01.004

Section

Articles