Generative AI Models: A Comparative Analysis

Authors

  • Dattatray G. Takale
  • Parikshit N. Mahalle
  • Bipin Sule

Keywords

ChatGPT, Generative Adversarial Networks (GANs), Generative Artificial Intelligence (GAI), Variational Autoencoders (VAE), OpenAI

Abstract

This paper presents a comprehensive comparative analysis of key Generative Artificial Intelligence (GAI) models: Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Transformers. The study examines their architectures, training methods, applications, strengths, and shortcomings. GANs rely on an adversarial training framework in which a generator and a discriminator compete; VAEs pair probabilistic encoders with decoders; and Transformers excel at modeling long-range dependencies. We explore how these models perform across domains such as image, text, music, and video generation, using both quantitative measures of success and qualitative assessments. Despite their advances, each model has distinctive advantages and drawbacks: for example, although GANs can produce high-quality images, they are prone to collapse in multi-task learning settings. The comparisons in this study offer valuable guidance to newcomers choosing the right Generative AI model for a particular problem, and the findings both inspire and point the way forward for scholars working in this field.

Published

2024-04-11

How to Cite

Dattatray G. Takale, Parikshit N. Mahalle, & Bipin Sule. (2024). Generative AI Models: A Comparative Analysis. Journal of Computer Science Engineering and Software Testing, 10(1), 32–38. Retrieved from https://matjournals.net/engineering/index.php/JOCSES/article/view/295

Section

Articles