Evolution of Natural Language Processing: A Review

Authors

  • Sharavana K, HKBK College of Engineering, Bengaluru, Karnataka, India
  • Kedarnath Bhakta, HKBK College of Engineering, Bengaluru, Karnataka, India
  • Jayanth Sai Chethan S, HKBK College of Engineering, Bengaluru, Karnataka, India
  • Jayant Chand, HKBK College of Engineering, Bengaluru, Karnataka, India
  • Meet Joshi K, HKBK College of Engineering, Bengaluru, Karnataka, India

DOI:

https://doi.org/10.46610/JoKDSIM.2024.v01i01.004

Keywords:

Bidirectional Encoder Representations from Transformers (BERT), Contextual understanding, Generative pre-trained transformer, Long Short-Term Memory (LSTM), Natural Language Processing (NLP)

Abstract

Over the years, Natural Language Processing (NLP) has evolved dramatically, moving from early rule-based systems to the current era dominated by advanced deep learning models. This study gives an overview of the significant turning points and trends that have shaped the development of NLP. In its early days, NLP was dominated by rule-based methods: linguists manually created rules to analyze and comprehend human language. Although these systems were somewhat successful, they could not handle the complexity and unpredictability of natural language. The emergence of statistical approaches brought a major change, introducing probabilistic models and machine learning techniques. Methods developed during this period, such as n-gram models and hidden Markov models, allowed computers to capture linguistic patterns. Large-scale linguistic resources, such as annotated corpora and word embeddings, began to appear, further accelerating the development of NLP. Machine learning algorithms led to notable advances in tasks such as machine translation, named entity recognition, and part-of-speech tagging. In recent years, deep learning has revolutionized NLP by using neural networks to learn intricate language representations. Models such as Recurrent Neural Networks (RNNs) and Long Short-Term Memory networks (LSTMs) made it possible to better capture sequential dependencies in language. The addition of attention mechanisms, as demonstrated by the Transformer and related models, improves a model's ability to manage long-range dependencies and to perform better on a variety of NLP tasks. In the future, NLP will develop in ways that go beyond performance metrics, exploring interpretability, ethical issues, and the incorporation of multimodal data.
As the field matures, it becomes increasingly important to mitigate biases, ensure ethical AI deployment, and improve user-centric experiences. This review lays the groundwork for an in-depth examination of the development of NLP, highlighting significant milestones, challenges, and potential future directions in this vibrant and rapidly developing field.
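The n-gram models mentioned above can be illustrated with a minimal sketch. This is not code from the paper; the toy corpus and the function name `bigram_prob` are invented for demonstration. It estimates the conditional probability of a word given its predecessor by relative frequency, which is the simplest (bigram) instance of the n-gram approach.

```python
from collections import Counter

# Toy corpus (illustrative only)
corpus = "the cat sat on the mat the cat ran".split()

# Count single words and adjacent word pairs
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_prob(w1, w2):
    """Maximum-likelihood estimate of P(w2 | w1)."""
    return bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

print(bigram_prob("the", "cat"))  # 2 of the 3 occurrences of "the" precede "cat"
```

Real n-gram systems of the statistical era extended this idea with longer histories and smoothing techniques to handle unseen word sequences.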

Author Biographies

Sharavana K, HKBK College of Engineering, Bengaluru, Karnataka, India

Assistant Professor, Department of Information Science and Engineering

Kedarnath Bhakta, HKBK College of Engineering, Bengaluru, Karnataka, India

Undergraduate Student, Department of Information Science and Engineering

Jayanth Sai Chethan S, HKBK College of Engineering, Bengaluru, Karnataka, India

Undergraduate Student, Department of Information Science and Engineering

Jayant Chand, HKBK College of Engineering, Bengaluru, Karnataka, India

Undergraduate Student, Department of Information Science and Engineering

Meet Joshi K, HKBK College of Engineering, Bengaluru, Karnataka, India

Undergraduate Student, Department of Information Science and Engineering

Published

2024-04-15

How to Cite

Sharavana K, Kedarnath Bhakta, Jayanth Sai Chethan S, Jayant Chand, & Meet Joshi K. (2024). Evolution of Natural Language Processing: A Review. Journal of Knowledge in Data Science and Information Management, 1(1), 30–38. https://doi.org/10.46610/JoKDSIM.2024.v01i01.004

Issue

Section

Articles