SummarifyAI: An Automatic Text Summarisation Tool

Authors

  • Svayam S Topajiche
  • Samiksha S Kushalkar
  • Mithali R Sadekar
  • Omkar V Timmapur
  • Vijaylaxmi C. Kalal

Keywords

Deep Learning, Generative AI, Natural Language Processing, Real-Time Output, SummarifyAI Platform

Abstract

In the era of information overload, the ability to extract relevant information efficiently from extensive textual content is increasingly critical. This paper presents SummarifyAI, an intelligent, deep learning-based text summarization system that generates concise, coherent summaries of long-form documents. Built on a Transformer-based architecture and delivered as an interactive web application via Streamlit, SummarifyAI employs a pre-trained natural language processing model fine-tuned for abstractive summarization. The system enhances comprehension, saves reading time, and supports users across domains such as academia, journalism, and legal research. We also address performance benchmarks, model optimization techniques, and user interface considerations to ensure an effective, user-friendly summarization experience. Experimental results demonstrate the efficacy of SummarifyAI in producing high-quality summaries with minimal information loss.
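As a sketch of how such a pipeline can be wired together (the chunk sizes, the model name `facebook/bart-large-cnn`, and the function names below are illustrative assumptions, not details taken from the paper): long documents must first be split into pieces that fit a Transformer's input window, each piece summarized, and the partial summaries joined for display.

```python
# Sketch: split a long document into overlapping word-based chunks so each
# piece fits an abstractive summarizer's input window. The chunking logic is
# plain Python; the model/app wiring is shown in comments only.

def chunk_text(text: str, max_words: int = 400, overlap: int = 50) -> list[str]:
    """Split `text` into chunks of at most `max_words` words, carrying
    `overlap` words of context between adjacent chunks."""
    words = text.split()
    if len(words) <= max_words:
        return [" ".join(words)] if words else []
    chunks = []
    step = max_words - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks

# Hypothetical Streamlit wiring (assumes `streamlit` and `transformers`
# are installed; model choice is an assumption, not the paper's):
#   import streamlit as st
#   from transformers import pipeline
#   summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
#   doc = st.text_area("Paste your document")
#   if st.button("Summarize"):
#       parts = [summarizer(c, max_length=150, min_length=30)[0]["summary_text"]
#                for c in chunk_text(doc)]
#       st.write(" ".join(parts))
```

The overlap keeps sentence context from being cut cleanly at chunk boundaries, which helps the summarizer stay coherent across pieces; actual limits depend on the chosen model's token budget rather than a fixed word count.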

References

M. M. Saiyyad and N. N. Patil, “Text summarization using deep learning techniques: A review,” Engineering Proceedings, vol. 59, no. 1, p. 194, Jan. 2024, doi: https://doi.org/10.3390/engproc2023059194

G. Wang and W. Wu, “Surveying the landscape of text summarization with deep learning: A comprehensive review,” Discrete Mathematics, Algorithms and Applications, vol. 16, no. 3, Dec. 2023, doi: https://doi.org/10.1142/s1793830923300047

H. Zhang, X. Liu, and J. Zhang, “HEGEL: Hypergraph transformer for long document summarization,” Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Abu Dhabi, United Arab Emirates, pp. 10167–10176, Jan. 2022, doi: https://doi.org/10.18653/v1/2022.emnlp-main.692

M. Guo et al., “LongT5: Efficient text-to-text transformer for long sequences,” arXiv preprint, May 2022. Available: https://arxiv.org/abs/2112.07916

R. M. Alguliev, R. M. Aliguliyev, and N. R. Isazade, “Multiple documents summarization based on evolutionary optimization algorithm,” Expert Systems with Applications, vol. 40, no. 5, pp. 1675–1689, Apr. 2013, doi: https://doi.org/10.1016/j.eswa.2012.09.014

R. M. Alguliev, R. M. Aliguliyev, and C. A. Mehdiyev, “pSum‐SaDE: A modified p‐median problem and self‐adaptive differential evolution algorithm for text summarization,” Applied Computational Intelligence and Soft Computing, vol. 2011, no. 1, Jan. 2011, doi: https://doi.org/10.1155/2011/351498

R. M. Alguliyev, R. M. Aliguliyev, and N. R. Isazade, “An unsupervised approach to generating generic summaries of documents,” Applied Soft Computing, vol. 34, pp. 236–250, Sep. 2015, doi: https://doi.org/10.1016/j.asoc.2015.04.050

R. M. Alguliyev, R. M. Aliguliyev, N. R. Isazade, A. Abdi, and N. Idris, “COSUM: Text summarization based on clustering and optimization,” Expert Systems, vol. 36, no. 1, p. e12340, Oct. 2018, doi: https://doi.org/10.1111/exsy.12340

Q. A. Al-Radaideh and D. Q. Bataineh, “A hybrid approach for Arabic text summarization using domain knowledge and genetic algorithms,” Cognitive Computation, vol. 10, no. 4, pp. 651–669, Mar. 2018, doi: https://doi.org/10.1007/s12559-018-9547-z

M. Lewis et al., “BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension,” arXiv preprint, Oct. 2019, doi: https://doi.org/10.48550/arxiv.1910.13461

C. Raffel et al., “Exploring the limits of transfer learning with a unified text-to-text transformer,” Journal of Machine Learning Research, vol. 21, no. 140, pp. 1–67, 2020, Available: https://www.jmlr.org/papers/v21/20-074.html

Y. Liu and M. Lapata, “Text summarization with pre-trained encoders,” Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, Hong Kong, China, Nov. 2019, pp. 3730–3740, Available: https://aclanthology.org/D19-1387.pdf

M. Wang, P. Xie, Y. Du, and X. Hu, “T5-based model for abstractive summarization: A semi-supervised learning approach with consistency loss functions,” Applied Sciences, vol. 13, no. 12, p. 7111, Jan. 2023, doi: https://doi.org/10.3390/app13127111

T. Shi, Y. Keneshloo, N. Ramakrishnan, and C. K. Reddy, “Neural Abstractive Text Summarization with Sequence-to-Sequence Models,” ACM/IMS Transactions on Data Science, vol. 2, no. 1, pp. 1–37, Jan. 2021, doi: https://doi.org/10.1145/3419106

Published

2025-06-16

How to Cite

Svayam S Topajiche, Samiksha S Kushalkar, Mithali R Sadekar, Omkar V Timmapur, & Vijaylaxmi C. Kalal. (2025). SummarifyAI: An Automatic Text Summarisation Tool. International Journal of AI and Machine Learning Innovations in Electronics and Communication Technology, 1(1), 20–28. Retrieved from https://matjournals.net/engineering/index.php/IJAIMLECT/article/view/2027

Section

Articles