Assessing Generative AI-Generated Assignments: A Framework Based on the 3R

Authors

  • Dattatray G. Takale
  • Parikshit N. Mahalle
  • Bipin Sule

Keywords:

AI ethics, Automated grading, Educational assessment, Generative AI, 3R framework

Abstract

The advent of Generative AI technologies in education, particularly in automated grading systems, necessitates a nuanced approach to evaluating AI-generated assignments. This paper introduces the 3R Framework (Reliability, Relevance, and Robustness) as a novel methodology for grading assignments produced by generative AI. Reliability focuses on the consistency and accuracy of the generated content, ensuring it meets academic standards. Relevance examines the alignment of the content with the given prompts and its applicability to the subject matter, verifying that it addresses the core objectives of the assignment. Robustness assesses the resilience of the content against biases and errors, highlighting its ability to hold up across diverse educational contexts. By integrating principles from AI ethics and academic assessment, the 3R Framework addresses the unique challenges posed by the creative and subjective nature of AI-generated assignments. This comprehensive approach ensures fair and effective evaluation of generative AI content and promotes the ethical use of AI in education. The proposed framework is a critical tool for educators and institutions aiming to leverage AI for grading while maintaining high standards of academic integrity and educational quality.
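The three criteria could, for instance, be operationalized as numeric rubric scores and combined into an overall grade. A minimal sketch, assuming each criterion is scored on a 0–1 scale and combined by a weighted mean (the abstract does not prescribe a scoring formula; the class, scale, and weights here are purely illustrative):

```python
from dataclasses import dataclass

@dataclass
class ThreeRScore:
    """Hypothetical per-criterion scores, each on a 0.0-1.0 scale."""
    reliability: float  # consistency and accuracy of the content
    relevance: float    # alignment with the prompt and assignment objectives
    robustness: float   # resilience against bias and error

    def aggregate(self, weights=(1/3, 1/3, 1/3)) -> float:
        """Weighted mean of the three criteria (illustrative weighting only)."""
        w_reli, w_relv, w_robu = weights
        return (w_reli * self.reliability
                + w_relv * self.relevance
                + w_robu * self.robustness)

# Example: an assignment rated high on reliability, lower on robustness.
score = ThreeRScore(reliability=0.9, relevance=0.8, robustness=0.7)
print(round(score.aggregate(), 2))  # equal weights -> 0.8
```

In practice an institution would tune the weights to its own assessment policy; the equal weighting above is only a default for the sketch.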

Published

2024-07-06

How to Cite

Dattatray G. Takale, Parikshit N. Mahalle, & Bipin Sule. (2024). Assessing Generative AI-Generated Assignments: A Framework Based on the 3R. Journal of Data Mining and Management, 9(2), 11–16. Retrieved from https://matjournals.net/engineering/index.php/JoDMM/article/view/652

Section

Articles