CodeLens: An Automated Programming Assessment and Feedback System
Keywords: AI-based feedback, AST, Automated code evaluation, Plagiarism detection, Programming instruction

Abstract
The rapid growth of programming education has intensified the need for scalable, consistent, and intelligent evaluation systems capable of providing immediate and personalized feedback. Traditional manual grading is slow, subjective, and unable to capture deeper structural and semantic aspects of student code. This paper presents CodeLens, an automated multi-language programming assessment framework that integrates Abstract Syntax Tree (AST) analysis, static evaluation, plagiarism detection, secure sandbox execution, and AI-driven feedback generation. Unlike conventional output-based graders, CodeLens evaluates code holistically by analyzing syntactic structure, algorithmic flow, efficiency, readability, and originality. Its hybrid symbolic–AI approach enhances interpretability and fairness, while modular dashboards provide tailored insights for students, instructors, and administrators. The system’s AST-driven analytics detect logical errors, measure code quality, and identify structural similarities to prevent plagiarism. An AI-mediated feedback module translates complex code analysis into concise, actionable guidance, promoting learner autonomy and improving conceptual understanding. With its scalable architecture and customizable panels, CodeLens supports diverse institutional requirements and large-scale deployments. Overall, the study demonstrates that combining symbolic reasoning with LLM-based feedback significantly improves transparency, consistency, and pedagogical effectiveness in automated programming assessment.
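To illustrate the kind of structure-level comparison the abstract describes, the sketch below contrasts two programs by their AST node-type sequences rather than their text, so that renaming variables does not hide structural similarity. This is a minimal illustration of the general AST-comparison idea, not CodeLens's actual algorithm; the function names and the use of Python's `ast` and `difflib` modules are illustrative choices.

```python
# Minimal sketch of AST-based structural similarity (illustrative only;
# not the CodeLens implementation).
import ast
import difflib

def ast_node_types(source: str) -> list[str]:
    """Flatten a program's AST into a sequence of node type names,
    discarding identifiers and literals so only structure remains."""
    tree = ast.parse(source)
    return [type(node).__name__ for node in ast.walk(tree)]

def structural_similarity(a: str, b: str) -> float:
    """Similarity in [0, 1] between the two programs' AST shapes."""
    return difflib.SequenceMatcher(None, ast_node_types(a), ast_node_types(b)).ratio()

# Two renamed-variable variants of the same loop score as identical in
# structure, even though a plain text diff would flag many differences.
original  = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s"
renamed   = "def accum(vs):\n    acc = 0\n    for v in vs:\n        acc += v\n    return acc"
different = "def total(xs):\n    return sum(xs)"

print(structural_similarity(original, renamed))    # identical structure -> 1.0
print(structural_similarity(original, different))  # different structure -> lower score
```

A production system would typically go further, e.g. computing a true tree edit distance instead of comparing flattened node sequences, but the sketch shows why AST-level comparison is robust to superficial rewrites such as identifier renaming.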
References
Y. Song, C. Lothritz, X. Tang, and T. Bissyandé, “Revisiting Code Similarity Evaluation with Abstract Syntax Tree Edit Distance,” in Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 38–46, 2024, doi: https://doi.org/10.18653/v1/2024.acl-short.3
B. İçöz and G. Biricik, “Automated Code Review Using Large Language Models with Symbolic Reasoning,” 2025 9th International Symposium on Innovative Approaches in Smart Technologies (ISAS), pp. 1–5, Jun. 2025, doi: https://doi.org/10.1109/isas66241.2025.11101776
S. Park, H. Jin, J. Cha, and Y.-S. Han, “Detection of LLM-Paraphrased Code and Identification of the Responsible LLM Using Coding Style Features,” arXiv, 2025. [Online]. Available: https://arxiv.org/abs/2502.17749
M. Messer, N. C. C. Brown, M. Kölling, and M. Shi, “Automated Grading and Feedback Tools for Programming Education: A Systematic Review,” ACM Transactions on Computing Education, vol. 24, no. 1, Dec. 2023, doi: https://doi.org/10.1145/3636515
H. Patil, S. Ambre, K. Bhosale, A. Singh, H. Jha, and A. Maurya, “Code Plagiarism and Originality Detection using Machine Learning for Ethical Code Practices,” International Journal of Intelligent Systems and Applications in Engineering, vol. 12, no. 3, pp. 209–215, Mar. 2023. [Online]. Available: https://ijisae.org/index.php/IJISAE/article/view/5242
G. Lee, J. Kim, M. Choi, R.-Y. Jang, and R. Lee, “Review of Code Similarity and Plagiarism Detection Research Studies,” Applied Sciences, vol. 13, no. 20, p. 11358, Jan. 2023, doi: https://doi.org/10.3390/app132011358
N. Siddiqui and Deepshikha, “Real-Time Code Plagiarism Detection Using NLP and Machine Learning for Academic and Industry Applications,” 2025. [Online]. Available: https://www.irjet.net/archives/V12/i6/IRJET-V12I689.pdf
E.-Q. Tseng, P.-C. Huang, C. Hsu, P.-Y. Wu, C.-T. Ku, and Y. Kang, “CodEv: An Automated Grading Framework Leveraging Large Language Models for Consistent and Constructive Feedback,” in 2024 IEEE International Conference on Big Data (BigData), pp. 5442–5449, Dec. 2024, doi: https://doi.org/10.1109/bigdata62323.2024.10825949
M. Pankiewicz and R. S. Baker, “Large Language Models (GPT) for automating feedback on programming assignments,” arXiv, Jun. 2023, doi: https://doi.org/10.48550/arxiv.2307.00150
R. Parvathy, M. G. Thushara, and J. M. Kannimoola, “Automated Code Assessment and Feedback: A Comprehensive Model for Improved Programming Education,” IEEE Access, pp. 1–1, 2025, doi: https://doi.org/10.1109/access.2025.3554838