Evaluating Elementary-Level English Essays: Human-AI Synergy and the Role of Cognitive Load Theory
DOI: https://doi.org/10.59075/rjs.v3i2.156

Keywords: Artificial Intelligence (AI); Human Evaluation; Cognitive Load Theory; Hybrid Grading System; Elementary-Level Writing Assessment

Abstract
This study examines the synergy between human evaluators and an AI-based system in assessing elementary-level English essays, focusing on key linguistic features such as grammar, syntax, spelling, content, and clarity. A dataset of 30 student-written essays is used to evaluate the effectiveness, reliability, and subjectivity of both evaluation methods. A boxplot comparing human and AI evaluation scores offers insight into each evaluator's behaviour in applying the grading rubric. Cronbach's Alpha values indicate high internal consistency for both evaluation methods, with human evaluators demonstrating slightly greater reliability. The study also draws on Cognitive Load Theory (Sweller, 1988) to explain the cognitive demands placed on human evaluators versus the rule-based processing of AI. These findings suggest that while AI provides efficiency in mechanical assessments, human evaluators bring a nuanced understanding, emphasising the complementary roles of both in educational assessment. The study advocates a hybrid approach that combines the strengths of human and AI evaluations to enhance assessment fairness and accuracy.
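For reference, the reliability statistic cited above follows the standard definition of Cronbach's Alpha; the specific rubric items over which the paper computed it are not stated in the abstract, so the item set below is an assumption based on the features listed:

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)
\]

where \(k\) is the number of rubric items (presumably grammar, syntax, spelling, content, and clarity), \(\sigma^{2}_{Y_i}\) is the variance of scores on item \(i\) across the 30 essays, and \(\sigma^{2}_{X}\) is the variance of the total scores. Values approaching 1 indicate high internal consistency among the rubric items.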
License
Copyright (c) 2025 Research Journal of Psychology

This work is licensed under a Creative Commons Attribution 4.0 International License.