Multimodal Large Language Models for Diagnostic Feedback Analytics in STEM Learning Platforms
DOI: https://doi.org/10.38124/ijsrmt.v4i1.1163

Keywords: Multimodal Learning Analytics, Large Language Models, Diagnostic Feedback, STEM Education, Explainable Artificial Intelligence

Abstract
The increasing complexity of STEM learning tasks and the scale of digital education environments have exposed fundamental limitations in traditional automated feedback systems. Most existing approaches rely on unimodal inputs or rule-based logic, providing surface-level feedback that fails to capture underlying learner misconceptions and reasoning processes. This study proposes and evaluates a multimodal large language model–driven diagnostic feedback framework designed to deliver accurate, explainable, and instructionally aligned feedback in STEM learning platforms. The framework integrates heterogeneous learner data, including text responses, symbolic mathematics, diagrams, code submissions, and interaction traces, through modality-specific encoders and attention-based fusion strategies. Diagnostic reasoning is performed using a multimodal large language model constrained by curricular objectives and enhanced with explainability mechanisms such as rationale tracing and attention visualization. Empirical evaluation across mathematics, physics, and computer science tasks demonstrates significant improvements over baseline systems in diagnostic accuracy, learning gains, error correction rates, learner engagement, and trust. The findings indicate that multimodal LLM-driven diagnostic feedback can operationalize formative assessment principles at scale, offering a robust pathway toward more transparent, adaptive, and pedagogically meaningful AI-supported learning in STEM education.
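The abstract's "attention-based fusion" of modality-specific embeddings can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the three-dimensional example embeddings, and the fixed relevance scores are all hypothetical; in the described framework the attention scores would be produced by learned neural components rather than supplied by hand.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fuse(embeddings, scores):
    """Combine per-modality embeddings into one vector,
    weighted by softmax-normalized attention scores."""
    weights = softmax(scores)
    dim = len(embeddings[0])
    return [sum(w * emb[i] for w, emb in zip(weights, embeddings))
            for i in range(dim)]

# Hypothetical encoder outputs for three modalities:
# a text response, a diagram, and an interaction trace.
text_emb = [0.9, 0.1, 0.0]
diagram_emb = [0.2, 0.8, 0.1]
trace_emb = [0.1, 0.2, 0.7]

# Higher score -> that modality contributes more to the fused representation.
fused = attention_fuse([text_emb, diagram_emb, trace_emb],
                       scores=[2.0, 1.0, 0.5])
```

The fused vector would then serve as input to the downstream diagnostic reasoning stage, with the attention weights themselves available for the attention-visualization explainability mechanism the abstract mentions.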
License
Copyright (c) 2025 International Journal of Scientific Research and Modern Technology

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.