Multimodal Large Language Models for Diagnostic Feedback Analytics in STEM Learning Platforms

Authors

  • Everlyne Fradia Akello, The Gladys W. and David H. Patton College of Education, Ohio University, Athens, Ohio, USA
  • Onuh Matthew Ijiga, Department of Physics, Joseph Sarwuan Tarka University, Makurdi, Nigeria
  • Idoko Peter Idoko, Department of Electrical/Electronic Engineering, University of Ibadan, Nigeria
  • Lawrence Anebi Enyejo, Department of Telecommunications, Enforcement Ancillary and Maintenance, National Broadcasting Commission Headquarters, Aso-Villa, Abuja, Nigeria

DOI:

https://doi.org/10.38124/ijsrmt.v4i1.1163

Keywords:

Multimodal Learning Analytics, Large Language Models, Diagnostic Feedback, STEM Education, Explainable Artificial Intelligence

Abstract

The increasing complexity of STEM learning tasks and the scale of digital education environments have exposed fundamental limitations in traditional automated feedback systems. Most existing approaches rely on unimodal inputs or rule-based logic, providing surface-level feedback that fails to capture underlying learner misconceptions and reasoning processes. This study proposes and evaluates a multimodal large language model–driven diagnostic feedback framework designed to deliver accurate, explainable, and instructionally aligned feedback in STEM learning platforms. The framework integrates heterogeneous learner data, including text responses, symbolic mathematics, diagrams, code submissions, and interaction traces, through modality-specific encoders and attention-based fusion strategies. Diagnostic reasoning is performed using a multimodal large language model constrained by curricular objectives and enhanced with explainability mechanisms such as rationale tracing and attention visualization. Empirical evaluation across mathematics, physics, and computer science tasks demonstrates significant improvements over baseline systems in diagnostic accuracy, learning gains, error correction rates, learner engagement, and trust. The findings indicate that multimodal LLM-driven diagnostic feedback can operationalize formative assessment principles at scale, offering a robust pathway toward more transparent, adaptive, and pedagogically meaningful AI-supported learning in STEM education.
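
To illustrate the fusion step the abstract describes (modality-specific encoders feeding an attention-based fusion layer), the sketch below shows one plausible arrangement: each modality's raw features are projected into a shared embedding space and combined with multi-head attention, whose weights could also support the attention-visualization explainability mechanism mentioned above. This is a minimal illustrative sketch, not the authors' implementation; all class names, dimensions, and the toy inputs are assumptions.

```python
# Illustrative sketch only: modality-specific encoders plus attention-based fusion.
# Module names, embedding sizes, and toy data are assumptions, not the paper's code.
import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """Projects one modality's feature vector into a shared embedding space."""

    def __init__(self, input_dim: int, embed_dim: int = 256):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(input_dim, embed_dim), nn.ReLU(), nn.LayerNorm(embed_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)


class AttentionFusion(nn.Module):
    """Fuses modality embeddings with multi-head self-attention, then mean-pools."""

    def __init__(self, embed_dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, embeddings: torch.Tensor):
        # embeddings: (batch, num_modalities, embed_dim)
        fused, weights = self.attn(embeddings, embeddings, embeddings)
        # Mean-pool across modalities to obtain a single fused representation;
        # `weights` can be inspected for attention visualization.
        return fused.mean(dim=1), weights


# Toy usage with three assumed modalities: text response, code submission,
# and interaction trace, each with an arbitrary raw feature size.
encoders = {
    "text": ModalityEncoder(768),
    "code": ModalityEncoder(512),
    "trace": ModalityEncoder(64),
}
fusion = AttentionFusion()

batch = {
    "text": torch.randn(2, 768),
    "code": torch.randn(2, 512),
    "trace": torch.randn(2, 64),
}
stacked = torch.stack([encoders[m](batch[m]) for m in encoders], dim=1)
fused_repr, attn_weights = fusion(stacked)
print(fused_repr.shape, attn_weights.shape)  # torch.Size([2, 256]) torch.Size([2, 3, 3])
```

In practice the fused representation would condition a downstream diagnostic model (here, the multimodal LLM constrained by curricular objectives); the design choice of pooling after attention is one of several reasonable options and is shown only to make the fusion idea concrete.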

Published

2025-01-30

How to Cite

Akello, E. F., Ijiga, O. M., Idoko, I. P., & Enyejo, L. A. (2025). Multimodal Large Language Models for Diagnostic Feedback Analytics in STEM Learning Platforms. International Journal of Scientific Research and Modern Technology, 4(1), 182–210. https://doi.org/10.38124/ijsrmt.v4i1.1163
