An architecture and data model to process multimodal evidence of learning

Shashi Kant Shankar, Adolfo Ruiz-Calleja, Luis P. Prieto, María Jesús Rodríguez-Triana, Pankaj Chejara

In learning situations that do not occur exclusively online, the analysis of multimodal evidence can help multiple stakeholders better understand the learning process and the environment where it occurs. However, Multimodal Learning Analytics (MMLA) solutions are often not directly applicable outside the specific data-gathering setup and conditions they were developed for. This paper focuses specifically on authentic situations where MMLA solutions are used by multiple stakeholders (e.g., teachers and researchers). In these situations, data may be gathered and processed in a less controlled fashion than in lab settings, due to contextual restrictions and local adaptations of the learning design. In this paper, we propose an architecture to process multimodal evidence of learning that takes into account the situation's contextual information. Our adapter-based architecture supports the preparation, organisation, and fusion of multimodal evidence, and is designed to be reusable across different learning situations. Moreover, to structure and organise such contextual information, we propose a data model. Finally, to evaluate the architecture and the data model, we apply them to four authentic learning situations where multimodal learning data were collected collaboratively by teachers and researchers.
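To make the adapter-based idea concrete, the following Python sketch illustrates (under our own assumptions, not as the paper's actual implementation) how source-specific adapters could normalise heterogeneous evidence into a common record format, which a fusion step then aligns over time windows. All class names, field names, and the fixed-window fusion strategy are hypothetical and chosen only for illustration.

    # A minimal sketch of the adapter idea: source-specific adapters
    # normalise heterogeneous evidence into a shared record format,
    # so that a fusion step can align it by time and actor.
    # All names and fields below are illustrative assumptions.
    from dataclasses import dataclass
    from typing import Iterable, List


    @dataclass
    class Observation:
        """Common data model for one piece of multimodal evidence."""
        timestamp: float  # seconds since the start of the session
        actor: str        # e.g., a pseudonymous student or group id
        modality: str     # e.g., "audio", "logs", "questionnaire"
        value: dict       # modality-specific payload


    class AudioAdapter:
        """Turns raw speech-activity rows into common Observations."""
        def adapt(self, rows: Iterable[dict]) -> List[Observation]:
            return [Observation(r["t"], r["speaker"], "audio",
                                {"speaking": r["speaking"]}) for r in rows]


    class LogAdapter:
        """Turns platform log events into common Observations."""
        def adapt(self, rows: Iterable[dict]) -> List[Observation]:
            return [Observation(r["time"], r["user"], "logs",
                                {"action": r["event"]}) for r in rows]


    def fuse(streams: List[List[Observation]], window: float = 5.0) -> dict:
        """Naive fusion: bucket observations into fixed time windows per actor."""
        fused: dict = {}
        for stream in streams:
            for obs in stream:
                key = (obs.actor, int(obs.timestamp // window))
                fused.setdefault(key, []).append(obs)
        return fused


    if __name__ == "__main__":
        audio = AudioAdapter().adapt([{"t": 1.2, "speaker": "s1", "speaking": True}])
        logs = LogAdapter().adapt([{"time": 3.4, "user": "s1", "event": "edit"}])
        # Observations from both modalities land in the same 5-second window.
        print(fuse([audio, logs]))

In this sketch, reusability across learning situations comes from keeping the common Observation format fixed while swapping in a new adapter per data source; contextual information (such as the learning design or local adaptations) would parameterise the adapters and the fusion step rather than being hard-coded.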