An Artificial Intelligence (AI)-Powered Voice-Based Intelligent Learning System for Visually Impaired Students

email: rizkyrinaldi.staira@gmail.com

ABSTRACT

The rapid expansion of digital learning platforms has created unprecedented educational opportunities; however, the majority of these platforms remain inaccessible to the estimated 253 million visually impaired individuals worldwide, particularly students in inclusive K-12 settings. Existing assistive technologies such as screen readers and Braille displays function as access tools rather than pedagogically designed learning environments, leaving a critical gap in inclusive educational technology. This paper presents BlindLearn, an AI-powered, voice-based learning framework developed and evaluated using the Design Science Research (DSR) methodology. Grounded in Universal Design for Learning (UDL) and Cognitive Load Theory (CLT), BlindLearn introduces the Voice-First Pedagogical Model (VFPM), a novel five-stage learning cycle (Audio Activation, Narrative Input, Conversational Elaboration, Voice Practice, Adaptive Feedback) designed for auditory-primary learners. The framework was developed through a systematic literature review (47 papers, 2015–2024), a structured needs analysis (n = 23), multi-expert validation using the Content Validity Ratio (CVR, n = 8), and usability evaluation using the System Usability Scale (SUS, n = 15). Expert validation yielded a mean CVR of 0.89 (p < .05), and usability evaluation produced a mean SUS score of 84.3 (Grade: Excellent). Three original artifacts are contributed: the VFPM theoretical model, a validated four-layer AI system architecture, and twelve evidence-based inclusive design guidelines, advancing the fields of educational technology and inclusive AI system design.
