Efficient Real-Time Pathfinding for Visually Impaired Individuals
Abstract
This paper presents a novel computer vision system that enables real-time pathfinding for individuals with visual impairments. The navigation experience for visually impaired individuals has improved significantly through advances in both traditional segmentation methods and deep learning techniques. Traditional methods usually focus on detecting specific patterns or objects, requiring custom algorithms for each object of interest. In contrast, deep learning approaches such as instance segmentation and semantic segmentation allow different elements within a scene to be recognized independently. In this research, deep convolutional neural networks are employed to perform semantic segmentation of camera images, facilitating the identification of patterns across the image's feature space. Motivated by the concept of a two-branch core architecture, we propose using semantic segmentation to support navigation for visually impaired individuals. The 'demarcation path' captures spatial details with wide channels and shallow layers, while the 'path with rich features' extracts categorical semantics using deep layers. By providing awareness of both obstacles and paths in the surrounding environment, this method enhances the perceptual understanding of visually impaired individuals. We prioritize real-time performance and low computational overhead to ensure timely and responsive assistance. Using a wearable assistive system, we demonstrate that semantic segmentation provides a comprehensive understanding of the surroundings to those with visual impairments. Experimental results show an accuracy of 72.6% in detecting paths, path objects, and path boundaries. © 2025 The Authors.
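
The abstract does not give implementation details, but the two-branch idea it describes can be illustrated with a minimal sketch. The following PyTorch code assumes a layout in which a shallow, wide 'demarcation path' preserves spatial detail while a deeper 'path with rich features' extracts categorical semantics, and the two are fused for per-pixel classification into classes such as path, path object, and path boundary. All module names, channel widths, and the fusion strategy are illustrative assumptions, not the authors' published architecture.

# Illustrative sketch only: a minimal two-branch semantic segmentation network
# modeled on the abstract's description. Names and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_bn_relu(in_ch, out_ch, stride=1):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class DemarcationPath(nn.Module):
    """Shallow, wide branch that preserves spatial detail (1/8 resolution)."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            conv_bn_relu(3, 64, stride=2),
            conv_bn_relu(64, 64, stride=2),
            conv_bn_relu(64, 128, stride=2),
        )

    def forward(self, x):
        return self.layers(x)


class RichFeaturePath(nn.Module):
    """Deeper branch that extracts categorical semantics (1/32 resolution)."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            conv_bn_relu(3, 16, stride=2),
            conv_bn_relu(16, 32, stride=2),
            conv_bn_relu(32, 64, stride=2),
            conv_bn_relu(64, 64),
            conv_bn_relu(64, 128, stride=2),
            conv_bn_relu(128, 128),
            conv_bn_relu(128, 128, stride=2),
        )

    def forward(self, x):
        return self.layers(x)


class TwoBranchSegNet(nn.Module):
    """Fuses both branches and predicts per-pixel classes (e.g. path, object, boundary)."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.detail = DemarcationPath()
        self.semantic = RichFeaturePath()
        self.fuse = conv_bn_relu(256, 128)
        self.classifier = nn.Conv2d(128, num_classes, kernel_size=1)

    def forward(self, x):
        d = self.detail(x)    # high-resolution spatial features
        s = self.semantic(x)  # low-resolution semantic features
        s = F.interpolate(s, size=d.shape[2:], mode="bilinear", align_corners=False)
        out = self.classifier(self.fuse(torch.cat([d, s], dim=1)))
        # Upsample logits to the input resolution for per-pixel labels.
        return F.interpolate(out, size=x.shape[2:], mode="bilinear", align_corners=False)


if __name__ == "__main__":
    model = TwoBranchSegNet(num_classes=3)
    logits = model(torch.randn(1, 3, 480, 640))  # a single camera frame
    print(logits.shape)  # torch.Size([1, 3, 480, 640])

Keeping the detail branch shallow and the semantic branch narrow is one way a design like this could meet the real-time, low-overhead goal stated above; the actual trade-offs used by the authors may differ.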