Embedded Speech Encoder for Low-resource Languages
Abstract
High-performance artificial intelligence (AI) models require substantial computational resources, whereas embedded systems are constrained by limited hardware capabilities such as memory and processing power. At the same time, embedded systems have a broad range of applications, making the integration of AI into embedded systems a prominent topic in both hardware and AI research. Producing powerful speech embeddings on embedded systems is challenging, as such models, like wav2vec, are typically computationally intensive, and the scarcity of data for many low-resource languages further complicates the development of high-performance models. To address these challenges, we utilized BERT to generate the target embeddings; BERT was selected because, in addition to producing meaningful embeddings, it is trained on numerous low-resource languages and facilitates the design of efficient decoders. This study introduces a compact speech encoder tailored to low-resource languages, capable of serving as an encoder across a diverse range of speech tasks. Because the high dimensionality of BERT embeddings imposes significant computational demands on many embedded systems, we applied dimensionality reduction, and the reduced vectors were then used as labels for the speech data to train a model composed of convolutional neural networks (CNNs) and fully connected layers. Finally, we demonstrated the encoder's effectiveness through an application in speech command recognition.
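
The abstract does not include an implementation; the following is a minimal sketch of the described pipeline, assuming PyTorch and the Hugging Face transformers library, with PCA standing in for the unspecified dimensionality-reduction technique. The multilingual BERT checkpoint, layer sizes, tiny transcript list, and random stand-in audio features are illustrative assumptions, not details from the paper.

    # Minimal sketch of the pipeline in the abstract. Assumptions (not
    # specified by the paper): PyTorch, Hugging Face transformers for
    # BERT, PCA as the dimensionality-reduction step, and illustrative
    # layer sizes. Multilingual BERT is used because the paper motivates
    # BERT by its coverage of low-resource languages.
    import torch
    import torch.nn as nn
    from sklearn.decomposition import PCA
    from transformers import AutoTokenizer, AutoModel

    # 1) Text side: BERT embeddings serve as training targets.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
    bert = AutoModel.from_pretrained("bert-base-multilingual-cased").eval()

    @torch.no_grad()
    def bert_embedding(text: str) -> torch.Tensor:
        """Mean-pooled 768-d BERT embedding of one transcript."""
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        hidden = bert(**inputs).last_hidden_state     # (1, T, 768)
        return hidden.mean(dim=1).squeeze(0)          # (768,)

    # 2) Dimensionality reduction: 768-d -> low-d label vectors.
    #    (The abstract states only that reduction is applied; PCA and the
    #    target size are assumptions. Hypothetical demo transcripts.)
    transcripts = ["yes", "no", "stop", "go"]
    targets = torch.stack([bert_embedding(t) for t in transcripts])
    pca = PCA(n_components=2).fit(targets.numpy())    # tiny demo corpus
    reduced = torch.from_numpy(pca.transform(targets.numpy())).float()

    # 3) Speech side: compact CNN + fully connected encoder mapping an
    #    audio feature map (e.g., log-mel spectrogram) to the reduced
    #    BERT target vector.
    class SpeechEncoder(nn.Module):
        def __init__(self, n_mels: int = 40, out_dim: int = 2):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(n_mels, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),              # pool over time
            )
            self.fc = nn.Linear(64, out_dim)

        def forward(self, mel: torch.Tensor) -> torch.Tensor:
            return self.fc(self.conv(mel).squeeze(-1))  # (B, out_dim)

    # 4) Regress speech features onto the reduced BERT embeddings.
    model = SpeechEncoder(out_dim=reduced.shape[1])
    optim = torch.optim.Adam(model.parameters(), lr=1e-3)
    mels = torch.randn(len(transcripts), 40, 100)     # stand-in features
    for _ in range(3):                                # illustrative loop
        optim.zero_grad()
        loss = nn.functional.mse_loss(model(mels), reduced)
        loss.backward()
        optim.step()

Under this scheme, BERT and the dimensionality-reduction step are needed only at training time; at inference, only the compact CNN-plus-fully-connected encoder would run on the embedded device.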