Asma Sbaih
Seville University, Spain
Scientific Tracks Abstracts: J Neurol Disord
Restoring intelligible speech from brain activity is a central goal in the development of neural speech prostheses, particularly for people with severe communication impairments. This study presents a deep learning framework for reconstructing speech from stereo-electroencephalography (sEEG) signals, using the iBIDS dataset, which contains invasive brain recordings from patients performing controlled Dutch word-production tasks. The model employs a hybrid CNN-BiLSTM architecture: convolutional layers extract the spatial features present in the neural signals, while bidirectional LSTM layers capture the temporal dynamics of brain activity associated with speech production. To address the limited amount of training data and improve generalization, we use a transfer learning approach. The CNN layers are initialized with weights pretrained on an electrocorticography (ECoG) dataset collected from English-speaking participants during sentence reading, and are then fine-tuned on the sEEG data to adapt to modality- and language-specific patterns. The network predicts 80-dimensional Mel spectrograms from the sEEG inputs, which are converted into intelligible speech waveforms by a HiFi-GAN neural vocoder, yielding reconstructed audio with high perceptual quality. Evaluation results show that the transfer learning model significantly outperforms a baseline trained from scratch, achieving higher scores on objective metrics such as short-time objective intelligibility (STOI) and perceptual evaluation of speech quality (PESQ), together with lower Mel cepstral distortion (MCD). This research offers a scalable and adaptable neural model for decoding speech from brain signals, with promise for future brain-computer interface (BCI) technologies aimed at restoring natural communication in clinical settings.
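To make the described pipeline concrete, the following is a minimal sketch (not the authors' released code) of a hybrid CNN-BiLSTM decoder of this kind in PyTorch: convolutional layers extract spatial features from multi-channel sEEG frames, a bidirectional LSTM models the temporal dynamics, and a linear head predicts 80-dimensional Mel spectrogram frames. The channel count, kernel sizes, hidden size, loss function, and the pretrained-ECoG checkpoint filename are illustrative assumptions, not details taken from the study.

```python
import torch
import torch.nn as nn


class SEEGToMel(nn.Module):
    def __init__(self, n_channels: int = 64, n_mels: int = 80, hidden: int = 256):
        super().__init__()
        # Spatial feature extractor over the sEEG channel dimension.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Bidirectional LSTM over time captures speech-related temporal patterns.
        self.bilstm = nn.LSTM(128, hidden, num_layers=2,
                              batch_first=True, bidirectional=True)
        # Project each time step to one 80-dimensional Mel frame.
        self.head = nn.Linear(2 * hidden, n_mels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) -> predicted Mel frames: (batch, time, n_mels)
        feats = self.cnn(x)            # (batch, 128, time)
        feats = feats.transpose(1, 2)  # (batch, time, 128)
        out, _ = self.bilstm(feats)    # (batch, time, 2 * hidden)
        return self.head(out)


model = SEEGToMel()

# Transfer-learning step (sketch): initialize the CNN from weights pretrained
# on an ECoG sentence-reading dataset, then fine-tune the whole model on sEEG.
# "ecog_cnn_pretrained.pt" is a placeholder filename.
# pretrained = torch.load("ecog_cnn_pretrained.pt")
# model.cnn.load_state_dict(pretrained, strict=False)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # spectrogram regression loss; the study's exact loss is not specified

# Dummy batch: 8 trials, 64 sEEG channels, 200 time frames.
seeg = torch.randn(8, 64, 200)
target_mel = torch.randn(8, 200, 80)

pred_mel = model(seeg)
loss = loss_fn(pred_mel, target_mel)
loss.backward()
optimizer.step()
```

In a full pipeline, the predicted Mel spectrograms would then be passed to a HiFi-GAN neural vocoder to synthesize audible speech waveforms, and reconstruction quality would be scored with STOI, PESQ, and MCD against the reference recordings.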
Asma Sbaih is a neuroengineer and AI researcher specializing in medical image processing, neuroimaging, and deep learning. She holds a PhD in Neuroengineering and Artificial Intelligence, with a focus on speech restoration from brain signals. Her research integrates EEG-based neural decoding, brain-computer interfaces, and AI-driven healthcare analysis. Asma currently serves as a research assistant and lecturer in the Engineering Department at Al-Ahliyya University. She has published extensively on invasive and non-invasive neuroimaging, speech decoding, and assistive technologies. Her work applies advanced signal processing and machine learning to develop innovative neuro-AI solutions. Asma is dedicated to advancing intelligent systems for medical and neurological applications.