Articles
Alamatsaz, N.,
Tabatabaei, L.,
Yazdchi, M.,
Payan, H.,
Alamatsaz, N.,
Nasimi, F. Biomedical Signal Processing and Control (1746-8108), 90
Objective: The electrocardiogram (ECG) is the most frequently used routine diagnostic tool for monitoring the heart's electrical activity and evaluating its function. The human heart can suffer from a variety of diseases, including cardiac arrhythmias. An arrhythmia is an irregular heart rhythm that in severe cases can lead to stroke and can be diagnosed via ECG recordings. Since early detection of cardiac arrhythmias is of great importance, computerized and automated classification and identification of these abnormal heart signals have received much attention over the past decades. Methods: This paper introduces a lightweight Deep Learning (DL) approach for high-accuracy detection of 8 different cardiac arrhythmias and normal rhythms. To employ DL techniques, the ECG signals were preprocessed using resampling and baseline wander removal techniques. The classification was performed by an 11-layer network combining a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM). Results: To evaluate the proposed technique, ECG signals were chosen from two PhysioNet databases: the MIT-BIH arrhythmia database and the long-term AF database. The proposed DL framework based on the combination of CNN and LSTM outperformed most state-of-the-art methods, reaching a mean diagnostic accuracy of 98.24%. Conclusion: A trained model for arrhythmia classification using diverse ECG signals was successfully developed and tested. Significance: This study presents a lightweight classification technique with high diagnostic accuracy compared to other notable methods, making it a potential candidate for implementation in Holter monitor devices for arrhythmia detection. Finally, we used SHapley Additive exPlanations (SHAP), the most popular Explainable Artificial Intelligence (XAI) method, to understand how our model makes predictions.
The results indicate that the features (ECG samples) that contributed the most to a prediction are consistent with clinicians' decisions. Therefore, the use of interpretable models increases clinicians' trust in AI and thus reduces misdiagnoses of cardiovascular diseases. © 2023 Elsevier Ltd
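The resampling and baseline wander removal preprocessing described in this abstract can be sketched as below. This is a minimal illustration, not the paper's implementation: the target sampling rate, filter order, and cutoff frequency are illustrative assumptions.

```python
import numpy as np
from scipy.signal import resample, butter, filtfilt

def preprocess_ecg(sig, fs_in, fs_out=250, hp_cutoff=0.5):
    """Resample an ECG segment and remove baseline wander.

    A minimal sketch, assuming a 250 Hz target rate and a 0.5 Hz
    high-pass cutoff; these are not values reported in the paper.
    """
    # Resample so recordings from different databases (e.g. 360 Hz
    # MIT-BIH vs. 128 Hz long-term AF) share one sampling frequency.
    n_out = int(round(len(sig) * fs_out / fs_in))
    sig = resample(sig, n_out)
    # Zero-phase high-pass filter suppresses slow baseline drift
    # without shifting the QRS complexes in time.
    b, a = butter(2, hp_cutoff / (fs_out / 2), btype="highpass")
    return filtfilt(b, a, sig)
```

A zero-phase filter (`filtfilt`) is chosen here because forward-only filtering would delay the QRS complexes relative to the original beat annotations.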
Bazargani, M.,
Tahmasebi, A.,
Yazdchi, M.,
Baharlouei, Z. Journal of Medical Signals and Sensors (2228-7477), 13(4), pp. 272-279
Background: Diagnosing emotional states would make human-computer interaction (HCI) systems more effective in practice. Correlations between electroencephalography (EEG) signals and emotions have been shown in various studies; therefore, EEG signal-based methods are the most accurate and informative. Methods: In this study, three Convolutional Neural Network (CNN) models appropriate for processing EEG signals, EEGNet, ShallowConvNet and DeepConvNet, are applied to diagnose emotions. We use baseline-removal preprocessing to improve classification accuracy. Each network is assessed in two settings: subject-dependent and subject-independent. We improve the selected CNN model to be lightweight and implementable on a Raspberry Pi processor. The emotional states are recognized for every three-second epoch of the received signals on the embedded system, which allows real-time use in practice. Results: Average classification accuracies of 99.10% for valence and 99.20% for arousal in the subject-dependent setting, and 90.76% for valence and 90.94% for arousal in the subject-independent setting, were achieved on the well-known DEAP dataset. Conclusion: Comparison of the results with related works shows that a highly accurate and implementable model has been achieved for practical use. © 2023 Isfahan University of Medical Sciences (IUMS). All rights reserved.
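The three-second epoching with baseline removal mentioned in this abstract can be sketched as follows. The (channels, samples) layout and the 128 Hz DEAP sampling rate used in the test are assumptions for illustration; the paper's exact preprocessing pipeline may differ.

```python
import numpy as np

def epoch_eeg(eeg, fs, epoch_s=3.0, baseline=None):
    """Split continuous EEG (channels x samples) into fixed-length
    epochs and subtract a per-channel baseline from each one.

    A sketch under assumptions: if no pre-trial baseline segment is
    supplied, each epoch's own per-channel mean is subtracted.
    """
    n_ch, n_samp = eeg.shape
    win = int(epoch_s * fs)
    n_ep = n_samp // win  # drop the trailing partial window
    epochs = (eeg[:, : n_ep * win]
              .reshape(n_ch, n_ep, win)
              .transpose(1, 0, 2))  # -> (n_epochs, n_channels, win)
    if baseline is None:
        ref = epochs.mean(axis=2, keepdims=True)
    else:
        ref = baseline.mean(axis=1)[None, :, None]
    return epochs - ref
```

Epoching into short windows is what makes per-window classification on an embedded device possible: each three-second slice is classified independently as it arrives.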
Emotion recognition is a challenging task due to the emotional gap between subjective feeling and low-level audio-visual characteristics. Thus, the development of a feasible approach for high-performance emotion recognition might enhance human-computer interaction. Deep learning methods have enhanced the performance of emotion recognition systems in comparison to other current methods. In this paper, a multimodal network combining a deep convolutional neural network (CNN) and a bidirectional long short-term memory (BiLSTM) network is proposed, which fuses the audio and visual cues in a deep model. The spatial and temporal features extracted from video frames are fused with short-time Fourier transform (STFT) features extracted from audio signals. Finally, a Softmax classifier is used to classify inputs into seven groups: anger, disgust, fear, happiness, sadness, surprise, and neutral. The proposed model is evaluated on the Surrey Audio-Visual Expressed Emotion (SAVEE) database, achieving an accuracy of 95.48%. Our experimental study reveals that the suggested method is more effective than existing algorithms for emotion recognition on this dataset. © 2023 IEEE.
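The audio-visual feature fusion described in this abstract can be sketched as a simple concatenation of an audio STFT descriptor with a visual feature vector. This is an illustrative stand-in only: in the paper the visual branch is a CNN-BiLSTM over video frames, which `video_feats` here merely represents, and the STFT window size is an assumption.

```python
import numpy as np
from scipy.signal import stft

def fuse_av_features(audio, fs, video_feats, n_fft=512):
    """Fuse a log-magnitude STFT audio descriptor with a visual
    feature vector by concatenation (feature-level fusion).

    A sketch under assumptions: the audio spectrogram is mean-pooled
    over time to obtain a fixed-size vector before fusion.
    """
    _, _, Z = stft(audio, fs=fs, nperseg=n_fft)
    log_mag = np.log1p(np.abs(Z))          # compress dynamic range
    audio_vec = log_mag.mean(axis=1)       # pool over time frames
    return np.concatenate([audio_vec, video_feats])
```

The fused vector would then feed the final Softmax classifier over the seven emotion classes.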
Keratoconus (KC) is a common disorder characterized by progressive corneal thinning and steepening. Corneal ring implantation has become a successful surgical procedure for correcting the vision of KC patients. Determining which patients are suitable candidates for this surgical alternative is among the paramount concerns of ophthalmologists. To reduce the burden on them and enhance treatment, this research aims to predict the ocular condition of KC patients after corneal ring implantation. It focuses on predicting post-surgical corneal topographic indices and visual characteristics. This study applied an effective artificial neural network approach to predict the aforementioned ocular features of KC subjects 6 and 12 months after implanting KeraRing and MyoRing, based on the accumulated data. The datasets are composed of sufficient numbers of corneal topographic maps and visual characteristics recorded from KC patients before and after implanting the rings. The visual characteristics under study are uncorrected visual acuity (UCVA), sphere (SPH), astigmatism (Ast), astigmatism orientation (Axe), and best corrected visual acuity (BCVA). In addition, the statistical data of multiple KC subjects were registered, including three effective indices of corneal topography (i.e., Ast, K-reading, and pachymetry) before and after ring implantation. The outcomes demonstrate that the trained models can estimate the post-implantation ocular features of KC subjects. The corneal topographic indices and visual characteristics were estimated with mean errors of 7.29% and 8.60%, respectively. Further, errors of 6.82% and 7.65% were obtained for the visual characteristics and corneal topographic indices, respectively, when assessing the predictions with the leave-one-out cross-validation (LOOCV) procedure.
The results confirm the great potential of neural networks to guide ophthalmologists in choosing appropriate surgical candidates and their specific intracorneal rings by predicting post-implantation ocular features. © 2023 The Authors
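The LOOCV evaluation protocol used in this study can be sketched as below. For self-containment, a linear least-squares predictor stands in for the paper's neural network, and the mean percentage error is computed over synthetic data; only the cross-validation structure itself mirrors the described procedure.

```python
import numpy as np

def loocv_mape(X, y):
    """Leave-one-out cross-validation with a linear least-squares
    predictor, reporting mean absolute percentage error.

    A sketch of the evaluation protocol only: each sample is held
    out in turn, the model is fit on the rest, and the held-out
    prediction error is accumulated.
    """
    n = len(y)
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        A = np.c_[X[mask], np.ones(mask.sum())]   # add bias column
        w, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        pred = np.r_[X[i], 1.0] @ w
        errs.append(abs(pred - y[i]) / abs(y[i]))
    return 100.0 * np.mean(errs)
```

With only tens of patients per ring type, LOOCV makes the most of the data: every subject serves as a test case exactly once while the rest train the model.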