Foroushani, N.S.,
Mohammadkhani, P.,
Rasti, J. Iranian Journal of Ageing (1735806X), 19(2), pp. 314-327
Objective: One of the most common problems of old age is chronic pain. Mindfulness-based approaches aim to create psychological flexibility and change attitudes towards pain. The purpose of the present study was to investigate the effectiveness of mindfulness-based cognitive therapy on pain perception in older women. Methods and Materials: This was a semi-experimental study with a pre-test, post-test, and one-month follow-up design with a control group. The statistical population comprised women aged 60 years and older living in Isfahan in 1402 (Iranian calendar). Among them, 30 women were selected by convenience sampling using the inclusion criteria and randomly assigned to equal experimental and control groups. The research tool was the short form of the McGill Pain Questionnaire. The experimental group received the intervention in eight two-hour sessions, while the control group was placed on a waiting list. The data were analyzed by repeated-measures analysis of variance using SPSS-25 software. Results: The mean age of the participants was 64.07±3.37 years in the experimental group and 64.20±3.57 years in the control group. The mean total pain perception score of the experimental and control groups was 22.67 and 35.13, respectively, in the post-test phase, and 23.60 and 35.07, respectively, in the follow-up phase. The results showed a significant difference between the means of the three assessment stages (pre-test, post-test, and follow-up) in all subscales and the total score (P<0.01). The results of the Bonferroni test indicated the effectiveness of the treatment in the post-test and follow-up phases for all subscales compared to the control group (P≤0.05). Conclusion: Mindfulness-based cognitive therapy significantly reduces pain perception in older women. © 2024, Negah Institute for Scientific Communication. All Rights Reserved.
Emotion recognition is a challenging task due to the emotional gap between subjective feelings and low-level audio-visual characteristics. Thus, the development of a feasible approach for high-performance emotion recognition might enhance human-computer interaction. Deep learning methods have improved the performance of emotion recognition systems compared with other current methods. In this paper, a multimodal deep convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM) network is proposed, which fuses audio and visual cues in a deep model. The spatial and temporal features extracted from video frames are fused with short-term Fourier transform (STFT) features extracted from audio signals. Finally, a softmax classifier is used to classify inputs into seven categories: anger, disgust, fear, happiness, sadness, surprise, and neutral. The proposed model is evaluated on the Surrey Audio-Visual Expressed Emotion (SAVEE) database, achieving an accuracy of 95.48%. Our experimental study reveals that the suggested method is more effective than existing algorithms at emotion recognition on this dataset. © 2023 IEEE.
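As a rough illustration of the audio feature stream described above, the snippet below computes STFT magnitude frames with NumPy and concatenates them with a stand-in visual feature vector. The frame length, hop size, and the visual features themselves are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def stft_features(signal, frame_len=256, hop=128):
    """Short-time Fourier transform magnitudes for an audio signal.

    Toy sketch of the audio feature stream; frame_len and hop are
    arbitrary choices, not taken from the paper.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    # Magnitude spectrum per frame -> (n_frames, frame_len // 2 + 1)
    return np.abs(np.fft.rfft(frames, axis=1))

# Fuse with a (hypothetical) per-frame visual feature vector by concatenation
audio = np.sin(2 * np.pi * 440 * np.arange(4096) / 16000)  # synthetic tone
A = stft_features(audio)                       # audio features
V = np.random.rand(A.shape[0], 64)             # stand-in visual features
fused = np.concatenate([A, V], axis=1)         # joint representation
print(fused.shape)  # (31, 193)
```

In the actual system the fused representation would feed the CNN-BiLSTM model rather than being used directly.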
Behrouz Jazi, A.H.,
Rasti, J.,
Etemadifar, M. Journal of Clinical Neuroscience (09675868), 116, pp. 104-111
Background: Patients with multiple sclerosis (MS) often experience balance issues during physical activities. Traditional rehabilitation exercises such as stretching, resistance, and aerobic training have been found to be effective, but can be repetitive and tedious, leading to reduced patient motivation and adherence. Furthermore, direct supervision by a therapist is not always possible. Methods: The aim of this study was to develop and evaluate the effectiveness of a virtual training program incorporating visual feedback from the Kinect® sensor in male patients with multiple sclerosis. Forty-five participants, aged 22–56 years (mean age = 39), were randomly assigned to one of three equal groups: two experimental groups and one control group. The experimental groups participated in eight-week exercise interventions, with each session lasting 20 to 30 minutes and occurring three times per week. In contrast, the control group received no intervention. One experimental group performed conventional balance exercises, whereas the other engaged in the proposed virtual training program. Both groups undertook three balance exercises: the single-foot stance, lunge maneuvers, and arm/leg stretching routines. The assessment covered diverse facets of balance, including the 10-Meter Walk Test, the Berg Balance Scale, the Static Balance Score, and the Timed Up and Go test, as well as quality of life, gauged through the Multiple Sclerosis Quality of Life (MSQOL)-54 Questionnaire. The effect of test variables was investigated using analysis of covariance (ANCOVA), the independent-samples t-test was used to check for significant differences among the groups, and within-group effects were compared using a paired-samples t-test. Results: The findings revealed that both rehabilitation programs positively affected the dependent variables compared to the control group.
However, the significant difference between the pre-test and post-test scores of the experimental groups indicated the effectiveness of the proposed program compared to the traditional method. Conclusions: Entertaining virtual training programs utilizing visual feedback can be effective for rehabilitating patients with MS. The proposed method enables patients to perform rehabilitation exercises at home with high motivation, while accurate information about the treatment process is provided to the therapist. © 2023 Elsevier Ltd
Bahadori, M.,
Rasti, J.,
Craig, C.M.,
Cesari, P.,
Emadi Andani, M. Acta Psychologica (18736297), 235
When interacting with the environment, sensory information is essential to guide movements. Picking up the appropriate sensory information (both visual and auditory) about the progression of an event is required to reach the right place at the right time. In this study, we aimed to see whether general tau theory could explain the audiovisual guidance of movement in an interceptive action task. The specific contributions of auditory and visual sensory information were tested by timing synchronous and asynchronous audiovisual interplays in successful interceptive trials. Performance was computed using the tau-coupling model of information-movement guidance. Our findings revealed that while the auditory contribution to movement guidance changed across conditions, the visual contribution remained constant. In addition, when comparing the auditory and visual contributions, the results revealed a significant decrease in the auditory contribution relative to the visual one in just one of the asynchronous conditions, where the visual target was presented after the sound. This may be because more attention was drawn to the visual information, resulting in a decrease in the auditory guidance of movement. To summarize, our findings reveal how tau-coupling can be used to disentangle the relative contributions of the visual and auditory sensory modalities in movement planning. © 2023
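For reference, the tau-coupling model invoked above can be summarized in its standard textbook form (this is the general tau-theory formulation, not an excerpt from the paper):

```latex
% tau of a motion gap x(t): time-to-closure at the current closure rate
\tau_x(t) = \frac{x(t)}{\dot{x}(t)}
% tau-coupling: the tau of the movement gap y is kept proportional
% to the tau of the sensed (auditory/visual) gap x
\tau_y(t) = K\,\tau_x(t), \qquad K = \text{const.}
```

Under this model, how strongly a sensory stream guides the movement can be assessed by how well the measured taus stay coupled (constant K) over a trial.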
Anesthesiology and Pain Medicine (22287531), 13(1)
Background: Labor and delivery are physiological processes that occur due to the contraction of the smooth muscles of the uterus. Labor pain is one of the most severe pains a person can experience, and its control is one of the most important goals of health care. Methods: This study was performed on 130 healthy pregnant women with gestational ages of 37 to 40 weeks, who were randomly assigned to the intervention and control groups using the closed-envelope technique. A virtual reality (VR) headset containing a game was then provided to the subjects in the intervention group. The Harman Fear of Childbirth Questionnaire and a visual analog scale (VAS) were completed at different times during labor according to the study protocol. The minimum time for using the headset was 20 minutes, continuing until the end of the first stage of labor. Data were analyzed using the chi-square test, independent t-test, and repeated-measures tests via SPSS software version 20. Results: The results showed a significant difference in pain score between the study groups. Although pain intensity was expected to increase with labor progression, participants in the VR group reported less pain intensity and less fear of labor pain compared to control subjects (F = 8.18, P < 0.05, between 4 and 10 cm of cervical dilatation). Conclusions: Virtual reality interventions can be regarded as a new non-pharmaceutical strategy to control labor pain and fear of normal vaginal delivery in pregnant women. © 2023, Author(s).
Varmaghani, S.,
Abbasi, Z.,
Weech, S.,
Rasti, J. Virtual Reality (13594338), 26(2), pp. 659-668
Cybersickness describes the nausea and discomfort that frequently emerge upon exposure to a virtual reality (VR) environment. The extent to which cybersickness leads to temporary constraints on cognitive functioning after VR exposure is a critical aspect of evaluating the risk to human safety where VR tasks are used for workforce training. Here, we examined whether VR exposure results in deteriorated cognitive spatial ability and attention, and whether this possible deterioration is related to cybersickness. A standardized cognitive test battery consisting of the Corsi blocks task (CBT), Manikin spatial task (MST), and color trails test (CTT-A and -B) was administered before and after participants were exposed to virtual reality (VR group) or engaged in interactive board games (control group). The performance of participants in the CBT remained unchanged from pre-test to post-test in both groups, while performance in the MST improved in the control group and remained stable in the VR group. Response times in CTT-A remained stable in the VR group but decreased significantly in the control group. Regarding CTT-B, participants from both groups became significantly faster in the post-test. We did not observe any significant sex differences, or effects of past VR experience, across measures of cognitive performance or cybersickness. Crucially, no significant correlations were found between cognitive performance changes and cybersickness scores in any case. The results provide encouragement for the use of VR in professional settings, suggesting that VR and cybersickness may only minimally limit subsequent cognitive processing. However, it will be crucial to further examine the aftereffects on other cognitive functions. © 2021, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.
Journal of Medical Signals and Sensors (22287477), 11(1), pp. 24-30
Background: Bone age assessment (BAA) is a radiological process aimed at identifying growth disorders in children. The objective of this study is to assess the bone age of Iranian children automatically. Methods: In this context, three computer vision techniques, including the histogram of oriented gradients (HOG), local binary patterns (LBP), and the scale-invariant feature transform (SIFT), are applied to extract appropriate features from the carpal and epiphyseal regions of interest. Two different datasets are used here: the University of Southern California hand atlas for training this computer-aided diagnosis (CAD) system, and Iranian radiographs for evaluating the performance of the system for BAA of Iranian children. In this study, the concatenation of HOG, LBP, and dense SIFT feature vectors, together with background subtraction, is applied to improve the performance of this approach. A support vector machine (SVM) and K-nearest neighbors are used for classification, with the SVM yielding better results. Results: The accuracy is 90% for female radiographs and 71.42% for male radiographs. The mean absolute error is 0.16 and 0.42 years for female and male test radiographs, respectively. Cohen's kappa coefficients are 0.86 and 0.6 (P < 0.05) for female and male radiographs, respectively. The results indicate that the proposed approach is in substantial agreement with the bone age reported by an experienced radiologist. Conclusion: This approach is easy to implement and reliable, and thus qualified for CAD and automatic BAA of Iranian children. © 2021 Isfahan University of Medical Sciences (IUMS). All rights reserved.
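The feature-fusion idea can be sketched minimally: toy HOG-like and LBP-like descriptors computed with NumPy and concatenated into one vector. The real system uses full HOG/LBP/dense-SIFT implementations over the carpal and epiphyseal regions; the patch size and bin counts below are illustrative assumptions:

```python
import numpy as np

def hog_like(img, bins=9):
    """Histogram of gradient orientations over the whole patch (toy HOG)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    return hist / (hist.sum() + 1e-9)

def lbp_like(img):
    """8-neighbour local binary pattern histogram (toy LBP)."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=int)
    for k, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes += (nb >= c).astype(int) << k   # one bit per neighbour
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

rng = np.random.default_rng(0)
patch = rng.random((32, 32))                  # stand-in for a hand ROI
feat = np.concatenate([hog_like(patch), lbp_like(patch)])  # fused descriptor
print(feat.shape)  # (265,)
```

In the paper's pipeline, descriptors like this (plus dense SIFT) would be concatenated per region and fed to the SVM classifier.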
2025 29th International Computer Conference, Computer Society of Iran, CSICC 2025, pp. 362-367
Analyzing and evaluating the behavior of different non-player characters (NPCs) plays a significant role in improving the playing experience of a video game. Furthermore, different playing archetypes can be obtained by changing the configuration of NPCs. Hence, with an interpretable measurement of the playing archetypes of NPCs, a wide range of behaviors can be attained. Additionally, such a measurement gives insight into how changes in an NPC's configuration change its behavior. The proposed method can significantly increase the interpretability of NPCs and ease the design of NPCs with the desired behavior. In this paper, by comparing the behavior of different NPCs with previously gathered data, cross-correlation is used as a similarity measurement to interpret changes in the behavior of the NPCs, given their configurations. The proposed method can help game designers evaluate the performance of their NPCs. It is shown that by comparing data gathered from the states of NPCs with those of users, useful information for determining the driving style of an NPC can be acquired. Afterwards, by setting references for behavior patterns, new NPCs can be evaluated and classified into different playing archetypes. Moreover, the importance of the cross-correlation values, as well as the Gaussian model fitted to this signal, for characterizing the behavior of different NPCs is discussed in detail. The test bed for the proposed method is a driving game built in the Unity game engine. © 2020 IEEE.
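The core comparison can be sketched as a peak normalized cross-correlation between two behavior traces; the synthetic signals below are illustrative stand-ins, not the paper's gathered data:

```python
import numpy as np

def xcorr_similarity(a, b):
    """Peak of the normalized cross-correlation between two behavior traces.

    Values near 1 mean the two NPC state signals follow a similar pattern,
    possibly shifted in time; values near 0 mean dissimilar behavior.
    """
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.correlate(a, b, mode="full").max())

t = np.linspace(0, 4 * np.pi, 200)
reference = np.sin(t)                      # reference driving-style trace
shifted = np.sin(t - 0.5)                  # same style, delayed in time
noise = np.random.default_rng(1).standard_normal(200)  # unrelated behavior

s_same = xcorr_similarity(reference, shifted)
s_diff = xcorr_similarity(reference, noise)
print(s_same > s_diff)  # True
```

Comparing a new NPC's trace against a library of such reference traces would then let it be assigned to the closest playing archetype.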
Machine Vision and Applications (14321769), 31(7-8)
Eye-gaze tracking through a camera is commonly used in a number of areas, such as computer user interface systems, sports science, psychology, and biometrics. Robustness to head and camera rotation has been a critical problem for gaze-tracking algorithms in recent years. In this paper, Haar-like features and a modified version of the group method of data handling, together with segmented regression, are used to find the base points of the eyes in a facial image. Then, a geometric transformation is applied to detect the precise eye-gaze direction. The proposed algorithm is tested on the GI4E and Columbia Gaze datasets and compared to other algorithms. The results show adequate accuracy, especially when the head/camera is rotated. © 2020, Springer-Verlag GmbH Germany, part of Springer Nature.
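Haar-like features are conventionally evaluated in constant time from an integral image. The sketch below shows the standard two-rectangle feature on a synthetic edge pattern; it illustrates the general technique only, not the paper's specific feature set:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y, :x]."""
    return np.pad(img.cumsum(0).cumsum(1), ((1, 0), (1, 0)))

def rect_sum(ii, y, x, h, w):
    """Sum of the h x w rectangle with top-left corner (y, x), in O(1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, y, x, h, w):
    """Two-rectangle Haar-like feature: left half minus right half."""
    return rect_sum(ii, y, x, h, w // 2) - rect_sum(ii, y, x + w // 2, h, w // 2)

# A dark left half next to a bright right half gives a strong response,
# the kind of contrast pattern found at eye corners and iris boundaries.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
ii = integral_image(img)
print(haar_two_rect(ii, 0, 0, 8, 8))  # -32.0
```

Because each feature costs only four table lookups, thousands of such features can be evaluated per candidate region, which is what makes Haar-based detection fast.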
EuroMediterranean Biomedical Journal (22797165), 10(2), pp. 76-86
Myasthenia Gravis (MG) is an autoimmune disorder, which may lead to paralysis and even death if not treated in time. One of its primary symptoms is severe muscular weakness, initially arising in the eye muscles. Testing the mobility of the eyeball can help in the early detection of MG. In this study, software was designed to analyze the ability of the eye muscles to focus in various directions, thus estimating the MG risk. Progressive weakness in gazing at the directions prompted by the software can reveal abnormal fatigue of the eye muscles, which is an alert sign for MG. To assess the user's ability to keep gazing in a specified direction, a fuzzy algorithm was applied to images of the user's eyes to determine the position of the iris in relation to the sclera. The results of the tests performed on 18 healthy volunteers and 18 volunteers in the early stages of MG confirmed the validity of the suggested software. © EUROMEDITERRANEAN BIOMEDICAL JOURNAL
International Journal of Innovative Computing, Information and Control (13494198), 9(6), pp. 2441-2464
Although color reduction is a common tool to simplify preliminary segmentation in color images, it does not show promising performance in the case of outdoor images, due to complexities such as color variety, luminance effects, abundant texture details, and the diversity of objects in such images. In this paper, we propose a multi-stage color clustering procedure based on the well-known k-means algorithm, referred to as GCE (Gradual Cluster Elimination). Here, a multi-resolution pyramid of the original image is exploited for the deliberate expunging of texture details, followed by a step-by-step clarification approach. Moreover, we gradually eliminate the apparent clusters while introducing new ones at each stage of the mentioned pyramid, so as to consider all principal colors. The required similarity thresholds for color re-clustering are obtained automatically from the multi-resolution images using the statistical characteristics of their color distributions. We have compared the performance of the GCE procedure and the standard k-means for color reduction on two outdoor datasets: the University of Isfahan Data Set (UIDS) and the Sowerby Image Dataset (SID) of British Aerospace. The experimental results have shown the advantages of the suggested procedure over traditional approaches, including improvement in segmentation quality in terms of two well-known quantitative metrics (PRI and VoI), higher accuracy and convergence speed, and simultaneous suppression of the over-segmentation and under-segmentation problems. The results of this research can be applied to many practical fields requiring segmentation of outdoor scenes, such as wearable computers, robotics, automatic vehicle control, and assisting visually impaired people. © 2013 ICIC International.
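The base clustering step that GCE builds on can be sketched as plain k-means over pixel colors. The multi-resolution pyramid and gradual cluster elimination of the paper are not reproduced here, and the synthetic two-color "image" is an illustrative assumption:

```python
import numpy as np

def kmeans_colors(pixels, k, iters=20, seed=0):
    """Plain Lloyd's k-means over RGB pixels -- the core step behind GCE."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest center
        d = np.linalg.norm(pixels[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned pixels
        for j in range(k):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers, labels

rng = np.random.default_rng(1)
# Synthetic image: two well-separated color populations
pixels = np.vstack([rng.normal(0.2, 0.02, (500, 3)),
                    rng.normal(0.8, 0.02, (500, 3))])
centers, labels = kmeans_colors(pixels, k=2)
print(np.round(np.sort(centers[:, 0]), 1))  # ~[0.2 0.8]
```

GCE would then re-cluster at successive pyramid levels, discarding clusters that fall below automatically derived similarity thresholds and introducing new ones.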
Annals of DAAAM and Proceedings of the International DAAAM Symposium (17269679), pp. 167-168
In this paper, the characteristics of the human perception system as well as image features are exploited in a dual-resolution vision system for segmentation/object detection in outdoor scenes. The texture details are deliberately removed and similar color shades are combined in a low-resolution version of the image to reduce the excess image information. Using a color clustering algorithm, the color regions of the low-resolution image are found. Then, a weighted graph is constructed whose nodes contain the detailed features of the regions, derived from the high-resolution image. The weight of the edge between two nodes, the Nodes' Merging Potential (NMP), denotes the advantage of merging them to construct the fundamental image regions. This graph is then pruned according to the NMP values, so that the main segments are developed and then identified. The proposed algorithm has shown high speed and accuracy for segmentation/object detection in outdoor scenes.
Expert Systems with Applications (09574174), 38(10), pp. 13188-13197
Reducing the number of colors in an image while preserving its quality is important in many applications, such as image analysis and compression. It also decreases memory and transmission bandwidth requirements. Moreover, classification of image colors is applicable to image segmentation, object detection and separation, and the production of pseudo-color images. In this paper, the Kohonen self-organizing map (SOM) neural network is employed to form an adaptive color reduction method. To enhance the performance of this method, we have used redundant features obtained by one-to-one functions from the three main components of the color image (i.e., the red, green, and blue channels). Exploiting these features increases the color discrimination and detail illustration ability of the network compared to conventional approaches. This method leads to satisfactory results in image segmentation, especially in small object detection problems. We also show that as the number of features in the Kohonen network grows, even when non-deterministic one-to-one functions are used, the network's performance considerably improves. Moreover, we study the effect of various adaptation algorithms in the Kohonen network training stage. Finally, a multi-stage color reduction procedure employing both Kohonen neural networks and conventional vector quantization schemes further improves performance. Several experimental results are presented to illustrate the characteristics of the different approaches. © 2010 Elsevier Ltd. All rights reserved.
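A minimal sketch of the Kohonen idea: a 1-D self-organizing map trained on pixel colors to produce a reduced palette. The redundant one-to-one feature channels and the multi-stage procedure described above are not reproduced; the learning-rate and radius schedules are illustrative assumptions:

```python
import numpy as np

def som_palette(pixels, n_colors=4, epochs=10, seed=0):
    """Train a 1-D Kohonen map on pixel colors to build a reduced palette."""
    rng = np.random.default_rng(seed)
    weights = rng.random((n_colors, pixels.shape[1]))
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs)                      # decaying rate
        radius = max(0.5, n_colors / 2 * (1 - epoch / epochs))
        for p in pixels[rng.permutation(len(pixels))]:
            bmu = np.argmin(np.linalg.norm(weights - p, axis=1))
            # Neighbourhood pull: nodes near the BMU on the 1-D grid move too
            dist = np.abs(np.arange(n_colors) - bmu)
            h = np.exp(-(dist ** 2) / (2 * radius ** 2))
            weights += lr * h[:, None] * (p - weights)
    return weights

rng = np.random.default_rng(2)
# Synthetic image with three dominant colors
pixels = np.vstack([rng.normal(m, 0.02, (200, 3)) for m in (0.1, 0.5, 0.9)])
palette = som_palette(pixels, n_colors=4)
# Quantize: map every pixel to its nearest palette color
labels = np.linalg.norm(pixels[:, None] - palette[None], axis=2).argmin(1)
```

The neighbourhood function is what distinguishes the SOM from plain vector quantization: neighbouring palette entries stay ordered, which helps preserve smooth color shades.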
Annals of DAAAM and Proceedings of the International DAAAM Symposium (17269679), pp. 345-346
One of the problems in middle-size robot soccer, which can also be applied to the routing of scanner robots in unpredictable environments, is pass leading among robots: choosing the best teammate to receive the ball without any need for explicit communication among the robots. In this paper, we have developed an algorithm for this problem based on a Perceptron neural network, which determines the best passing angle based on the topological data of the play field (i.e., the positions of the robots). With a modification to the Perceptron structure and a proper data presentation approach, a considerable improvement in solution performance has been achieved.