Articles
Digital Signal Processing: A Review Journal (ISSN 1095-4333), vol. 164
Multi-frame image super-resolution is an effective but computationally expensive technique for image reconstruction, requiring substantial memory for data storage. One way to reduce this burden is to cut the processing load by discarding redundant frames. In this study, we introduce a novel frame selection algorithm that identifies a minimal set of frames sufficient to preserve the fidelity of the reconstructed high-resolution (HR) image while significantly reducing the complexity and memory demands of the super-resolution process. The proposed frame selection method is founded on multi-channel sampling, reference frame selection, and maximization of a lower bound on the signal-to-noise ratio. It is realized through two priority-search optimization algorithms. The first identifies the cases with the maximum number of non-empty channels by exploring a predefined domain of all feasible desired positions. The second selects, for each channel of each discovered case, the frame with the minimum translation-model noise and then computes the total noise of each case; the optimal case, together with its set of frames, is the one with the minimum total noise. Experimental results show that the proposed method reduces super-resolution complexity while producing HR images that closely match or surpass those reconstructed from the complete frame set. Comparison against established super-resolution (SR) algorithms demonstrates the speed and minimal computational overhead of the proposed approach, which runs in negligible time. © 2025 Elsevier Inc.
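As a rough illustration of the two-stage priority search described in this abstract, the following minimal sketch (a hypothetical Python rendering, not the published implementation) bins frames into sampling channels for each trial desired position, prefers candidates with the most non-empty channels, and scores each candidate by the total translation-model noise of its per-channel minimum-noise frames. The function name `select_frames`, the inputs `shifts` and `noise`, and the uniform sampling of candidate positions are all assumptions.

```python
import numpy as np

def select_frames(shifts, noise, upsample=2, n_candidates=64, seed=0):
    """Sketch of the two-stage priority search (hypothetical interface).

    shifts : (N, 2) array of estimated subpixel translations per frame
    noise  : (N,)   array of translation-model noise estimates per frame
    """
    rng = np.random.default_rng(seed)
    # Trial "desired positions" drawn from the unit cell (assumption:
    # the paper searches a predefined domain; here we sample it).
    candidates = rng.uniform(0.0, 1.0, size=(n_candidates, 2))

    best_key, best_picks = None, None
    for pos in candidates:
        # Stage 1: assign each frame to a sampling channel, i.e. the
        # quantized subpixel offset relative to the trial position.
        offsets = (shifts - pos) % 1.0
        labels = np.floor(offsets * upsample).astype(int)
        by_channel = {}
        for i, ch in enumerate(map(tuple, labels)):
            by_channel.setdefault(ch, []).append(i)

        # Stage 2: per channel keep the minimum-noise frame, then score
        # the case by the total noise of the kept frames.
        picks = [min(idxs, key=lambda i: noise[i]) for idxs in by_channel.values()]
        key = (-len(by_channel), float(noise[picks].sum()))

        if best_key is None or key < best_key:
            best_key, best_picks = key, sorted(picks)
    return best_picks

# Toy usage: 30 frames with random subpixel shifts and noise levels.
shifts = np.random.default_rng(1).uniform(0, 1, size=(30, 2))
noise = np.random.default_rng(2).uniform(0.1, 1.0, size=30)
print(select_frames(shifts, noise))
```

The tuple key `(-filled_channels, total_noise)` encodes the paper's lexicographic preference: maximize non-empty channels first, then break ties by minimum total noise.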
PeerJ Computer Science (ISSN 2376-5992), vol. 10
In this article, a novel method for removing atmospheric turbulence from a sequence of turbulent images and restoring a high-quality image is presented. Turbulence is modeled with two factors: a geometric transformation of pixel locations, which represents distortion, and varying pixel brightness, which represents spatiotemporally varying blur. The core of the proposed method is low-rank matrix factorization, which models both the geometric transformation of pixels and the spatiotemporally varying blur through an iterative process. First, a subset of images is selected using the random sample consensus (RANSAC) method. Next, the parameters of a mixture-of-Gaussians noise model are estimated. A window is then chosen around each pixel based on the entropy of the surrounding region, and the transformation matrix is estimated locally within this window. Finally, taking into account both the noise and the estimated geometric transformations of the selected images, a low-rank matrix is estimated, yielding a turbulence-free image. Experimental results on both real and simulated datasets demonstrate the efficacy of the proposed method in mitigating substantial geometric distortions; the method also reduces spatiotemporally varying blur and restores the details present in the original image. © 2024 Jafaei et al.
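The final low-rank estimation step can be illustrated with a minimal sketch: assuming the selected frames have already been registered, stacking them as matrix columns and keeping only the leading singular components approximates the static, turbulence-free scene. The function name and the plain SVD (in place of the paper's iterative, noise-aware factorization with RANSAC selection and entropy-driven local registration) are simplifying assumptions.

```python
import numpy as np

def lowrank_restore(frames, rank=1):
    """Minimal sketch of the low-rank step only (RANSAC selection,
    mixture-of-Gaussians noise estimation, and entropy-based local
    registration from the full method are omitted)."""
    h, w = frames[0].shape
    # Each registered frame becomes one column of the data matrix.
    M = np.stack([f.ravel() for f in frames], axis=1)       # (h*w, T)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]                # low-rank part
    # Fuse the low-rank columns into a single restored image.
    return L.mean(axis=1).reshape(h, w)

# Toy usage: a static gradient scene observed under additive noise.
scene = np.linspace(0, 1, 64 * 64).reshape(64, 64)
frames = [scene + 0.05 * np.random.randn(64, 64) for _ in range(12)]
restored = lowrank_restore(frames)
```

The rank-1 choice reflects the intuition that the turbulence-free scene is common to all frames, while distortions and blur vary frame to frame and fall into the residual.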
IET Computer Vision (ISSN 1751-9640), vol. 18, no. 2, pp. 191-209
The position of vehicles is determined using an algorithm with two stages: detection and prediction. The more frames on which the detection network is run, the more accurate the tracker; the more frames handled by the prediction network, the faster the algorithm. The algorithm is therefore highly flexible in trading off accuracy against speed. The YOLO-based detection network is designed to be robust to changes in vehicle scale. The detector also produces feature maps that contribute greatly to its accuracy; in these maps, using differential images and a U-Net-based module, the image is segmented into two classes, vehicle and background. To increase the accuracy of the recurrent prediction network, vehicle manoeuvres are classified using the spatial and temporal information of the vehicles simultaneously. This classifier is considerably more effective than classifiers that treat spatial and temporal information separately. Experiments on the Highway and UA-DETRAC datasets demonstrate the performance of the proposed algorithm in urban traffic monitoring systems. © 2023 The Authors. IET Computer Vision published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.
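The detect/predict alternation at the heart of the accuracy-speed trade-off can be sketched as follows. Here `detect` stands in for the YOLO-based detector, and the constant-velocity extrapolation is a toy stand-in for the recurrent, manoeuvre-aware predictor; every name and the `detect_every` schedule are assumptions, not the published interface.

```python
import numpy as np

def constant_velocity_predict(history):
    """Toy stand-in for the recurrent predictor: extrapolate each box
    by the displacement observed between the last two frames."""
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])

def track(frames, detect, detect_every=5, predict=constant_velocity_predict):
    """Run the expensive detector every `detect_every` frames and the
    cheap predictor in between (hypothetical interface)."""
    history = []
    for t, frame in enumerate(frames):
        if t % detect_every == 0 or not history:
            boxes = detect(frame)        # accurate but slow
        else:
            boxes = predict(history)     # fast, uses past trajectory
        history.append(boxes)
    return history

# Toy usage: one box moving right 2 px per frame; the "detector" reads
# the ground truth and the predictor fills the gaps between detections.
truth = [np.array([[10 + 2 * t, 20, 50 + 2 * t, 60]], float) for t in range(12)]
tracks = track(range(12), detect=lambda t: truth[t], detect_every=4)
```

Raising `detect_every` shifts work from the detector to the predictor, which is exactly the flexibility the abstract describes.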
Visual Computer (ISSN 1432-2315), vol. 40, no. 10, pp. 6825-6841
An anomaly is a pattern, behavior, or event that rarely occurs in an environment. Video anomaly detection has always been a challenging task; home security, public-area monitoring, and quality control on production lines are only a few of its applications. The spatio-temporal nature of videos, the lack of an exact definition of an anomaly, and the inefficiency of feature extraction for videos are among the challenges researchers face in video anomaly detection. To address these challenges, we propose a method that uses parallel deep structures to extract informative features from videos. The method consists of several units, including an attention unit, frame sampling units, spatial and temporal feature extractors, and thresholding. Using these units, we propose a video anomaly detection method that aggregates the results of four parallel structures; this aggregation brings generality and flexibility to the algorithm. The proposed method achieves satisfactory results on four popular video anomaly detection benchmarks. © The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2024.
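A minimal sketch of the aggregation-and-thresholding idea follows. The scorer callables, the mean fusion rule, and the threshold value are all assumptions standing in for the four parallel structures described above; the toy scorers merely illustrate the shapes involved.

```python
import numpy as np

def detect_anomalies(clip, scorers, threshold=0.5):
    """Fuse per-frame anomaly scores from parallel structures
    (hypothetical interface; mean fusion is an assumption)."""
    scores = np.stack([score(clip) for score in scorers])  # (S, T)
    fused = scores.mean(axis=0)
    return fused > threshold                               # per-frame flags

# Toy usage: two "structures" score a clip of 8 frames (T, H, W).
clip = np.random.rand(8, 32, 32)
scorers = [
    # crude spatial proxy: per-frame intensity spread
    lambda c: c.reshape(len(c), -1).std(axis=1),
    # crude temporal proxy: mean absolute frame-to-frame change
    lambda c: np.abs(np.diff(c, axis=0, prepend=c[:1])).mean(axis=(1, 2)),
]
flags = detect_anomalies(clip, scorers, threshold=0.3)
```

Fusing independent spatial and temporal scorers rather than relying on a single model is what gives the aggregation step its robustness across the different benchmarks.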