Department of Artificial Intelligence Engineering

The Department of Artificial Intelligence Engineering is a leading center for education and research in artificial intelligence engineering. With expert faculty, modern facilities, and a strong focus on innovation, we prepare students for successful careers and academic excellence. Join us and be part of a dynamic learning community shaping the future.
Articles
ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings (15206149) 5, pp. 309-312
This paper introduces an efficient algorithm that jointly estimates the differential time delays and frequency offsets between two signals. The approach is a two-step procedure. First, the differential frequency offsets are estimated from measurements of the autocorrelation functions of the received and transmitted signals. Second, the time delays are estimated from estimates of the higher-order statistics of the two signals involved. The major advantage of the approach is its remarkably reduced computational complexity compared with traditional approaches. The experimental results indicate that, despite its reduced computational complexity, the algorithm performs better than the traditional methods in most cases of interest. © 1992 IEEE.
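For context, here is a minimal sketch of the classical cross-correlation baseline for differential time-delay estimation, i.e., the kind of traditional approach whose complexity the paper reduces; the signal setup and names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def estimate_delay_xcorr(x, y):
    """Estimate the integer-sample delay of y relative to x by locating
    the peak of their cross-correlation (the classical baseline)."""
    corr = np.correlate(y, x, mode="full")          # lags -(N-1) .. (N-1)
    return int(np.argmax(np.abs(corr))) - (len(x) - 1)

# Illustrative use: y is x delayed by 25 samples plus noise.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
y = np.roll(x, 25) + 0.1 * rng.standard_normal(1024)
print(estimate_delay_xcorr(x, y))  # prints approximately 25
```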
In this paper we propose a new deinterlacing algorithm using motion compensation and directional interpolation. To limit propagation error, a major drawback of conventional motion-compensated methods, motion estimation is performed using original lines only, for both same- and opposite-parity fields. In addition, a threshold value is used during the search to recognize situations where the motion estimator fails to find an optimal matching block. Enhanced edge-based line average with median filtering is used in these situations. Experimental results show that the proposed method performs better than the traditional motion-compensated method, based on objective and subjective criteria. © 2006 IEEE.
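To illustrate the directional-interpolation fallback used here, the following is a minimal sketch of basic edge-based line average (ELA) for one missing line of a field; the three-direction search and array layout are simplifying assumptions (the paper's enhanced ELA additionally applies median filtering):

```python
import numpy as np

def ela_interpolate_line(above, below):
    """Basic edge-based line average: for each missing pixel, pick the
    direction (135/90/45 degrees) with the smallest luminance difference
    between the line above and the line below, then average along it."""
    w = len(above)
    out = np.empty(w)
    for x in range(w):
        candidates = []
        for d in (-1, 0, 1):                      # left-diagonal, vertical, right-diagonal
            if 0 <= x + d < w and 0 <= x - d < w:
                a, b = int(above[x + d]), int(below[x - d])
                candidates.append((abs(a - b), (a + b) / 2))
        out[x] = min(candidates)[1]               # direction with the best match
    return out
```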
Electronic Transactions on Numerical Analysis (10689613) 23, pp. 251-262
A new and efficient sinc-convolution algorithm is introduced for the numerical solution of the radiosity equation. This equation has many applications, including the production of photorealistic images. The sinc-convolution method is based on using collocation to replace multi-dimensional convolution-type integrals - such as two-dimensional radiosity integral equations - by a system of algebraic equations. The developed algorithm solves for the illumination of a surface or a set of surfaces when both the reflectivity and emissivity of those surfaces are known. It separates the radiosity equation's variables to approximate its solution. The separation of variables eliminates the formulation of huge full matrices and therefore reduces the required storage, as well as computational complexity, compared with classical approaches. Also, the highly singular nature of the kernel, which causes great difficulties for classical numerical methods, poses no difficulty for sinc-convolution. In addition, the new algorithm can be readily adapted for parallel computation for even faster execution. The results show that the developed algorithm clearly reveals the color bleeding phenomenon, a natural phenomenon not revealed by many other methods. These advantages should make real-time photorealistic image production feasible. Copyright © 2006, Kent State University.
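For reference, the radiosity integral equation being solved can be written in its standard form (the notation below is the common convention, not taken from the paper):

```latex
% Radiosity B at surface point x = emitted light E(x) plus the reflectivity
% rho(x) times the radiosity arriving from every other visible point y in S.
B(x) \;=\; E(x) \;+\; \rho(x) \int_{S} \frac{\cos\theta_x \,\cos\theta_y}{\pi\, r_{xy}^{2}}\, V(x,y)\, B(y)\, \mathrm{d}A_y
```

Here $\theta_x$ and $\theta_y$ are the angles between the surface normals and the line joining $x$ and $y$, $r_{xy}$ is the distance between the two points, and $V$ is the binary visibility term. The $1/r_{xy}^{2}$ factor is the highly singular kernel the abstract refers to.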
This paper proposes optimization techniques to accelerate the enhanced edge-based line average (ELA) deinterlacing method. ELA is based on edge detection and directional interpolation as well as median filtering. The techniques are first based on low-level software optimizations to accelerate loops and arithmetic operations. Specialized hardware structures and corresponding new instructions are then defined for the Xtensa reconfigurable processor to accelerate ELA-specific operations. The combined software and hardware techniques result in a speed-up of 67× when compared to a base case. This accelerates processing from 25 times slower than real time to 2.7 times faster for an NTSC frame rate (consistent with the overall factor, since 25 × 2.7 ≈ 67.5). A parallel processing version of ELA is also discussed. © 2006 IEEE.
IEEE Transactions on Consumer Electronics (00983063) 53(3), pp. 1117-1124
A new motion compensated deinterlacing method using forward and backward motion estimation is proposed in this paper. Bi-directional motion estimation is performed using two previous and two subsequent fields. The motion estimator uses pre-filtering prior to motion estimation for the current and the subsequent two fields. The motion estimator finds a single optimal matching block in the same or opposite parity reference fields. Motion compensation is done according to the amount of vertical motion within the reference fields to achieve the highest vertical resolution improvement. A novel technique to prevent the appearance of visual artifacts in the presence of fast-moving objects is proposed. Experimental results show that the proposed method performs better than the conventional deinterlacing methods, based on objective and subjective criteria. © 2007 IEEE.
Block matching has been widely used for block motion estimation; however, most block matching algorithms impose a heavy computational load on the system and require considerable execution time. This prevents their use in time-critical applications. In this paper, a new block matching technique is presented which has low computational complexity as well as high accuracy. The main assumption of the algorithm is that all the pixels of a block move equally with a linear motion. Experimental results show the feasibility and effectiveness of the proposed algorithm. © 2008 IEEE.
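For comparison, here is a minimal sketch of the exhaustive-search block matching baseline that such fast methods aim to beat; the block size, search range, and sum-of-absolute-differences (SAD) cost are conventional choices, not details from the paper:

```python
import numpy as np

def full_search_sad(cur, ref, bx, by, bs=16, sr=8):
    """Find the motion vector for the bs x bs block at (by, bx) in `cur`
    by exhaustively minimizing the sum of absolute differences in `ref`."""
    block = cur[by:by + bs, bx:bx + bs].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]:
                continue                          # candidate falls outside the frame
            cand = ref[y:y + bs, x:x + bs].astype(np.int32)
            sad = np.abs(block - cand).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv
```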
In this paper, a new robust digital image watermarking algorithm based on a joint DWT-DCT transformation is proposed. Imperceptibility is provided as well as higher robustness against common signal processing attacks. A binary watermark image is embedded in certain sub-bands of a 3-level DWT of a host image. The DCT of each selected DWT sub-band is then computed, and PN-sequences representing the watermark bits are embedded in the coefficients of the corresponding DCT middle frequencies. In the extraction stage, the watermarked image, which may have been attacked, is first preprocessed by sharpening and Laplacian of Gaussian filters. Then, the same approach as in the embedding process is used to extract the DCT middle frequencies of each sub-band. Finally, the correlation between the mid-band coefficients and the PN-sequences is calculated to determine the watermark bits. Experimental results show that the proposed method improves on the performance of watermarking algorithms based on the joint DWT-DCT transformation. © 2008 IEEE.
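A minimal sketch of the general joint DWT-DCT embedding pattern described here, using PyWavelets and SciPy; the wavelet choice, the selected sub-band, the mid-frequency positions, and the gain are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def embed_bit(host, bit, pn0, pn1, gain=5.0):
    """Embed one watermark bit: 3-level DWT, 2-D DCT of one detail sub-band,
    add the bit's PN-sequence (a 4x4 matrix here) to mid-frequency coefficients."""
    coeffs = pywt.wavedec2(host, "haar", level=3)
    ch3, cv3, cd3 = coeffs[1]                     # level-3 detail sub-bands
    d = dctn(cv3, norm="ortho")                   # assumed sub-band choice
    d[4:8, 4:8] += gain * (pn1 if bit else pn0)   # assumed mid-band positions
    coeffs[1] = (ch3, idctn(d, norm="ortho"), cd3)
    return pywt.waverec2(coeffs, "haar")

# Extraction would recompute the same DCT band and choose the PN-sequence
# (pn0 vs. pn1) with the higher correlation against the mid-band coefficients.
```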
This paper presents a new robust digital image watermarking technique based on the Discrete Cosine Transform (DCT) and a neural network, namely a Full Counterpropagation Neural Network (FCNN). The FCNN is used to simulate the perceptual and visual characteristics of the original image. These perceptual features are used to determine the highest changeable threshold values of the DCT coefficients, which in turn are used to embed the watermark in the DCT coefficients of the original image. The watermark is a binary image whose pixel values are inserted as zero and one values in the DCT coefficients. The implementation results show that this watermarking algorithm has acceptable robustness against different kinds of watermarking attacks. © 2008 IEEE.
Background estimation is one of the most challenging phases in extracting foreground objects from video sequences. In this paper we present a background modeling approach that uses the similarity of frames to extract background areas from the video sequence. We use a window over the frame history and compute the similarity between the selected frames of this window, forming a similarity window. The properties of the similarity window depend on the characteristics of the scene and can be adjusted parametrically. Our preliminary results show that if proper parameters are chosen, this method can give a good approximation of the background model. © 2008 IEEE.
EURASIP Journal on Advances in Signal Processing (16876172) 2008
An electrocardiogram (ECG) beat classification scheme based on the multiple signal classification (MUSIC) algorithm, morphological descriptors, and neural networks is proposed for discriminating nine ECG beat types. These are normal, fusion of ventricular and normal, fusion of paced and normal, left bundle branch block, right bundle branch block, premature ventricular contraction, atrial premature contraction, paced beat, and ventricular flutter. ECG signal samples from the MIT-BIH arrhythmia database are used to evaluate the scheme. The MUSIC algorithm is used to calculate the pseudospectrum of the ECG signals. The low-frequency samples, which carry the most valuable heartbeat information, are selected. These samples, along with two morphological descriptors that capture the characteristics of all parts of the heart, form an input feature vector. This vector is used to train a classifier neural network with nine outputs corresponding to the nine beat types. Two neural network schemes, namely a multilayer perceptron (MLP) neural network and a probabilistic neural network (PNN), are employed. The experimental results achieved a promising accuracy of 99.03% for classifying the beat types using the MLP neural network. In addition, our scheme recognizes the NORMAL class with 100% accuracy and never misclassifies any other class as NORMAL.
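A minimal numpy sketch of the MUSIC pseudospectrum computation at the heart of this feature extraction; the correlation-matrix size, signal-subspace order, and frequency grid are illustrative assumptions:

```python
import numpy as np

def music_pseudospectrum(x, order=4, m=32, nfreq=256):
    """MUSIC: eigendecompose the sample correlation matrix, project steering
    vectors onto the noise subspace, and return 1 / (projection power)."""
    # Build an m x m correlation matrix from sliding windows of the signal.
    windows = np.lib.stride_tricks.sliding_window_view(x, m)
    R = windows.T @ windows / len(windows)
    vals, vecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = vecs[:, : m - order]                 # noise subspace (smallest eigenvalues)
    freqs = np.linspace(0, 0.5, nfreq)        # normalized frequency grid
    k = np.arange(m)
    spectrum = np.empty(nfreq)
    for i, f in enumerate(freqs):
        a = np.exp(-2j * np.pi * f * k)       # steering vector at frequency f
        p = En.conj().T @ a
        spectrum[i] = 1.0 / np.real(p.conj() @ p)
    return freqs, spectrum
```

The low-frequency portion of such a pseudospectrum would then be concatenated with the morphological descriptors to form the classifier's input vector.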
The reliable execution of a mobile agent is a very important design issue in building a mobile agent system, and many fault-tolerant schemes have been proposed so far. Security is a major problem of mobile agent systems, especially when money transactions are concerned. Security for the partners involved is handled by encryption methods based on a public key authentication mechanism and by secret key encryption of the communication. In this paper, we examine qualitatively the security considerations and challenges in application development with the mobile code paradigm. We identify a simple but crucial security requirement for the general acceptance of the mobile code paradigm, and evaluate the current status of mobile code development in meeting this requirement. We find that the mobile agent approach is the most interesting and challenging branch of mobile code in the security context. Therefore, we built a simple agent-based information retrieval application, the Traveling Information Agent system, and discuss the security issues of the system in particular. © 2008 IEEE.
Journal of the Chinese Institute of Engineers, Transactions of the Chinese Institute of Engineers, Series A/Chung-kuo Kung Ch'eng Hsueh K'an (02533839) 31(4), pp. 649-657
In recent years, Active Contour Models (ACMs) have become powerful tools for object detection and image segmentation in computer vision and image processing applications. This paper presents a new energy function in parametric active contour models for object detection and image segmentation. In the proposed method, a new pressure energy called "texture pressure energy" is added to the energy function of the parametric active contour model to detect and segment a textured object against a textured background. In this scheme, the texture features of the contour are calculated by a moment-based method. Then, by comparing these features with the texture features of the object, the contour curve is expanded or contracted to adapt to the object boundaries. Experimental results show that the proposed method segments more efficiently and accurately than the traditional method when both the object and the background have texture properties. © 2008, Taylor & Francis Group, LLC.
European Signal Processing Conference (22195491) pp. 2377-2381
A new approach based on the root-MUSIC frequency estimation method and a Multilayer Perceptron neural network is introduced. In this method, a feature vector is formed using power frequencies, entropy, and standard deviation, as well as the complexity of the time-domain electroencephalography (EEG) signal. The power frequency values are estimated using the root-MUSIC algorithm. The resulting feature vector is then classified into three categories, namely healthy, interictal (epileptic during a seizure-free interval), and ictal (full epileptic condition during a seizure interval), using a Multilayer Perceptron Neural Network (MLPNN). The experimental results show that EEG state classification may be achieved with approximately 94.53% accuracy and a variance of 0.063% when applying the method to an available public database. The method offers high speed and high accuracy as well as a low misclassification rate. © EURASIP, 2009.
The Bayesian Optimization Algorithm (BOA) has been used with different local structures to represent more complex models and with a variety of scoring metrics to evaluate Bayesian networks. But the combined effects of these elements on the performance of BOA have not been investigated yet. In this paper the performance of BOA is studied using two criteria: the number of fitness evaluations and the structural accuracy of the model. It is shown that simple exact local structures such as CPTs, in conjunction with the complexity-penalizing BIC metric, outperform the others in terms of model accuracy. Considering the number of fitness evaluations (efficiency) of the algorithm, however, CPTs with the other complexity-penalizing metric, K2P, perform better. Copyright 2009 ACM.
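For reference, the BIC scoring metric mentioned here takes the standard form (stated in the usual notation, not taken from the paper):

```latex
% BIC score of a Bayesian network structure B on a dataset D of N samples:
% maximized log-likelihood minus a penalty on the number of free parameters.
\mathrm{BIC}(B, D) \;=\; \log P(D \mid B, \hat{\theta}) \;-\; \frac{\log N}{2}\, |\Theta_B|
```

where $\hat{\theta}$ denotes the maximum-likelihood parameters and $|\Theta_B|$ the number of free parameters of the network. The penalty term is what biases the structure search toward simpler models, which is consistent with the accuracy advantage reported for complexity-penalizing metrics.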
We present techniques used to create a high performance application-specific instruction-set processor (ASIP) implementation of the Pattern-Based Directional Interpolation (PBDI) intra-field deinterlacing algorithm. The proposed techniques focus primarily on an efficient utilization of the available memory bandwidth. They include the use of Very Long Instruction Words (VLIW) and an appropriate choice of custom instructions and application-specific registers in order to form a processing pipeline. We report a speedup factor of 1351 in comparison with a software-only implementation of the algorithm running on a general-purpose 32-bit RISC processor.
Estimation of distribution algorithms, especially those using a Bayesian network as their probabilistic model, have been able to competently solve many challenging optimization problems, including the class of hierarchical problems. Since model-building constitutes an important part of these algorithms, finding ways to improve the quality of the models built during optimization is very beneficial. This in turn requires mechanisms to evaluate the quality of the models, as each problem has a large space of possible models. Efforts in this field have mainly concentrated on single-level problems, due to the complex structure of hierarchical problems, which makes them hard to treat. In order to extend model analysis to hierarchical problems, a model evaluation algorithm that can be applied to different problems is proposed in this paper. The results of applying the algorithm to two common hierarchical problems are also presented and discussed. © 2009 IEEE.
International Journal of Imaging Systems and Technology (10981098) 19(3), pp. 179-186
In recent years, active contour models (ACM) have been considered powerful tools for image segmentation and object tracking in computer vision and image processing applications. This article presents a new tracking method based on parametric active contour models. In the proposed method, a new pressure energy called "texture pressure energy" is added to the energy function of the parametric active contour model to detect and track a textured target object against a textured background. In this scheme, the texture features of the contour are calculated by a moment-based method. Then, by comparing these features with the texture features of the target object, the contour curve is expanded or contracted to adapt to the object boundaries. Experimental results show that the proposed method is more efficient and accurate in tracking objects than traditional methods when both the object and the background are textured in nature. © 2009 Wiley Periodicals, Inc.
Journal of Circuits, Systems and Computers (17936454) 19(2), pp. 451-477
In this paper, a novel watermarking technique based on the parametric slant-Hadamard transform is presented. Our approach embeds a pseudo-random sequence of real numbers in a selected set of the parametric slant-Hadamard transform coefficients. By exploiting statistical properties of the embedded sequence, the mark can be reliably extracted without resorting to the original uncorrupted image. The presented method increases the flexibility of the watermarking scheme, in which changing the parameter set helps to improve fidelity and robustness against a number of attacks. Experimental results show that the proposed technique is secure and indeed highly robust to these attacks. © 2010 World Scientific Publishing Company.
Biomedical Signal Processing and Control (17468108) 5(2), pp. 147-157
In this paper, a new approach based on eigen-system pseudo-spectral estimation methods, namely Eigenvector (EV) and MUSIC, and a Multilayer Perceptron (MLP) neural network is introduced. In this approach, the calculated EEG (electroencephalogram) spectrum is divided into smaller frequency sub-bands. Then, a set of features, {maximum, entropy, average, standard deviation, mobility}, is extracted from these sub-bands. Next, by combining a set of EEG time-domain features, {standard deviation, complexity measure}, with the spectral feature set, a feature vector is formed. The feature vector is then fed into an MLP neural network to classify the signal into one of three states: normal (healthy), epileptic patient signal in a seizure-free interval (inter-ictal), and epileptic patient signal in a full seizure interval (ictal). The experimental results show that classification of the EEG signals may be achieved with approximately 97.5% accuracy and a variance of 0.095% using an available public EEG signal database. The results are among the best reported for classifying the three aforementioned states. The method offers high speed and high accuracy as well as a low misclassification rate, making practical, real-time detection of this chronic disease feasible. © 2010 Elsevier Ltd. All rights reserved.
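A minimal numpy sketch of the kinds of features listed here; the Hjorth mobility and complexity formulas are standard definitions, while the spectral sub-band statistics mirror the feature set named in the abstract (the normalization details are illustrative assumptions):

```python
import numpy as np

def hjorth_features(x):
    """Standard Hjorth descriptors used as EEG time-domain features:
    mobility = sqrt(var(x') / var(x)); complexity = mobility(x') / mobility(x)."""
    dx, ddx = np.diff(x), np.diff(np.diff(x))
    var_x, var_dx, var_ddx = np.var(x), np.var(dx), np.var(ddx)
    mobility = np.sqrt(var_dx / var_x)
    complexity = np.sqrt(var_ddx / var_dx) / mobility
    return np.std(x), mobility, complexity

def subband_features(spectrum):
    """Per sub-band summary statistics of a (pseudo-)spectrum, mirroring the
    {maximum, entropy, average, standard deviation} set in the abstract."""
    p = spectrum / spectrum.sum()                 # normalize to a distribution
    entropy = -np.sum(p * np.log2(p + 1e-12))
    return spectrum.max(), entropy, spectrum.mean(), spectrum.std()
```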
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (03029743) 6594 (PART 2), pp. 98-107
N-grams are the basic features commonly used in sequence-based malicious code detection methods in computer virology research. Empirical results from previous work suggest that, while short n-grams are easier to extract, the characteristics of the underlying executables are better represented by longer n-grams. However, as the length of an n-gram increases, the feature space grows exponentially and considerable space and computational resources are demanded. Feature selection has therefore turned out to be the most challenging step in establishing an accurate detection system based on byte n-grams. In this paper we propose an efficient feature extraction method in which, to gain more information, both adjacent and non-adjacent bi-grams are used. Additionally, we present a novel boosting feature selection method based on a genetic algorithm. Our experimental results indicate that the proposed detection system detects virus programs far more accurately than the best previously known methods. © 2011 Springer-Verlag.
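A minimal sketch of extracting adjacent and non-adjacent (gapped) byte bi-grams from an executable; the maximum gap is an illustrative parameter, not a value from the paper:

```python
from collections import Counter

def byte_bigrams(data: bytes, max_gap: int = 2) -> Counter:
    """Count bi-grams (b1, gap, b2): gap=0 gives adjacent byte pairs,
    gap>0 gives non-adjacent pairs with `gap` bytes skipped between them."""
    counts = Counter()
    for gap in range(max_gap + 1):
        for i in range(len(data) - gap - 1):
            counts[(data[i], gap, data[i + gap + 1])] += 1
    return counts

# Illustrative use on a file's raw bytes:
# with open("sample.exe", "rb") as f:
#     features = byte_bigrams(f.read())
```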
Scientific Research and Essays (19922248) 6(10), pp. 2119-2128
Digital image watermarking is one of the most important techniques for copyright protection. Robustness and imperceptibility are the basic requirements of digital image watermarking, and they are contradictory. The key factor affecting both is the watermark strength. This paper presents a new method to determine the watermark strength using Reinforcement Learning (RL) in the Discrete Cosine Transform (DCT) domain; finding the watermark strength is thus formulated as an RL problem. In our study, the defined reinforcement function has two contradictory aspects: one positive, based on the similarity between the host and watermarked images, and one negative, related to the robustness of the watermark. Therefore, a novel adaptive methodology is introduced to estimate the watermark strength so as to improve both imperceptibility and robustness at the same time. The experimental results show that the proposed RL algorithm for watermark strength estimation simultaneously improves the robustness and imperceptibility of the watermarking scheme. © 2011 Academic Journals.
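Purely as an illustration of a two-term reinforcement function of this shape, one could sketch the following; the PSNR similarity term, the bit-error robustness proxy, and the weights are hypothetical choices, not the paper's actual reinforcement function:

```python
import numpy as np

def reward(host, marked, extracted_bits, true_bits, w_sim=1.0, w_rob=1.0):
    """Hypothetical two-term reward: reward imperceptibility via the PSNR of
    the watermarked image; penalize bit errors of the extracted watermark
    after a simulated attack (a proxy for lack of robustness)."""
    mse = np.mean((host.astype(float) - marked.astype(float)) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / (mse + 1e-12))    # similarity term
    ber = np.mean(extracted_bits != true_bits)          # bit-error rate
    return w_sim * psnr - w_rob * 100.0 * ber           # assumed scaling
```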
Journal of Circuits, Systems and Computers (17936454) 20(5), pp. 801-819
In this paper, an adaptive digital image watermarking technique using a fuzzy gradient in the DCT domain is presented. In our approach, the image is divided into separate blocks and the DCT is applied to each block individually. The watermark is then inserted in the transform domain and the inverse transform is carried out. We increase the robustness of the watermark by increasing the watermark strength; however, this reduces the fidelity of the watermarking scheme, because fidelity and robustness are generally in conflict with each other. To improve the fidelity, a new fuzzy-based method is introduced, in which a fuzzy gradient-based mask is generated from the host image. Then, as a post-processing stage, the generated mask is combined with the watermarked image. Experimental results show that the proposed technique has high fidelity as well as high robustness against a variety of attacks. © 2011 World Scientific Publishing Company.
International Journal of Pattern Recognition and Artificial Intelligence (02180014) 25(1), pp. 1-35
One problem in background estimation is inherent change in the background, such as waving tree branches, water surfaces, and camera shake, as well as the existence of moving objects in every image. In this paper, a new method for background estimation is proposed based on function approximation in the kernel domain. For this purpose, a Weighted Kernel-based Learning Algorithm (WKLA) is designed. WKLA includes a weighted variant of the kernel least mean square algorithm with the ability to approximate functions in the presence of noise. The proposed background estimation method thus includes two stages: first, a novel outlier detection algorithm, the Fuzzy Outlier Detector (FOD), is applied; the obtained results are then fed to the WKLA. The proposed approach can handle scenes containing moving backgrounds, gradual illumination changes, camera vibrations, and non-empty backgrounds. The qualitative results and quantitative evaluations on various indoor and outdoor sequences relative to existing approaches show the high accuracy and effectiveness of the proposed method in background estimation and foreground detection. © 2011 World Scientific Publishing Company.
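A minimal sketch of the (unweighted) kernel least mean square algorithm that WKLA builds on; the Gaussian kernel, step size, and growing-center strategy are standard KLMS choices, not details of the paper's weighted variant:

```python
import numpy as np

class KLMS:
    """Kernel LMS: predict with a kernel expansion over past inputs and,
    on each new sample, add a center weighted by the prediction error."""
    def __init__(self, step=0.5, width=1.0):
        self.step, self.width = step, width
        self.centers, self.alphas = [], []

    def _kernel(self, a, b):
        return np.exp(-np.sum((a - b) ** 2) / (2 * self.width ** 2))

    def predict(self, x):
        return sum(a * self._kernel(x, c)
                   for a, c in zip(self.alphas, self.centers))

    def update(self, x, d):
        err = d - self.predict(x)                # prediction error on new sample
        self.centers.append(np.asarray(x, dtype=float))
        self.alphas.append(self.step * err)      # new center weighted by error
        return err
```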
IET Image Processing (17519667) 5(7), pp. 611-618
This study proposes a new hybrid video deinterlacing algorithm featuring a novel approach to assessing the reliability of motion vectors. The algorithm switches between motion-compensated and enhanced edge-based line averaging (ELA) methods based on motion vector reliability. After the motion vectors are calculated, reverse motion estimation (RME) is applied to the optimal matching block. A motion vector is assumed reliable if the result of the RME refers back to the original block or to a block in its vicinity. Motion compensation is used when motion vectors are reliable, to improve the vertical resolution, and enhanced ELA is used when they are not, to prevent artefacts. Experimental results show that RME performs better than previous approaches, based on objective and subjective criteria. The computational complexity of the proposed method is up to two orders of magnitude less than previous methods, while the quality of the output compares well with the best previously reported methods. © 2011 The Institution of Engineering and Technology.
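A minimal sketch of the RME reliability test described here; `estimate_mv` stands in for any block motion estimator (e.g., an exhaustive SAD search), and the vicinity threshold is an illustrative assumption:

```python
def is_reliable(cur, ref, pos, mv, estimate_mv, tol=1):
    """Reverse motion estimation check: re-estimate motion from the matched
    block in `ref` back toward `cur`; the forward vector `mv` is deemed
    reliable if the reverse search lands on or near the original position."""
    y, x = pos
    dy, dx = mv
    rdy, rdx = estimate_mv(ref, cur, (y + dy, x + dx))  # reverse search
    back_y, back_x = y + dy + rdy, x + dx + rdx
    return abs(back_y - y) <= tol and abs(back_x - x) <= tol

# Usage: deinterlace a block with motion compensation when is_reliable(...)
# holds, and fall back to enhanced ELA interpolation otherwise.
```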
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (03029743) 6576, pp. 298-312
Information about objective values can be incorporated into evolutionary algorithms based on probabilistic modeling in order to capture the relationships between objectives and variables. This paper investigates the effects of jointly modeling the objective and variable information on the performance of an estimation of distribution algorithm for multi-objective optimization. A joint Gaussian Bayesian network over objectives and variables is learnt and then sampled, using the best objective values obtained so far as evidence. The experimental results obtained on a set of multi-objective functions, in comparison with two other competitive algorithms, are presented and discussed. © 2011 Springer-Verlag.
This paper shows that statistical algorithms proposed for the quantitative trait loci (QTL) mapping problem, and the equation of the multivariate response to selection, can be applied in multi-objective optimization. We introduce conditional dominance relationships between the objectives and propose applying results from QTL analysis and G-matrix theory to the analysis of multi-objective evolutionary algorithms (MOEAs). © 2011 Authors.
K-order Markov models have been introduced into estimation of distribution algorithms (EDAs) to solve a particular class of optimization problems in which each variable depends on its previous k variables in a given, fixed order. In this paper we investigate the use of regularization as a way to approximate k-order Markov models as k is increased. The introduced regularized models are used to balance the complexity and accuracy of the k-order Markov models. We investigate the behavior of the EDAs on several instances of the hydrophobic-polar (HP) protein problem, a simplified protein folding model. Our preliminary results show that EDAs that use regularized approximations of the k-order Markov models offer a good compromise between complexity and efficiency, and could be an appropriate choice when the number of variables is increased. Copyright 2011 ACM.
Journal of Heuristics (13811231) 18(5), pp. 795-819
Thanks to their inherent properties, probabilistic graphical models are one of the prime candidates for machine learning and decision making tasks, especially in uncertain domains. Their capabilities, such as representation, inference, and learning, if used effectively, can greatly help build intelligent systems that act appropriately in different problem domains. Evolutionary algorithms are one such discipline that has employed probabilistic graphical models to improve the search for optimal solutions in complex problems. This paper shows how probabilistic graphical models have been used in evolutionary algorithms to improve their performance in solving complex problems. Specifically, we give a survey of probabilistic model building-based evolutionary algorithms, called estimation of distribution algorithms, and compare different methods for probabilistic modeling in these algorithms. © Springer Science+Business Media, LLC 2012.
Artificial Organs (15251594) 36(7), pp. 616-628
This article presents an image processing approach dedicated to a blind mobility aid facilitated through visual intracortical electrical stimulation. The method examines a display framework based on the distances of objects in a scene. The distances of objects from the walker are measured using a size-perspective method that uses only one camera and suffers no occlusion effect. The method extracts information about the object closest to the camera and conveys a sense of distance to a blind walker. The proposed image processing method can estimate the distances of objects within 7.5 m of the walker and alert the person to the presence of the closest object. This new method offers the advantages of information reduction and scene understanding suitable for a visual prosthesis. © 2012, the Authors. Artificial Organs © 2012, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
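The size-perspective principle behind single-camera distance measurement can be stated with the standard pinhole model (conventional notation, not the paper's):

```latex
% An object of known physical height H that projects onto h pixels under a
% focal length of f pixels lies at distance Z from the camera:
Z \;=\; \frac{f \, H}{h}
```

A known-size object that appears smaller on the sensor is proportionally farther away, which is what lets a single camera rank objects by distance without stereo matching.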
Adaptation, Learning, and Optimization (18674542) 14(1), pp. 157-173
Because of their intrinsic properties, the majority of the estimation of distribution algorithms proposed for continuous optimization problems are based on the Gaussian distribution assumption for the variables. This paper reviews the relation between the general multivariate Gaussian distribution and the popular undirected graphical model of Markov networks, and discusses how they can be employed in estimation of distribution algorithms for continuous optimization. A number of learning and sampling techniques for these models, including the promising regularized model learning, are also reviewed, and their application to function optimization in the context of estimation of distribution algorithms is studied. © Springer-Verlag Berlin Heidelberg 2012.
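The link between the multivariate Gaussian and Markov networks discussed here is the standard precision-matrix characterization (stated in conventional notation, not taken from the chapter):

```latex
% For X ~ N(mu, Sigma) with precision matrix W = Sigma^{-1}, variables X_i
% and X_j are conditionally independent given all the others exactly when
% the corresponding off-diagonal precision entry vanishes:
X_i \perp X_j \mid X_{\setminus\{i,j\}} \;\Longleftrightarrow\; W_{ij} = 0
```

The zero pattern of the precision matrix is thus the missing-edge structure of the corresponding Markov network, which is why regularized (sparsity-inducing) learning of $W$ directly controls model complexity.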