The Dean’s Office
The Education Department is a core unit within the faculty, responsible for planning, organizing, and overseeing educational activities. It works closely with academic staff to design and update course curricula, coordinate class schedules, and enhance the overall quality of teaching. The department aims to provide a supportive environment for effective learning and the academic development of students. It also plays a key role in academic advising, addressing educational concerns, and organizing consultation sessions. By applying modern teaching methods and responding to current educational needs, the Education Department strives to improve the learning process and contribute to student success.
Research Output
Articles
Publication Date: 2011
pp. 77-86
This paper presents a novel image security system based on the replacement of pixel values using recursive cellular automata (CA) substitution. The advantages of the proposed method are that it is computationally efficient and passes sensitivity analysis tests reasonably well. The method uses one half of the image data to encrypt the other half mutually. The algorithm can encrypt images in parallel and can also be applied to color image encryption. The key size is dynamic, and changing even a single bit of the security key makes the image unrecoverable, since the method is key-sensitive. Simulation results obtained using color, black-and-white, and gray-level images demonstrate the good performance of the proposed image security system. © 2011 IEEE.
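The abstract describes the approach only at a high level. The following is a minimal Python sketch of the general idea, mutual half-image encryption driven by a cellular-automaton keystream, assuming a rule-30 elementary CA and a two-round Feistel-style structure; both choices are illustrative and not the authors' exact recursive CA substitution.

```python
import numpy as np

def ca_keystream(seed_bits, length):
    # Evolve a rule-30 elementary CA and sample its centre cell as a bit stream.
    state = np.array(seed_bits, dtype=np.uint8)
    out = np.empty(length, dtype=np.uint8)
    centre = state.size // 2
    for i in range(length):
        left, right = np.roll(state, 1), np.roll(state, -1)
        state = left ^ (state | right)        # rule 30 update
        out[i] = state[centre]
    return out

def encrypt(image, key_bits):
    # Feistel-style mutual encryption: each half seeds the keystream for the other.
    flat = image.ravel().astype(np.uint8)
    a, b = flat[:flat.size // 2].copy(), flat[flat.size // 2:].copy()
    b ^= np.packbits(ca_keystream(np.concatenate([key_bits, a & 1]), b.size * 8))
    a ^= np.packbits(ca_keystream(np.concatenate([key_bits, b & 1]), a.size * 8))
    return np.concatenate([a, b]).reshape(image.shape)

key = np.random.randint(0, 2, 128, dtype=np.uint8)        # toy 128-bit key
cipher = encrypt(np.random.randint(0, 256, (8, 8), dtype=np.uint8), key)
```

Decryption would apply the same two XOR rounds in reverse order with the same key.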
Publication Date: 2012
International Journal of Bio-Inspired Computation (17580374) 4(3), pp. 181-195
The high volume of low-level alerts generated by intrusion detection systems (IDSs) is a serious obstacle to using them effectively. These alerts overwhelm system administrators to the point that they cannot manage and interpret them. Alert correlation is used to reduce the number of alerts and increase their level of abstraction: it selects a group of low-level alerts, maps them to a higher-level attack, and produces a single high-level alert for them. In this paper, a new artificial immune system-based alert correlation system, named AISAC, is presented. It learns the correlation probability between each pair of alert types and uses this knowledge to extract attack scenarios. AISAC does not require intensive domain knowledge or rule-definition effort, nor does the extracted knowledge need to be updated manually. The computational cost of the learning algorithm is linear, and the initial learning is done offline with very limited general data. AISAC is evaluated on the DARPA 2000 dataset and netForensics honeynet data. Results show that, although it uses a relatively simple algorithm, it generates attack graphs with acceptable accuracy. © 2012 Inderscience Enterprises Ltd.
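As a rough illustration of the core idea, pairwise correlation weights learned incrementally and then used to link alerts into an attack graph, here is a hedged Python sketch; the class name, update rule, and threshold are assumptions, not AISAC's actual mechanism.

```python
from collections import defaultdict

class PairwiseCorrelator:
    """Toy correlator: learn a correlation weight per (earlier, later) alert-type pair."""
    def __init__(self, threshold=0.5, lr=0.2):
        self.weight = defaultdict(float)   # (earlier_type, later_type) -> weight in [0, 1]
        self.threshold = threshold
        self.lr = lr
        self.seen = []                     # earlier alerts as (alert_id, alert_type)
        self.edges = []                    # attack-graph edges between correlated alerts

    def learn(self, earlier_type, later_type, correlated):
        key = (earlier_type, later_type)
        self.weight[key] += self.lr * (correlated - self.weight[key])  # incremental update

    def add_alert(self, alert_id, alert_type):
        for prev_id, prev_type in self.seen:
            if self.weight[(prev_type, alert_type)] >= self.threshold:
                self.edges.append((prev_id, alert_id))
        self.seen.append((alert_id, alert_type))

c = PairwiseCorrelator()
for _ in range(10):                        # toy offline training signal
    c.learn("Sadmind_Ping", "Sadmind_Amslverify_Overflow", correlated=1)
c.add_alert(1, "Sadmind_Ping")
c.add_alert(2, "Sadmind_Amslverify_Overflow")
print(c.edges)                             # [(1, 2)]
```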
Bateni, M., Baraani, A., Ghorbani, A.A., Rezaei, A.
Publication Date: 2013
International Journal of Innovative Computing, Information and Control (13494198) 9(1), pp. 231-255
There are many different approaches to alert correlation, such as using correlation rules and prerequisite-consequence relations, using machine learning and statistical methods, and using similarity measures. In this paper, iCorrelator, a new AIS-inspired architecture, is presented. It uses a three-layer architecture inspired by three types of responses in the human immune system: the innate immune system's response, the adaptive immune system's primary response, and the adaptive immune system's secondary response. In comparison with other correlators, iCorrelator does not need information about different attacks and their possible relations in order to discover an attack scenario. It uses a very limited number of general rules that are not tied to any specific attack scenario, and a process of incremental learning is used to handle new attacks. Therefore, iCorrelator is easy to set up and works dynamically without reconfiguration. As a result of using memory cells and an improved alert selection policy, the computational cost of iCorrelator is acceptable even for online correlation. iCorrelator is evaluated using the DARPA 2000 dataset and netForensics honeynet data. The completeness, soundness, false correlation rate, and execution time are reported. Results show that iCorrelator is able to extract attack graphs with accuracy comparable to the best-known solutions. © 2013 ICIC International.
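The three-layer idea can be illustrated with a small decision cascade: fixed general rules first, then remembered decisions, then the trained component. The sketch below is a hedged illustration under those assumptions, not iCorrelator's actual components.

```python
def correlate_pair(alert_a, alert_b, general_rules, memory_cells, learner):
    # Layer 1 ("innate"): a few fixed, attack-agnostic rules decide obvious cases.
    for rule in general_rules:
        verdict = rule(alert_a, alert_b)
        if verdict is not None:
            return verdict
    # "Secondary response": reuse a remembered decision for this alert-type pair.
    key = (alert_a["type"], alert_b["type"])
    if key in memory_cells:
        return memory_cells[key]
    # "Primary response": ask the trained component, then memorise its decision.
    verdict = learner(alert_a, alert_b)
    memory_cells[key] = verdict
    return verdict

# illustrative rule: an alert targeting the source of a later alert is likely correlated
target_then_source = lambda a, b: True if a["dst"] == b["src"] else None
memory = {}
print(correlate_pair({"type": "scan", "src": "10.0.0.9", "dst": "10.0.0.1"},
                     {"type": "exploit", "src": "10.0.0.1", "dst": "10.0.0.2"},
                     [target_then_source], memory, lambda a, b: False))
```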
Publication Date: 2013
International Journal of Network Security (discontinued) (1816353X) 15(3), pp. 160-174
One of the most important challenges facing intrusion detection systems (IDSs) is the huge number of generated alerts. A system administrator is overwhelmed by these alerts to the point that she/he cannot manage and use them. The best-known solution is to correlate low-level alerts into a higher-level attack and then produce a high-level alert for them. In this paper, a new automated alert correlation approach is presented. It employs fuzzy logic and an Artificial Immune System (AIS) to discover and learn the degree of correlation between two alerts and uses this knowledge to extract attack scenarios. The proposed system does not need vast domain knowledge or rule-definition effort. To correlate each new alert with previous alerts, the system first tries to find the correlation probability based on its fuzzy rules. Then, if no rule matches with the required matching threshold, it uses the AIRS algorithm. The system is evaluated using the DARPA 2000 dataset and netForensics honeynet data. The completeness, soundness, and false alert rate are calculated. The average completeness for LLDoS1.0 and LLDoS2.0 is 0.957 and 0.745, respectively. The system generates attack graphs with acceptable accuracy, and the computational complexity of the probability assignment algorithm is linear.
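A hedged Python sketch of the two-stage decision described above: fuzzy rules are evaluated first, and an AIRS-style fallback is consulted only when no rule fires above the matching threshold. The membership functions, rule set, and threshold are illustrative assumptions.

```python
def time_closeness(dt_seconds):
    # Fuzzy membership: 1.0 for simultaneous alerts, fading to 0 after one hour.
    return max(0.0, 1.0 - dt_seconds / 3600.0)

def address_overlap(a, b):
    # Fuzzy membership: fraction of shared endpoints between the two alerts.
    shared = len({a["src"], a["dst"]} & {b["src"], b["dst"]})
    return shared / 2.0

def correlation_probability(a, b, airs_fallback, match_threshold=0.6):
    # Fire the fuzzy rules (min acts as fuzzy AND) and keep the strongest one.
    strength = max(
        min(time_closeness(abs(a["time"] - b["time"])), address_overlap(a, b)),
        0.8 * address_overlap(a, b),                  # weaker, address-only rule
    )
    if strength >= match_threshold:
        return strength
    return airs_fallback(a, b)                        # consulted only when no rule matches

prob = correlation_probability(
    {"src": "10.0.0.1", "dst": "10.0.0.2", "time": 100},
    {"src": "10.0.0.2", "dst": "10.0.0.3", "time": 160},
    airs_fallback=lambda a, b: 0.5,                   # stand-in for the learned component
)
```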
Publication Date: 2015
Malaysian Journal Of Computer Science (01279084) 28(1), pp. 46-58
Image reconstruction is an important part of computed tomography imaging systems; it converts the measured data into images. Because of the high computational cost and slow convergence of iterative reconstruction algorithms, these methods are not widely used in practice. In this paper, we propose a hybrid iterative algorithm that combines the multigrid method, Tikhonov regularization, and the Simultaneous Iterative Reconstruction Technique (SIRT) for reconstructing computed tomography images, reducing this drawback. We reduce the time and the volume of computation considerably by finding a stable and appropriate starting point. The experimental results indicate that the proposed iterative algorithm converges more rapidly and reconstructs high-quality images in shorter computational time than the classical methods.
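The following Python sketch shows a plain SIRT iteration with a Tikhonov-style damping term; the multigrid component, which the paper uses to obtain a good starting point, is represented here only by the initial guess x0. The toy system, damping factor, and iteration count are illustrative.

```python
import numpy as np

def sirt(A, b, x0, iterations=200, lam=0.01):
    # SIRT weights: divide by row sums on the projection side, column sums on the image side.
    row_sums = A.sum(axis=1); row_sums[row_sums == 0] = 1.0
    col_sums = A.sum(axis=0); col_sums[col_sums == 0] = 1.0
    x = x0.astype(float).copy()
    for _ in range(iterations):
        residual = b - A @ x
        # SIRT correction plus a Tikhonov-style shrinkage term.
        x = x + (A.T @ (residual / row_sums)) / col_sums - lam * x
    return x

# toy system: 3 rays through a 4-pixel image
A = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.]])
x_true = np.array([1., 2., 3., 4.])
x_rec = sirt(A, A @ x_true, x0=np.zeros(4))   # a better x0 is what the multigrid step would supply
```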
Publication Date: 2016
Scalable Computing (18951767) 17(4), pp. 331-349
Nested loops are one of the most time-consuming parts and the largest sources of parallelism in many scientific applications. In this paper, we address the problem of 3-dimensional tiling and scheduling of three-level perfectly nested loops with dependencies on heterogeneous systems. To exploit the parallelism, we tile and schedule nested loops with dependencies while accounting for the computational power of the processing nodes, and execute them in pipeline mode. The tile size plays an important role in improving the parallel execution time of nested loops. We develop and evaluate a theoretical model to estimate the parallel execution time of tiled nested loops. We also propose a tiling genetic algorithm that uses the proposed model to find a near-optimal tile size, minimizing the parallel execution time of nested loops with dependencies. We demonstrate the accuracy of the theoretical model and the effectiveness of the proposed tiling genetic algorithm through several experiments on heterogeneous systems. The 3D tiling reduces the parallel execution time by a factor of 1.2× to 2× over 2D tiling when parallelizing the 3D heat equation as a benchmark. © 2016 SCPE.
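As an illustration of the search procedure, the sketch below runs a small genetic algorithm over candidate 3D tile sizes and ranks them with a placeholder cost model; the model, population size, and mutation rate are assumptions, not the paper's theoretical execution-time model.

```python
import random

def modeled_time(tile, loop=(512, 512, 512), comm_cost=5.0, fill_cost=50.0):
    # Placeholder cost: total compute + per-tile halo communication + a pipeline
    # fill penalty that grows with the tile volume.
    tx, ty, tz = tile
    n_tiles = 1
    for n, t in zip(loop, tile):
        n_tiles *= -(-n // t)                                  # ceil(n / t)
    compute = loop[0] * loop[1] * loop[2]
    comm = comm_cost * n_tiles * (tx * ty + ty * tz + tx * tz)
    fill = fill_cost * tx * ty * tz
    return (compute + comm + fill) / 1e6

def genetic_tile_search(generations=60, pop_size=20, max_tile=128):
    pop = [tuple(random.randint(1, max_tile) for _ in range(3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=modeled_time)
        parents = pop[:pop_size // 2]                          # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = tuple(random.choice(p) for p in zip(a, b)) # uniform crossover
            if random.random() < 0.3:                          # mutation
                i = random.randrange(3)
                child = child[:i] + (random.randint(1, max_tile),) + child[i + 1:]
            children.append(child)
        pop = parents + children
    return min(pop, key=modeled_time)

best_tile = genetic_tile_search()
```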
Publication Date: 2017
Concurrency and Computation: Practice and Experience (15320626) 29(5)
Nested loops are the largest source of parallelism in many data-parallel scientific applications, and heterogeneous distributed systems are popular computing platforms for such applications. Data partitioning is critical in exploiting the computational power of these systems: existing data partitioning algorithms try to maximize the performance of data-parallel applications by finding a data distribution that balances the workload between the processing nodes while minimizing communication costs. This paper addresses the problem of 3-dimensional data partitioning for 3-level perfectly nested loops on heterogeneous distributed systems. The primary aim is to minimize the execution time by improving load balancing and minimizing internode communication. We propose a new data partitioning algorithm using dynamic programming, build a theoretical model to estimate the execution time of each partition, and select the partition with minimum estimated execution time as a near-optimal solution. We demonstrate the effectiveness of the new algorithm for two data-parallel scientific applications on heterogeneous distributed systems. The new algorithm reduces the execution time by between 7% and 17%, on average, compared with leading data partitioning methods on three heterogeneous distributed systems. Copyright © 2016 John Wiley & Sons, Ltd.
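A hedged one-dimensional sketch of the idea follows: contiguous blocks of iterations are assigned to nodes of different speeds, and dynamic programming chooses the block boundaries that minimize the modeled makespan. The actual algorithm partitions three dimensions and uses the paper's execution-time model; the cost terms below are placeholders.

```python
import functools

def partition(N, speeds, comm=2.0):
    # speeds[p] = relative compute speed of node p; comm = fixed per-node comm cost.
    P = len(speeds)

    @functools.lru_cache(maxsize=None)
    def best(start, p):
        # Minimal makespan for iterations [start, N) spread over nodes p .. P-1.
        if p == P - 1:                                     # last node takes the rest
            return (N - start) / speeds[p] + comm, (N,)
        best_cost, best_cuts = float("inf"), None
        for cut in range(start, N + 1):                    # give [start, cut) to node p
            mine = (cut - start) / speeds[p] + comm
            rest_cost, rest_cuts = best(cut, p + 1)
            cost = max(mine, rest_cost)                    # makespan = slowest node
            if cost < best_cost:
                best_cost, best_cuts = cost, (cut,) + rest_cuts
        return best_cost, best_cuts

    return best(0, 0)

makespan, boundaries = partition(64, speeds=[1.0, 2.0, 4.0])
```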
Publication Date: 2017
2018, pp. 1-6
Modern GPUs employ simultaneous kernel execution (SKE), the equivalent of multitasking on CPUs, to maximize hardware utilization and enhance the resulting performance. The SKE paradigm is not yet fully explored by the research community. In this study, a reuse-distance (RD) based analysis approach, called SKERD, is proposed to analyze the effect of SKE scenarios on kernel data reuse and GPU cache memory performance. Only two simultaneous kernels were considered in this work. Moreover, three types of coarse-grained SM (streaming multiprocessor) partitioning schemes were investigated: an even SM-to-kernel partitioning and two schemes that assign SMs to kernels based on the kernel workloads. The simulation results show that none of these partitioning schemes always performs better than the others. Further, for some memory-intensive kernels, SKE resulted in cache contention and hit-ratio degradation. Consequently, the effects of SKE on cache memories should be carefully considered. © 2017 IEEE.
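For readers unfamiliar with the metric, the short sketch below computes reuse distances for a toy address trace and shows how interleaving two kernels' traces, as happens when they share a cache under SKE, stretches each kernel's reuse distances; it illustrates the metric only, not SKERD itself.

```python
def reuse_distances(trace):
    # Reuse distance of an access = number of distinct addresses touched since the
    # previous access to the same address; None marks a cold (first) access.
    last_seen = {}
    distances = []
    for i, addr in enumerate(trace):
        if addr in last_seen:
            distances.append(len(set(trace[last_seen[addr] + 1:i])))
        else:
            distances.append(None)
        last_seen[addr] = i
    return distances

kernel_a = ["A0", "A1", "A0", "A1"]                 # tight reuse in isolation
kernel_b = ["B0", "B1", "B2", "B3"]                 # streaming, little reuse
interleaved = [x for pair in zip(kernel_a, kernel_b) for x in pair]
print(reuse_distances(kernel_a))                    # [None, None, 1, 1]
print(reuse_distances(interleaved))                 # kernel A's reuse distances grow to 3
```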
Publication Date: 2017
2017, pp. 260-265
Performance modeling plays an important role in optimal hardware design and optimized application implementation. This paper presents a very low overhead performance model, called VLAG, to approximate the data localities exploited by GPU kernels. VLAG uses source-code-level information to estimate per-memory-access-instruction, per-data-array, and per-kernel localities within GPU kernels. VLAG is only applicable to kernels with regular memory access patterns. VLAG was experimentally evaluated using an NVIDIA Maxwell GPU. For two different matrix multiplication kernels, average errors of 7.68% and 6.29% were obtained, respectively. The slowdown of VLAG for matrix multiplication was measured at 1.4X, which, compared with other approaches such as trace-driven simulation, is negligible. © 2017 IEEE.
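A hedged sketch of the general idea behind source-level locality estimation: for a strided access stream, locality can be approximated as one minus the ratio of distinct cache lines touched to total accesses. The formula and parameters below are illustrative, not VLAG's actual equations.

```python
import math

def estimated_locality(num_accesses, stride_elems, elem_bytes=4, line_bytes=128):
    # Locality of a strided access stream: 1 - (distinct cache lines / accesses).
    step = stride_elems * elem_bytes
    if step >= line_bytes:
        distinct_lines = num_accesses              # every access opens a new line
    else:
        distinct_lines = math.ceil(num_accesses * step / line_bytes)
    return 1.0 - distinct_lines / num_accesses

# coalesced float loads (stride 1) vs. column-wise strided loads in a matrix kernel
print(estimated_locality(1024, stride_elems=1))    # high locality, ~0.97
print(estimated_locality(1024, stride_elems=64))   # 256 B stride >= line size: locality 0.0
```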
Publication Date: 2019
International Journal of Computer Network and Information Security (20749104) 11(9), pp. 9-17
Malware poses one of the most serious threats to computer information systems. Current malware detection technology has several inherent constraints. Because the signature-based techniques embedded in commercial antiviruses are not capable of detecting new and obfuscated malware, machine learning algorithms are applied to identify patterns of malware behavior through features extracted from programs. Here, a method is presented for detecting malware based on features extracted from the PE header and section table of PE files. Packed files are detected and then unpacked. The PE file features are extracted, and static features are selected from the PE header and section tables through a forward selection method. The files are classified into malware and clean files using different classification methods. The best results are obtained with a decision tree (DT) classifier: on a dataset of 971 executable files, comprising 761 malware and 210 clean files, the experiments yield an accuracy of 98.26%. © 2019 MECS.
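An end-to-end sketch of the described pipeline is shown below using scikit-learn, with synthetic feature values standing in for real PE-header and section-table fields; the feature matrix, label rule, and library choice are assumptions, not the paper's toolchain.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# columns mimic PE-header / section-table fields (SizeOfCode, NumberOfSections, ...)
X = rng.integers(0, 1000, size=(400, 8)).astype(float)
y = (X[:, 0] + X[:, 3] > 1000).astype(int)            # toy "malware" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# forward feature selection wrapped around a decision tree
selector = SequentialFeatureSelector(
    DecisionTreeClassifier(random_state=0),
    n_features_to_select=4, direction="forward",
)
selector.fit(X_tr, y_tr)

clf = DecisionTreeClassifier(random_state=0)
clf.fit(selector.transform(X_tr), y_tr)
print("accuracy:", clf.score(selector.transform(X_te), y_te))
```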
University of Isfahan
Address: Isfahan, Azadi Square, University of Isfahan