Sorted by: publication year (descending)
Digital Communications and Networks (23528648) 11(2)pp. 574-586
In mobile computing environments, most IoT devices connected to networks experience variable error rates and possess limited bandwidth. The conventional approach of retransmitting information lost during transmission, commonly used in data transmission protocols, increases transmission delay and consumes excessive bandwidth. To overcome this issue, forward error correction techniques, e.g., Random Linear Network Coding (RLNC), can be used in data transmission. The primary challenge in RLNC-based methodologies is that they sustain a constant coding ratio throughout data transmission, leading to notable bandwidth usage and transmission delay under dynamic network conditions. Therefore, this study proposes a new block-based RLNC strategy, Adjustable RLNC (ARLNC), which dynamically adjusts the coding ratio and transmission window at runtime based on the network error rate estimated from receiver feedback. The calculations in this approach are performed over a Galois field of order 256. Furthermore, we assessed ARLNC's performance under various error models, such as Gilbert-Elliott, exponential, and constant-rate models, and compared it with standard RLNC. The results show that dynamically adjusting the coding ratio and transmission window size based on network conditions significantly enhances network throughput and reduces total transmission delay in most scenarios. In contrast to the conventional RLNC method with its fixed coding ratio, the presented approach achieves a 73% decrease in transmission delay and a fourfold increase in throughput. However, ARLNC generally incurs higher computational costs than standard RLNC, though it excels in high-performance networks. © 2024 Chongqing University of Posts and Telecommunications
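As background for the coding operations described above, here is a minimal sketch of block-based RLNC encoding over GF(256), the field order the paper uses. The AES reduction polynomial, the function names, and the packet layout are illustrative assumptions, not the paper's ARLNC implementation.

```python
def gf256_mul(a, b):
    """Carry-less multiply in GF(2^8), reduced by the AES polynomial
    x^8 + x^4 + x^3 + x + 1 (an assumed, common choice of field)."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B
        b >>= 1
    return p

def rlnc_encode(packets, coeffs):
    """Combine k equal-size source packets (lists of bytes) into one
    coded packet: coded = sum_i c_i * p_i over GF(256)."""
    size = len(packets[0])
    coded = [0] * size
    for c, pkt in zip(coeffs, packets):
        for j in range(size):
            coded[j] ^= gf256_mul(c, pkt[j])
    return coded
```

A receiver that collects k coded packets with linearly independent coefficient vectors can recover the originals by Gaussian elimination over the same field; ARLNC's contribution is choosing how many such coded packets to emit per block as the error estimate changes.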
Journal of Engineering Mathematics (15732703) 151(1)
This paper introduces a novel method for calculating the inverse Z-transform of rational functions. Unlike some existing approaches that rely on partial fraction expansion and involve dividing by z, the proposed method allows the direct computation of the inverse Z-transform without such division. Furthermore, this method expands the rational functions over the real numbers instead of the complex numbers; hence, it needs no algebraic manipulation to obtain a real-valued answer. It also aligns our method more closely with established techniques used in integral, Laplace, and Fourier transforms, and it can lead to fewer calculations in some cases. © The Author(s), under exclusive licence to Springer Nature B.V. 2025.
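For a causal rational X(z), the inverse Z-transform can always be cross-checked numerically: it equals the impulse response of the corresponding difference equation. The sketch below is only such a numeric check, not the symbolic method the paper proposes.

```python
def inverse_z_terms(b, a, n_terms):
    """First n_terms of x[n] for X(z) = B(z)/A(z), with B and A given as
    coefficient lists in powers of z^-1, assuming a causal sequence and
    a[0] != 0. Direct recursion on the difference equation."""
    x = []
    for n in range(n_terms):
        acc = b[n] if n < len(b) else 0.0
        for k in range(1, len(a)):
            if n - k >= 0:
                acc -= a[k] * x[n - k]
        x.append(acc / a[0])
    return x

# example: X(z) = 1 / (1 - 0.5 z^-1)  ->  x[n] = 0.5^n
terms = inverse_z_terms([1], [1, -0.5], 4)
```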
Computing (14365057) 107(6)
With the advancement of the Internet of Things (IoT) and the changing needs of edge computing applications within the TCP/IP architecture, several challenges have emerged. One solution to these challenges is to integrate edge computing with information-centric networking (ICN). In ICN-based edge computing, requests show a high level of similarity due to the proximity of users, which can be leveraged to improve the efficiency of computation reuse. Computation reuse occurs through naming, caching, and forwarding; reuse through forwarding means that similar requests are directed to the same compute node (CN). Many past forwarding algorithms for computation reuse either incurred high overhead for resource discovery or ignored the important criterion of assessing the capacity of CNs. In this paper, we propose two forwarding algorithms, TLCF (Trade-off between Load balancing and Computation reuse Forwarding) and AFCT (Adaptive Forwarding Considering a Capacity Threshold), which select the best CN by weighing the trade-off between computation reuse and load balancing (computation reuse inherently disrupts load balancing) while also considering capacity. Together, these two aspects reduce completion time. The evaluation was conducted using the ndnSIM simulator. Through simulations, we show that our method significantly reduces completion time compared to the default method, achieving an improvement of approximately 22%. These findings highlight the efficiency and potential of our proposed method in optimizing edge computing performance. © The Author(s), under exclusive licence to Springer-Verlag GmbH Austria, part of Springer Nature 2025.
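The trade-off the two algorithms navigate can be illustrated with a toy scoring rule; `alpha` and the capacity threshold are assumed knobs for the sketch, and this is not the actual TLCF/AFCT logic.

```python
def select_cn(nodes, alpha=0.5, capacity_threshold=0.9):
    """Pick the compute node maximizing a reuse/load trade-off score.
    Each node is a dict with 'reuse' (expected reuse probability, 0..1)
    and 'load' (utilization, 0..1). Nodes above the capacity threshold
    are excluded; returns None if every node is saturated."""
    eligible = [n for n in nodes if n['load'] <= capacity_threshold]
    if not eligible:
        return None
    # alpha -> 1 favors computation reuse; alpha -> 0 favors load balancing
    return max(eligible, key=lambda n: alpha * n['reuse'] - (1 - alpha) * n['load'])
```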
Journal of Supercomputing (15730484) 80(9)pp. 12273-12296
One of the most important strategies used to mitigate the adverse impacts of traffic growth on mobile networks is caching. By caching at the edge, the backhaul traffic load is reduced, and the quality of service for the user is increased. Developing an effective caching algorithm requires accurate prediction of the future popularity of the content, which is a challenging issue. In recent years, deep learning models have achieved high predictive accuracy due to advancements in data availability and increased computing power. In this paper, we present a caching algorithm called the user preference-aware content caching algorithm (UPACA). This algorithm is specifically designed for an edge content delivery platform where users can access content services provided by a remote content provider. UPACA operates in two steps. In the first step, the proposed collaborative filtering-based popularity prediction algorithm (CFPA) is used to predict future content popularities. CFPA utilizes a gated residual variational autoencoder collaborative filtering model to predict users’ future preferences and calculate the future popularity of content. This algorithm considers the popularity of the content as well as the number and timing of content requests. Experimental results demonstrate that UPACA outperforms previous methods in terms of cache hit rates and user utilities. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
IEEE Internet of Things Journal (23274662) 11(7)pp. 12815-12822
Edge-Cloud Computing Industrial Internet of Things (ECIIoT) combines edge and cloud nodes with Industrial Internet of Things (IIoT) devices to deliver service function chains (SFCs). Service function chain placement refers to running a series of virtual network functions (VNFs) at edge or cloud nodes in the form of software instances. In the ECIIoT service embedding problem, multiple VNFs must be placed for IIoT devices, and deciding where to place these virtual functions, at cloud or edge nodes, so as to minimize delay is challenging. In this article, the placement of virtual functions across both edge and cloud nodes is proposed. In our model, the cloud server together with the edge nodes can run the functions required by the IIoT devices' SFCs to decrease the imposed delay and use computation resources efficiently. This is formulated as an optimization problem that minimizes delay and residual computing resource consumption while reusing previously placed functions. Since an exact solution is not available in polynomial time, an efficient approximation algorithm is proposed that solves the problem in three stages. First, it linearizes the nonlinear objective function and constraint, approximating them via the convexity of these functions. Then it solves the relaxed linear problem, and finally it rounds the decision variables heuristically. This solution not only has polynomial-time computational complexity but also obtains near-optimal solutions. The simulation results confirm the effectiveness of this approach. © 2014 IEEE.
Computing (14365057) 106(9)pp. 2949-2969
In edge computing, repetitive computations are a common occurrence. However, the traditional TCP/IP architecture used in edge computing fails to identify these repetitions, so redundant computations are recomputed by edge resources. To address this issue and enhance the efficiency of edge computing, Information-Centric Networking (ICN)-based edge computing is employed. The ICN architecture leverages its forwarding and naming features to recognize repetitive computations and direct them to the appropriate edge resources, thereby promoting "computation reuse". This approach significantly improves the overall effectiveness of edge computing. In the realm of edge computing, dynamically generated computations often experience prolonged response times. To establish and track connections between input requests and the edge, naming conventions become crucial. By incorporating unique IDs within these naming conventions, each computing request with identical input data is treated as distinct, rendering ICN's aggregation feature unusable. In this study, we propose a novel approach that modifies the Content Store (CS) table, treating computing requests that carry the same input data but different unique IDs, and hence produce identical outcomes, as equivalent. The benefits of this approach include reduced distance and completion time and an increased hit ratio, as duplicate computations are no longer routed to edge resources but are served from the cache. Through simulations, we demonstrate that our method significantly enhances cache reuse compared to the default method with no reuse, achieving an average improvement of over 57%. Furthermore, the speed-up amounts to 15%. Notably, our method surpasses previous approaches by exhibiting the lowest average completion time, particularly when dealing with lower request frequencies. These findings highlight the efficacy and potential of our proposed method in optimizing edge computing performance.
© The Author(s), under exclusive licence to Springer-Verlag GmbH Austria, part of Springer Nature 2024.
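The idea of treating requests that differ only in their unique ID as equivalent can be sketched with a small reuse table; the naming convention assumed here (the unique ID as the last name component) is purely for illustration.

```python
class ReuseStore:
    """A Content-Store-like table keyed on the request name with its
    unique-ID component stripped, so that /compute/sum/input42/id-7 and
    /compute/sum/input42/id-9 map to the same cached result."""
    def __init__(self):
        self.table = {}
        self.hits = 0

    @staticmethod
    def normalize(name):
        # assumed convention: the last name component carries the unique ID
        return '/'.join(name.strip('/').split('/')[:-1])

    def get_or_compute(self, name, compute):
        key = self.normalize(name)
        if key in self.table:
            self.hits += 1          # duplicate computation served from cache
            return self.table[key]
        result = compute()          # only genuinely new inputs reach the edge
        self.table[key] = result
        return result
```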
Journal Of Engineering Research (23071877)
This article presents a comprehensive exploration of the architecture and various approaches in the domain of cloud computing and software-defined networks. The salient points addressed in this article encompass:
- Foundational concepts: an overview of the foundational concepts and technologies of cloud computing, including software-defined cloud computing.
- Algorithm evaluation: an introduction and evaluation of various algorithms aimed at enhancing network performance, including Intelligent Rule-Based Metaheuristic Task Scheduling (IRMTS), reinforcement learning algorithms, task scheduling algorithms, and Priority-aware Semi-Greedy (PSG). Each of these algorithms contributes uniquely to optimizing Quality of Service (QoS) and data center efficiency.
- Resource optimization: an examination of cloud network resource optimization based on presented results and practical experiments, including a comparison of the performance of different algorithms and approaches.
- Future challenges: an investigation of challenges and future scenarios in the realm of cloud computing and software-defined networks.
In conclusion, by introducing and analyzing simulators such as Mininet and CloudSim, the article guides the reader in choosing the most suitable simulation tool for their project. Through its comprehensive analysis of the architecture, methodologies, and prevalent algorithms in cloud computing and software-defined networking, this article helps the reader achieve a deeper understanding of the domain. Additionally, by presenting the findings and results of the conducted research, it facilitates the discovery of the most effective and practical solutions for optimizing cloud network resources. © 2024 The Authors
Engineering Applications of Artificial Intelligence (09521976) 136
The integration of cyber-physical systems and artificial-intelligence-based human activity recognition (HAR) applications enables intelligent interactions within a physical environment. In real-world HAR applications, the domain shift between training (source) and testing (target) images captured in different scenarios leads to low classification accuracy. Existing unsupervised domain adaptation (UDA) methods often require some labeled target samples for model adaptation, which limits their practicality. This study proposes a novel unsupervised deep domain adaptation algorithm (UDDAA) for HAR using recurrent neural networks. UDDAA introduces a maximum mean class discrepancy (MMCD) metric, accounting for both inter-class and intra-class differences within each domain. MMCD extends the maximum mean discrepancy to measure the class-level distribution discrepancy across source and target domains, aligning these distributions to enhance domain adaptation performance. Without relying on labeled target data, UDDAA predicts pseudo-labels for the unlabeled target dataset, combining these with labeled source data to train the model on domain-invariant representations. This approach makes UDDAA highly practical for scenarios where labeled target data are difficult or expensive to obtain, enabling human-computer interaction (HCI) systems to function effectively across varied environments and user behaviors. Extensive experiments on benchmark datasets demonstrate UDDAA's superior classification accuracy over existing baselines. Notably, UDDAA achieved 92% and 99% accuracy for University of Central Florida database (UCF) to Human Motion Database (HMDB) and HMDB to UCF transfers, respectively. Additionally, on personal recorded videos with complex backgrounds, it achieved high classification accuracies of 95% for basketball and 90% for football activities, underscoring its generalization ability, robustness, and effectiveness. © 2024
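MMCD builds on the maximum mean discrepancy (MMD). As background only, a plain biased MMD estimate with an RBF kernel on 1-D samples looks like this; the class-level extension described in the paper is not shown.

```python
import math

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel on scalars."""
    return math.exp(-gamma * (x - y) ** 2)

def mmd2(xs, ys, gamma=1.0):
    """Squared maximum mean discrepancy between two 1-D samples
    (biased V-statistic estimate): E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]."""
    kxx = sum(rbf(a, b, gamma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(rbf(a, b, gamma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(rbf(a, b, gamma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy
```

Minimizing such a discrepancy between source and target feature distributions (here per class, in MMCD's case) is what aligns the two domains during training.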
Computer Networks (13891286) 240
The escalating growth of content-dependent services and applications within the Internet of Things (IoT) platform has led to a surge in traffic, necessitating real-time data processing. Content caching has emerged as an effective solution to counteract this traffic upswing. Caching not only improves network efficiency but also enhances user service quality. Critical to the development of an optimal caching algorithm is the accurate prediction of future content popularity. This prediction hinges on the ability to anticipate users' content preferences, which is a pivotal method for assessing content popularity. In this study, we introduce a novel caching strategy termed User Preference-aware content Caching Strategy (UPCS) tailored for an IoT platform, where users access multimedia services offered by remote Content Providers (CPs). The UPCS encompasses three key algorithms: a content popularity prediction algorithm that utilizes Variational Autoencoders (VAE) to forecast users' future content preferences based on their prior requests, an online algorithm for dynamic cached content replacement, and a cooperative caching algorithm to augment caching efficiency. The proposed content caching strategy outperforms alternative methods, exhibiting superior cache hit rates and reduced Content Retrieval Delays (CRD). © 2023 Elsevier B.V.
Computer Networks (13891286) 224
The traditional method of saving energy in Virtual Machine Placement (VMP) is to consolidate more virtual machines (VMs) on fewer servers and put the rest in sleep mode, which may lead to server overheating and, in turn, performance degradation and higher cooling cost. The lack of an accurate and computationally efficient model of the thermal conditions of the data center environment makes it challenging to develop an effective and adaptive VMP mechanism. Although data-driven approaches have recently been successful in model construction, the shortage of clean and sufficient amounts of data limits their generalizability. Moreover, any change in the data center configuration during operation makes these models prone to error and forces them to repeat the learning process. Thus, researchers turn to model-free paradigms such as reinforcement learning. Due to the vast state-action space of real-world applications, scalability is one of the significant challenges in this area. In addition, the delayed feedback of environmental variables such as temperature gives rise to exploration costs. In this paper, we present a decentralized implementation of reinforcement learning along with a novel state-action representation to perform VMP in data centers, optimizing energy consumption and keeping host temperatures as low as possible while satisfying Service Level Agreements (SLA). Our experimental results show more than a 17% improvement in energy consumption and a 12% reduction in CPU temperature compared to baseline algorithms. We also succeeded in accelerating convergence to the optimal policy after a configuration change. © 2023 Elsevier B.V.
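The model-free paradigm mentioned above rests on the tabular Q-learning update. A minimal sketch follows, where the state/action encoding and the reward shaping (e.g., negative energy plus an SLA penalty) are illustrative assumptions rather than the paper's decentralized representation.

```python
def q_update(q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    q is a dict keyed by (state, action); missing entries default to 0.
    For VMP, a state might encode host loads/temperatures and the action
    a candidate host for the VM (an assumption for this sketch)."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q[(state, action)]
```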
Neurocomputing (09252312) 515pp. 107-120
In this paper, a low-cost method for input size reduction without sacrificing accuracy is proposed, which reduces the computation resources required for both training and inference of the deep convolutional neural network (DCNN) used in the steering control of self-driving cars. Efficient processing of DCNNs is becoming a prominent challenge due to their huge computation cost and number of parameters, together with the inadequate computation resources of power-efficient hardware devices; the proposed method alleviates this problem relative to the state of the art. The proposed method introduces the feature density metric (FDM) as a criterion to mask and filter out regions of the input image that do not contain an adequate amount of features. This filtering prevents the DCNN from performing useless calculations on feature-free regions. Compared to PilotNet, the proposed method accelerates the overall training and inference phases of end-to-end (ETE) deep steering control of self-driving cars by up to 1.3× and 2.0×, respectively. © 2022 Elsevier B.V.
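The masking step can be illustrated as follows, using the mean absolute horizontal gradient as a stand-in for the paper's feature density metric; the block size and threshold are assumed parameters.

```python
def mask_low_feature_blocks(image, block, threshold):
    """Zero out block x block regions whose feature density, here taken
    as the mean absolute horizontal gradient, falls below the threshold.
    `image` is a list of equal-length rows of numbers."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            grads = [abs(image[y][x + 1] - image[y][x])
                     for y in range(by, min(by + block, h))
                     for x in range(bx, min(bx + block, w) - 1)]
            density = sum(grads) / len(grads) if grads else 0.0
            if density < threshold:          # feature-free region: mask it
                for y in range(by, min(by + block, h)):
                    for x in range(bx, min(bx + block, w)):
                        out[y][x] = 0
    return out
```

Masked regions can then be cropped or skipped so the DCNN never spends compute on them.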
Cluster Computing (13867857) 25(2)pp. 1015-1033
The remarkable growth of cloud computing applications has caused many data centers to encounter unprecedented power consumption and heat generation. Cloud providers share their computational infrastructure through virtualization technology. The scheduler component decides which physical machine hosts the requested virtual machine. This process, virtual machine placement (VMP), affects the power distribution and thereby the energy consumption of the data centers. Due to the heterogeneity and multidimensionality of resources, this task is not trivial, and many studies have tried to address this problem using different methods. However, the majority of such studies fail to consider the cooling energy, which accounts for almost 30% of the energy consumption in a data center. In this paper, we propose a metaheuristic approach based on the binary version of the gravitational search algorithm to simultaneously minimize the computational and cooling energy in the VMP problem. In addition, we suggest a self-adaptive mechanism based on fuzzy logic to control the behavior of the algorithm in terms of exploitation and exploration. The simulation results illustrate that the proposed algorithm reduced energy consumption by 26% on the PlanetLab dataset and 30% on the Google cluster dataset relative to the average of the compared algorithms. The results also indicate that the proposed algorithm provides a much more thermally reliable operation. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
Digital Communications and Networks (23528648) 8(6)pp. 1085-1093
Accurately estimating the Retransmission TimeOut (RTO) in Content-Centric Networking (CCN) is crucial for efficient rate control in end nodes and effective interface ranking in intermediate routers. Toward this end, the Jacobson algorithm, an Exponentially Weighted Moving Average (EWMA) over the Round Trip Times (RTT) of previous packets, is a promising scheme. Assigning the lower bound of the RTO, determining how rapidly the EWMA adapts to changes, and setting the multiplier of the RTT variance have the greatest impact on the accuracy of this estimator, and several evaluations have been performed to set them in Transmission Control Protocol/Internet Protocol (TCP/IP) networks. However, the performance of this estimator in CCN has not been explored yet, despite CCN's significant architectural differences from TCP/IP networks. In this study, two new metrics for assessing the performance of RTO estimators in CCN are defined, and the performance of the Jacobson algorithm in CCN is evaluated. This evaluation is performed by varying the minimum RTO, the EWMA parameters, and the multiplier of the RTT variance against different content popularity distribution gains. The obtained results are used to reconsider the Jacobson algorithm for accurately estimating the RTO in CCN. Comparing the performance of the reconsidered Jacobson estimator with existing solutions shows that it estimates the RTO simply and more accurately, without any additional information or computation overhead. © 2022 Chongqing University of Posts and Telecommunications
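The Jacobson estimator that the study re-tunes is standard; a sketch of it follows, with the minimum RTO, the EWMA gains, and the variance multiplier exposed as exactly the knobs the evaluation varies (the default values shown are the classic TCP ones, not the CCN-tuned ones).

```python
class JacobsonRto:
    """Classic Jacobson/Karels RTO estimator: an EWMA of the RTT (SRTT)
    plus a multiple of the smoothed deviation (RTTVAR), clamped below."""
    def __init__(self, alpha=0.125, beta=0.25, k=4.0, min_rto=0.2):
        self.alpha, self.beta, self.k, self.min_rto = alpha, beta, k, min_rto
        self.srtt = None
        self.rttvar = None

    def sample(self, rtt):
        """Feed one RTT measurement (seconds); returns the new RTO."""
        if self.srtt is None:                       # first measurement
            self.srtt = rtt
            self.rttvar = rtt / 2.0
        else:
            self.rttvar = (1 - self.beta) * self.rttvar \
                          + self.beta * abs(self.srtt - rtt)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt
        return max(self.min_rto, self.srtt + self.k * self.rttvar)
```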
Amirkabir Journal of Mechanical Engineering (20086032) 53(4 Special Issue)pp. 577-580
In this paper, a deep neural controller is evaluated in the self-driving car application, one of the most important and critical human-in-the-loop cyber-physical systems. To this aim, the modern controller is compared with two classic controllers, proportional-integral-derivative and model predictive control, on both quantitative and qualitative parameters. The parameters reflect three main challenges: (i) design-time challenges, such as dependency on the model and design parameters; (ii) implementation challenges, including ease of implementation and computation workload; and (iii) run-time challenges and parameters covering performance in terms of speed, accuracy, control cost and effort, kinematic energy, and vehicle depreciation. The main objective of our work is to present a comparison and concrete metrics for designers to weigh modern against traditional controllers. A framework for design, implementation, and evaluation is presented. An end-to-end controller, consisting of six convolution layers and four fully connected layers, is evaluated as the modern controller. The controller learns human driving behaviors and is used to drive the vehicle autonomously. Our results show that, in addition to its main advantages of being model-free and trainable, the controller exhibits acceptable performance on the important metrics in comparison with the proportional-integral-derivative and model predictive controllers. © 2021, Amirkabir University of Technology. All rights reserved.
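Of the two classic baselines, PID is compact enough to sketch; the gains and the time step below are illustrative, and the error signal would be, e.g., the lateral deviation from the lane center.

```python
class Pid:
    """Textbook discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, error):
        """One control step; returns the actuation (e.g., steering angle)."""
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None \
                else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

Unlike the end-to-end network, this controller needs a hand-crafted error signal and tuned gains, which is precisely the design-time dependency the comparison highlights.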
Industrial control systems (ICS) are applied in many critical infrastructures. Reducing reconfiguration time after a hazard improves safety, so it is one of the most important objectives in these systems. Hazards can be due to system failure or to cyber-attacks. One procedure that can reduce the reconfiguration time is determining, as soon as possible, which of these factors caused the hazard. Differentiating an attack from a failure is not possible without redundant data in addition to the data from the system's own sensors. With the advent of the IoT in industry (IIoT), the conditions exist to provide the required redundant data; however, as the number of IIoT devices within a factory increases, the generated data volume becomes very large. In this paper, we describe a fog-based approach applied in a factory to deal with this increasing complexity. We compare the proposed method with a traditional cloud-based solution. According to the results, the proposed method yields a 60% reduction in lost time during the recovery reconfiguration step of the system. © 2020 IEEE.
Future Generation Computer Systems (0167739X) 106pp. 518-533
Transport control in the Named Data Networking (NDN) architecture is a challenging task. The lack of end-to-end communications in this architecture makes traditional, timeout-driven transport control schemes inefficient and wasteful. Hop-by-hop transport control is an alternative solution to this problem which, thanks to NDN's stateful forwarding plane, can be applied more easily than in IP networks. Most existing solutions in this direction assume known link bandwidths and Data packet sizes or require a loop-free multipath forwarding strategy to work well; however, these assumptions do not always hold, and no loop-free multipath forwarding strategy exists among the current forwarding strategies for NDN. In this paper, a Responsibility-based Transport Control (RTC) protocol for NDN is proposed. This protocol does not make strong assumptions about the network and avoids looping paths by applying a window-based rate control mechanism and a capacity-aware multipath forwarding strategy on each face. In RTC, routers maintain a congestion window on each face and decide whether to accept or refuse responsibility for forwarding a newly received Interest packet by exchanging three new control packets. These control packets provide reliable information for managing the congestion windows and for capacity-aware traffic splitting in routers. They also enable diverse deployment scenarios for NDN, such as IP-overlay and wireless links. RTC is implemented in ndnSIM, and its capability in managing congestion, achieving high throughput, and providing flow fairness is demonstrated through extensive simulations. © 2020 Elsevier B.V.
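The per-face window logic can be approximated by an additive-increase/multiplicative-decrease window; this is a hedged stand-in for RTC's mechanism, in which a refusal (one of the new control packets) plays the role of a congestion signal.

```python
class FaceWindow:
    """Per-face congestion window sketch with AIMD dynamics."""
    def __init__(self, init=2.0, min_w=1.0):
        self.w = init
        self.min_w = min_w
        self.in_flight = 0

    def can_accept(self):
        """Accept responsibility for a new Interest only if there is room."""
        return self.in_flight < int(self.w)

    def on_send(self):
        self.in_flight += 1

    def on_ack(self):
        self.in_flight -= 1
        self.w += 1.0 / self.w                    # additive increase

    def on_refusal(self):
        self.in_flight -= 1
        self.w = max(self.min_w, self.w / 2.0)    # multiplicative decrease
```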
Multimedia Tools and Applications (13807501) 79(43-44)pp. 32999-33021
Distributed Video Coding (DVC), with its low computational complexity at the encoder side, has high potential for use in Wireless Multimedia Sensor Networks (WMSNs). However, the different architecture of this coding scheme and the resource constraints of WMSNs require the design of new, efficient protocols for transmitting DVC over WMSNs. Within these protocols, error control mechanisms are especially important for reliable multimedia communication over WMSNs: they provide higher video quality at receiver nodes while saving the energy of sender nodes through the reliable transmission of packets. Given the importance of this issue, in this paper we propose an adaptive, cross-layer error control scheme to protect video frames in the transmission of DVC over WMSNs, which serves QoS while considering energy consumption and frame delay constraints. To devise this scheme, we used the detailed results of our previous works on the error resiliency of DVC and the comparative performance analysis of error control methods for this codec. The proposed scheme has been analyzed and compared to standard, single-layer, and multi-layer error control schemes against the most important criteria in video communication over WSNs, such as energy consumption, delay, and PSNR. Simulation results show that this scheme preserves video quality under different channel conditions while consuming the least possible amount of energy, subject to the maximum allowable delay at the receiver. © 2020, Springer Science+Business Media, LLC, part of Springer Nature.
IEEE Access (21693536) 8pp. 206942-206957
The safety and security of Industrial Control Systems (ICS), applied in many critical infrastructures, are essential. In these systems, hazards can be due either to system failures or to cyber-attacks. Accurate hazard detection and reducing the reconfiguration time after a hazard are among the most important objectives in these systems. One procedure that can reduce the reconfiguration time is determining the cause of a hazard and, based on that determination, adopting the best commands during reconfiguration. However, it is difficult to differentiate between hazard types because their effects on the system can be similar. With the advent of the IoT in ICS, known as IIoT, it has become possible to differentiate hazards by adopting data from different IIoT sensors in the environment. In this article, we propose a risk management approach that identifies hazards based on the physical nature of these systems with support from the IIoT. The identified hazards fall into four categories: stealthy attack, random attack, transient failure, and permanent failure. The reconfiguration process is then run based on the proposed differentiation, which provides better performance and reconfiguration time. In the experimental section, a fluid storage system is simulated, showing 97% correct differentiation of hazards and a 60% reduction in the time lost in system recovery reconfiguration. © 2013 IEEE.
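The four hazard categories suggest a simple decision rule as an illustration: disagreement between redundant IIoT sensors and the controller's own view points toward an attack rather than a failure, and persistence separates permanent from transient behavior. This is a toy rule under those two assumptions, not the paper's detector.

```python
def classify_hazard(sensors_disagree, persistent):
    """Toy two-feature rule over the paper's four hazard categories.
    sensors_disagree: redundant IIoT sensors contradict the controller's
    reported state (assumed attack indicator).
    persistent: the anomaly persists over time (assumed to separate
    stealthy from random attacks and permanent from transient failures)."""
    if sensors_disagree:
        return 'stealthy attack' if persistent else 'random attack'
    return 'permanent failure' if persistent else 'transient failure'
```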
IET Computers and Digital Techniques (1751861X) 14(3)pp. 97-106
Spin-transfer torque random access memory (STT-RAM) has emerged as an eminent choice for larger on-chip caches due to its high density, low static power consumption, and scalability. However, this technology suffers from long latency and high energy consumption during write operations. Hybrid caches alleviate these problems by incorporating a write-friendly memory technology, such as static random access memory, along with STT-RAM technology. The proper allocation of data blocks has a significant effect on both performance and energy consumption in the hybrid cache. In this study, the allocation and migration problem of data blocks in the hybrid cache is examined and then modelled using integer linear programming (ILP) formulations. The authors propose an ILP model with three different objective functions: minimising access latency, minimising energy, and minimising the energy-delay product in the hybrid cache. Evaluations confirm that the proposed ILP model obtains better results in terms of energy consumption and performance compared to the existing hybrid cache architecture. © The Institution of Engineering and Technology 2020.
Journal Of Information Systems And Telecommunication (23221437) 7(1)pp. 12-22
IEEE 802.11e is standardized to enhance the Quality of Service of real-time multimedia applications. This standard introduces four access categories for different types of applications. Each access category has four adjustable parameters: the Arbitrary Inter-Frame Space Number, the minimum contention window size, the maximum contention window size, and a Transmission Opportunity limit. A Transmission Opportunity limit is the time interval in which a wireless station can transmit a number of frames consecutively without releasing the channel or any further contention with other wireless stations. Transmission Opportunity improves network throughput as well as service differentiation, and its proper adjustment can lead to better bandwidth utilization and Quality of Service provisioning. This paper studies the dynamic adjustment of the Transmission Opportunity in IEEE 802.11e using a game-theory-based approach called Game Theory Based Dynamic Transmission Opportunity. In the proposed method, each wireless node chooses its appropriate Transmission Opportunity according to its queue length and media access delay. Simulation results indicate that the proposed approach improves channel utilization while preserving efficiency in WLANs and minimizing selfish behavior of stations in a distributed environment. © 2019 Iranian Academic Center for Education, Culture and Research.
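A minimal sketch of choosing a TXOP limit from the local queue length, one ingredient of the proposed best response (the other, media access delay, is omitted); the clamp values are illustrative, not the standard's exact per-category defaults.

```python
def choose_txop(queue_len, frame_time_us, txop_min_us=0, txop_max_us=3008):
    """Pick a TXOP limit (microseconds) proportional to the station's
    backlog, clamped to an allowed range. queue_len is the number of
    queued frames; frame_time_us the assumed airtime of one frame."""
    wanted = queue_len * frame_time_us
    return max(txop_min_us, min(txop_max_us, wanted))
```

In the game-theoretic setting, the clamp is what keeps a station's best response from degenerating into grabbing the channel indefinitely.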
Visual Computer (14322315) 35(10)pp. 1373-1391
Image rotation and scale change can significantly degrade the efficiency of local descriptors in blurred image matching. Conventional local image descriptors often employ only the rectangular gradient information of the detected region around each interest point. Due to unwanted errors in the estimated scale and dominant orientation, the performance of these local descriptors is severely degraded when applied to blurred images. To solve this problem, we propose a novel descriptor called the radial and angular gradient intensity histogram (RAGIH), which jointly utilizes gradient and intensity features. In this local descriptor, feature vectors are extracted from two concentric circular regions around each key point, and using angular and radial gradients in a specific local coordinate system reduces the estimation errors. Extensive experiments on the challenging Oxford dataset demonstrate the favorable performance of our descriptor compared to state-of-the-art approaches. © 2018, Springer-Verlag GmbH Germany, part of Springer Nature.
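The coordinate change behind radial and angular gradients is a projection of the Cartesian gradient onto the radial and tangential directions around the key point; a small illustrative helper (not the full RAGIH pipeline):

```python
import math

def radial_angular_gradient(gx, gy, x, y, cx, cy):
    """Project a Cartesian gradient (gx, gy) at pixel (x, y) onto the
    radial and tangential (angular) directions of the local polar frame
    centred at the key point (cx, cy)."""
    theta = math.atan2(y - cy, x - cx)
    g_rad = gx * math.cos(theta) + gy * math.sin(theta)
    g_ang = -gx * math.sin(theta) + gy * math.cos(theta)
    return g_rad, g_ang
```

Because the frame rotates with the pixel's position around the key point, these components are insensitive to a global rotation of the patch, which is what reduces the dominant-orientation estimation error.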
The RPL protocol was proposed for routing in Internet of Things (IoT) networks. This protocol may come under attack. One of the attacks on the RPL protocol is the sinkhole attack, in which an attacker tries to attract nearby nodes and, as a result, causes many nodes to pass their traffic through the attacker node. In previous methods for detecting a sinkhole attack in the RPL protocol, the accuracy of the detection parameter has been important. In the present study, a local detection method called DEEM is provided that improves the energy-consumption overhead associated with detection while obtaining proper detection accuracy. DEEM has two phases in each node, called the Information Gathering and Detection phases. We implemented DEEM on Contiki OS and evaluated it using the Cooja simulator. Our assessment shows that, in the simulated scenarios, DEEM has a low overhead in terms of energy consumption, a high true positive rate, and a good detection speed, and is a scalable method. The cost of DEEM's overhead is small enough for it to be deployed in resource-constrained nodes. © 2019 IEEE.
Journal of Supercomputing (15730484) 75(10)pp. 6831-6854
Spin-transfer torque random access memory (STT-RAM) is a suitable alternative to DRAM in large last-level caches (L3Cs) on account of its low leakage, absence of refresh energy and good scalability. However, long latency and high energy consumption for write operations are disadvantages of this technology. Proper utilization of row buffer locality can improve energy efficiency and mitigate the negative effects of write operations in STT-RAM L3Cs. In this paper, we present an integer linear programming (ILP) formulation that minimizes energy consumption in an STT-RAM-based L3C by exploiting row buffer locality and the prominent features of STT-RAM. Since ILP solvers may not reach the optimal result in a reasonable time, we propose a sub-optimal algorithm that obtains results in polynomial time. Evaluations demonstrate that, on average, our ILP model reduces dynamic energy by about 19% and improves the row buffer hit rate by about 23% compared to the state of the art. © 2019, Springer Science+Business Media, LLC, part of Springer Nature.
The architecture of current networks is static and non-programmable. Software-Defined Networking (SDN) makes programmability and innovation possible within the network. In SDN, the data plane and control plane are separated, so network operators can manage network behavior using software. Standardized interfaces such as OpenFlow have been developed to enable interaction between the controller and switches. The OpenFlow switch manages network traffic at the data plane, and the packet parser is one of its main parts. So far, the FPGA implementations presented for packet parsers in SDN lack the required flexibility and programmability: they support only one parse graph, which limits the creation of new network protocols for new versions of the OpenFlow switch. To address this problem, the present study introduces the automatic generation of a programmable packet parser for the OpenFlow switch. In addition to creating high flexibility in the switch, our proposed implementation is programmable and supports different parse graphs at run time. Simulation and implementation verify the appropriate performance of this programmable parser in the OpenFlow switch. Our implementation improves the performance of the switch and enhances the flexibility of the SDN data plane. The use of the programmable packet parser increases switch speed, reduces wait time and service time, and improves OpenFlow switch performance. © 2019 IEEE.
Visual Computer (14322315) 34(11)pp. 1579-1595
One of the key challenges of current image matching techniques is how to build a robust local descriptor that is invariant to large variations in scale and rotation. To address this issue, in this work a polar gradient local oriented histogram pattern (PGP) is localized on normalized cropped regions around detected interest points. Then, a new image descriptor named the two-dimensional intensity gradient histogram (2DIGH) is introduced using a joint histogram scheme. 2DIGH builds the extracted feature vector by intersecting gradient and intensity information on subregions of the PGP. The distance measured with the K-nearest-neighbor scheme represents feature-vector similarity for image matching. Experimental results on the Graffiti, Boat, Bark and ZuBud datasets indicate that the performance of the introduced 2DIGH is at least 41% better than other widely applied descriptors. © 2017, Springer-Verlag GmbH Germany.
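To illustrate the general joint-histogram idea (not the exact 2DIGH/PGP construction, which uses polar subregions and oriented gradients), a minimal numpy sketch of a joint intensity-gradient histogram descriptor might look like this; the bin count and normalization are illustrative assumptions:

```python
import numpy as np

def joint_intensity_gradient_histogram(patch, n_bins=8):
    """Joint 2-D histogram of pixel intensity and gradient magnitude
    over an image patch, L2-normalized (illustrative sketch only)."""
    patch = patch.astype(float)
    gy, gx = np.gradient(patch)            # central-difference gradients
    mag = np.hypot(gx, gy)                 # gradient magnitude per pixel
    hist, _, _ = np.histogram2d(
        patch.ravel(), mag.ravel(),
        bins=n_bins,
        range=[[0.0, 255.0], [0.0, mag.max() + 1e-9]],
    )
    vec = hist.ravel()                     # n_bins * n_bins descriptor
    return vec / (np.linalg.norm(vec) + 1e-9)

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(32, 32))
desc = joint_intensity_gradient_histogram(patch)
```

Matching would then compare such descriptors with a nearest-neighbor rule, as the abstract describes.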
Signal, Image and Video Processing (18631711) 12(6)pp. 1115-1123
In this paper, a new method for detecting abnormal events in public surveillance systems is proposed. In the first step of the proposed method, candidate regions are extracted and redundant information is eliminated. To describe the appearance and motion of the extracted regions, HOG-LBP and HOF are calculated for each region. Finally, abnormal events are detected using two distinct one-class SVM models. To achieve more accurate anomaly localization, large regions are divided into non-overlapping cells and the abnormality of each cell is examined separately. Experimental results show that the proposed method outperforms existing methods on the UCSD anomaly detection video datasets. © 2018, Springer-Verlag London Ltd., part of Springer Nature.
Multimedia Tools and Applications (13807501) 77(12)pp. 14767-14782
This paper presents a new method for detecting abnormal events in surveillance systems. This method does not employ object detection or tracking and thus does not fail in crowded scenes. In the first step of the proposed method, the appropriate cell size is determined by calculating the prevalent size of the connected components. Then, redundant information is eliminated and the important regions are extracted from the training data; this preprocessing significantly reduces the volume of training data in the learning phase. Next, using HOG descriptors and a multivariate Gaussian model, appearance anomalies, i.e., abnormalities in terms of physical characteristics, are detected. Besides that, a simple algorithm is provided to detect abnormal motion using the average optical flow of the cells. Experimental results on the UCSD-PED2 dataset show that the proposed method can reliably detect abnormal events in video sequences, outperforming the current state-of-the-art methods. © 2017, Springer Science+Business Media, LLC.
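The appearance-anomaly step described above — fitting a Gaussian model to descriptors of normal cells and thresholding a distance — can be sketched roughly as follows. The feature vectors are synthetic stand-ins for HOG descriptors, and the percentile threshold and dimensions are illustrative assumptions, not the paper's settings:

```python
import numpy as np

# Synthetic stand-ins for HOG descriptors of "normal" training cells.
rng = np.random.default_rng(42)
normal_feats = rng.normal(loc=0.5, scale=0.1, size=(500, 16))

# Fit the multivariate Gaussian model (regularize the covariance).
mu = normal_feats.mean(axis=0)
cov = np.cov(normal_feats, rowvar=False) + 1e-6 * np.eye(16)
cov_inv = np.linalg.inv(cov)

def mahalanobis_sq(x):
    """Squared Mahalanobis distance of one feature vector to the model."""
    d = x - mu
    return float(d @ cov_inv @ d)

# Threshold: 99th percentile of the training scores.
threshold = np.quantile([mahalanobis_sq(f) for f in normal_feats], 0.99)

normal_cell = rng.normal(0.5, 0.1, size=16)     # resembles training data
abnormal_cell = rng.normal(2.0, 0.1, size=16)   # far from the normal model
```

A cell whose score exceeds `threshold` would be flagged as an appearance anomaly.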
Journal of Supercomputing (15730484) 74(3)pp. 1299-1320
The software-defined network (SDN) is an evolution of current networks that aims to remove their restrictions by separating the data plane from the control plane. In the SDN architecture, the most critical device is the OpenFlow switch, where packets are processed and inspected. To date, OpenFlow switch versions 1.0 and 1.1 have been implemented on hardware platforms and support a limited set of the OpenFlow specifications. The present article designs and implements the architecture of the OpenFlow v1.3 switch on the Virtex®-6 FPGA ML605 board, because the FPGA platform offers high flexibility, processing speed and reprogrammability. Although little research has investigated the performance parameters of the OpenFlow switch, in the present study the OpenFlow system (switch and controller) is implemented on the FPGA in VHDL, and the performance parameters of the OpenFlow switch and its simulated behavior are investigated via the ISE design suite. In addition to its high flexibility, this architecture has a hardware cost comparable to other implementations. The main advantage of the proposed design is that it increases the speed of packet pipeline processing in the switch's flow tables. Besides, it supports the features of OpenFlow v1.3. Its parser supports 40 packet headers and allows the switch to be extended to subsequent OpenFlow versions as easily as possible. © 2017, Springer Science+Business Media, LLC.
Nowadays, network managers look for ways to change the design and management of networks so that decisions can be made in the control plane. Future switches should support the new features and flexibility required for parsing and processing packets. One of the critical components of a switch is the packet parser, which processes packet headers so that decisions can be made on incoming packets. Here the focus is on the data plane, and particularly on the packet parser in OpenFlow switches, which should have the flexibility and programmability to support new requirements and multiple OpenFlow versions. An architecture is designed that, unlike static network equipment, has flexibility and programmability in the data plane, especially in SDN networks, and supports the parsing and processing of specific packets. To describe this architecture, the high-level P4 language is used to implement it on reconfigurable hardware (i.e., FPGA). After automatically generating the protocol-independent packet parser architecture on the Virtex-7, it is compiled to firmware by Xilinx SDNet, and ultimately an FPGA platform is implemented. It consumes fewer resources and is more efficient in terms of throughput and processing speed in comparison with other architectures. © 2018 IEEE.
Multimedia Tools and Applications (13807501) 77(15)pp. 19547-19568
Distributed Video Coding (DVC) is a new approach to video coding which, due to its low computational complexity at the encoder side, has great potential for use in Wireless Multimedia Sensor Networks (WMSNs). However, the different architecture of this codec affects the efficiency of transmission protocols, and for efficient transmission of DVC over WMSNs it is necessary to evaluate the performance of the transmission protocols in the presence of DVC characteristics. Among these protocols, error control methods are important mechanisms that provide quality of service and robust multimedia communications. For this reason, we performed a comparative performance analysis of all error control schemes, consisting of Automatic Repeat reQuest (ARQ), Forward Error Correction (FEC), Erasure Coding (EC), hybrid link-layer ARQ/FEC, and multi-layer hybrid error control schemes, for DVC in WMSNs. These analyses use the most important metrics in multimedia communication over WSNs, such as objective and subjective video quality criteria, delay, energy consumption, and some DVC-specific metrics. The results show the distinct behavior of DVC in the presence of channel error and can be used to propose an effective and efficient error control scheme for DVC over WMSNs. © 2017, Springer Science+Business Media, LLC, part of Springer Nature.
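The ARQ-versus-FEC trade-off analyzed above can be illustrated with back-of-the-envelope formulas for an idealized i.i.d. packet-erasure channel — a simplification of the simulated channels, assuming an ideal erasure (MDS) code and ideal ARQ:

```python
from math import comb

def fec_block_success(n, k, p_loss):
    """Probability that at least k of n coded packets survive an
    i.i.d. erasure channel (ideal erasure-code assumption)."""
    return sum(comb(n, i) * p_loss**i * (1 - p_loss)**(n - i)
               for i in range(n - k + 1))

def arq_expected_transmissions(p_loss):
    """Expected transmissions per packet with ideal ARQ (geometric)."""
    return 1.0 / (1.0 - p_loss)

p = 0.1
success = fec_block_success(n=12, k=10, p_loss=p)  # 2 repair packets per block
retx = arq_expected_transmissions(p)               # extra delay/energy per packet
```

FEC pays a fixed redundancy overhead up front, whereas ARQ pays in retransmission delay, which is the behavior the comparative analysis quantifies for DVC traffic.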
Biologically Inspired Cognitive Architectures (2212683X) 20pp. 21-30
In this paper, a new model of a nonlinear dynamical system based on adaptive frequency oscillators for learning rhythmic signals from demonstration is implemented. This model uses coupled Hopf oscillators to encode and learn any periodic input signal, and the learning process is embedded entirely in the dynamics of the adaptive oscillators. One of the issues in learning in such systems is the fixed number of oscillators in the feedback loop; in other words, the number of adaptive frequency oscillators is a design factor. In this contribution, it is shown that using a sufficient number of oscillators helps the learning process. We address this challenge in order to learn rhythmic movements with greater accuracy and lower error, and to avoid missing the fundamental frequency. To this end, a method for generating drumming patterns is proposed which is able to generate rhythmic and periodic trajectories for a NAO humanoid robot. A programmable central pattern generator, inspired by animal neural systems, is used and extended to learn patterns with greater accuracy for NAO humanoid robots. Successful learning-from-demonstration experiments are carried out in simulation and on a real NAO robot. © 2017 Elsevier B.V. All rights reserved.
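The adaptive-frequency-oscillator mechanism can be sketched with the textbook adaptive Hopf rule (a single oscillator with Righetti-style frequency adaptation, not the paper's full coupled network); all parameter values below are illustrative assumptions:

```python
import numpy as np

def learn_frequency(f_teach=1.5, f_init=1.3, eps=3.0, gamma=8.0,
                    mu=1.0, dt=1e-3, T=100.0):
    """Single adaptive Hopf oscillator: the teaching signal F perturbs
    the limit cycle, and omega drifts until the oscillator phase-locks
    to F's frequency (Euler integration sketch)."""
    x, y = 1.0, 0.0
    omega = 2 * np.pi * f_init
    for i in range(int(T / dt)):
        F = np.sin(2 * np.pi * f_teach * i * dt)   # periodic teaching signal
        r = max(np.hypot(x, y), 1e-9)
        dx = gamma * (mu - r * r) * x - omega * y + eps * F
        dy = gamma * (mu - r * r) * y + omega * x
        domega = -eps * F * y / r                  # frequency adaptation rule
        x, y = x + dx * dt, y + dy * dt
        omega += domega * dt
    return omega / (2 * np.pi)

f_hat = learn_frequency()   # settles near the 1.5 Hz teaching frequency
```

A pool of such oscillators, each locking onto a different component of the input, is what lets the system encode an arbitrary periodic signal — hence the paper's point about using enough oscillators.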
Biologically Inspired Cognitive Architectures (2212683X) 19pp. 39-48
This paper presents a two-layer system for imitation learning in humanoid robots. The first layer records complicated rhythmic movements of the trainer using a motion capture device; it solves an inverse kinematics problem with the help of an adaptive neuro-fuzzy inference system and thereby obtains angle records for all joints involved in the desired motion. This trajectory is given as input to the system's second layer, which extracts optimal parameters of the trajectories obtained from the first layer using a network of oscillator neurons and the Particle Swarm Optimization algorithm. The system can capture any complex rhythmic trajectory via the first layer, learn it in the second layer, and converge towards the demonstrated movements. Moreover, this two-layer system provides various features of a learner model, for instance resistance against perturbations and modulation of trajectory amplitude and frequency. Simulations of the learning system are performed in the robot simulator WEBOTS linked with MATLAB. Practical implementation on a NAO robot demonstrates that the robot learns the desired motion with high accuracy. These results show that the proposed system produces a high convergence rate and low test error.
ACM Journal on Emerging Technologies in Computing Systems (15504832) 12(4)
Wireless Network-on-Chip (WNoC) architectures have emerged as a promising interconnection infrastructure to address the performance limitations of traditional wire-based multihop NoCs. Nevertheless, WNoC systems encounter high failure rates due to problems pertaining to the integration and manufacturing of wireless interconnection in nano-domain technology. As a result, permanent failures may lead to the formation of faulty regions of any shape in the interconnection network, which can break down the whole system. This issue is not investigated in previous studies on WNoC architectures. Our solution advocates the adoption of communication structures with both node- and link-disjoint paths. On the other hand, the imposed costs of WNoC design must be reasonable. Hence, a novel approach is proposed to design an optimized fault-tolerant hybrid hierarchical WNoC architecture that enhances performance while minimizing system costs. The experimental results indicate that the robustness of the proposed design is significantly enhanced in comparison with fault-tolerant wire-based counterparts in the presence of various faulty regions under both synthetic and application-specific traffic patterns. ©2016 ACM.
Neural Networks (08936080) 83pp. 94-108
In this paper a new design of neural networks is introduced which is able to generate oscillatory patterns. The fundamental building block of the network is the O-neuron, which can generate an oscillation in its transfer function. Since natural policy gradient learning is used to train a central pattern generator paradigm, the design is called the Natural Learner CPG Neural Network (NLCPGNN). O-neurons are connected and coupled to each other to shape a network, and their unknown parameters are found by a natural policy gradient learning algorithm. The main contribution of this paper is the design of this learning algorithm, which is able to search simultaneously for the weights and the topology of the network. The system can capture any complex rhythmic motion via its first layer, learn rhythmic trajectories in its second layer, and converge towards these movements. Moreover, this two-layer system provides various features of a learner model, for instance resistance against perturbations and modulation of trajectory amplitude and frequency. Simulation of the learning system in the robot simulator WEBOTS linked with MATLAB has been carried out. Implementation on a real NAO robot demonstrates that the robot learns the desired motion with high accuracy. These results show that the proposed system produces a high convergence rate and low test errors. © 2016 Elsevier Ltd
Computers and Electrical Engineering (00457906) 46pp. 303-313
Content-based image retrieval systems are designed to retrieve images based on the high-level desires and needs of users. However, due to the use of low-level features, image retrieval systems are faced with the so-called semantic gap problem in describing high-level concepts. In order to address this critical problem, a new concept-based model is proposed in this paper. The proposed model retrieves images based on two conceptual layers. In the first layer, the object layer, the objects are detected using the discriminative part-based approach. The second layer, on the other hand, is designed to recognize visual composite, a higher level concept to specify the related co-occurring objects. In the proposed model, this concept is recognized by a new template structure including the appearance filters, constraints, and a set of parameters trained by latent SVM. Experiments are carried out on the well-known Pascal VOC dataset. Results show that the proposed model significantly outperforms the existing content-based approaches. © 2015 Elsevier Ltd.
Journal of Supercomputing (15730484) 71(8)pp. 3116-3148
Wireless network on chip (WNoC) is a promising new solution for overcoming the constraints of traditional electrical interconnections. However, the occurrence of faults has become more prevalent because of the continuous shrinkage of CMOS technology and the integration of wireless technology in such complex circuits. This can lead to the formation of faulty regions on chip, where the probability of entire-system failure increases significantly. This issue is not addressed in previous works on WNoC systems. In this article, a fault-tolerant hierarchical hybrid WNoC architecture is proposed. First, an innovative strategy is proposed for solving the problem of fault-tolerant wireless router placement in standard mesh networks, inspired by node-disjoint communication structures. Next, efficient fault-tolerant communication protocols are presented for applying this structure. The experimental results demonstrate the robustness of the proposed architecture in the presence of various fault regions under different traffic patterns. © Springer Science+Business Media New York 2015.
Wireless Personal Communications (1572834X) 83(2)pp. 1101-1130
In unsupervised contention-based networks such as the DCF mode of IEEE 802.11, wireless nodes compete to access the shared medium, which is called random or multiple access. The most important problem in such networks is the manner in which a node is selected to access the channel. Each node adjusts its channel access probability by tuning its contention window (CW) size. With an excessive number of nodes, adjusting the CW size irrespective of the number of competing nodes reduces network performance due to severe collisions. Game theory is a powerful tool for the modeling, analysis and optimization of shared resources in competitive environments. In this study, the problem of channel access control is investigated in a game theory framework. Specifically, based on analytical models of DCF, a game theoretic approach called GCW (game theoretic CW) is proposed to tune the CW dynamically. Using GCW, each node can choose its CW autonomously, such that the overall network performance is improved. © 2015, Springer Science+Business Media New York.
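A toy Monte Carlo experiment illustrates why tuning the contention window to the number of competing nodes helps — this is a simplified slotted-contention model, not the paper's GCW game or the full DCF analysis, and the node counts and CW values are illustrative:

```python
import random

def slot_success_rate(n_nodes, cw, trials=5000, seed=1):
    """Fraction of contention rounds resolved cleanly: every node draws
    a uniform backoff slot in [0, cw); the round succeeds only if the
    minimum backoff is held by exactly one node (no collision)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        slots = [rng.randrange(cw) for _ in range(n_nodes)]
        if slots.count(min(slots)) == 1:
            wins += 1
    return wins / trials

n = 50
fixed = slot_success_rate(n, cw=32)       # default CWmin, ignores node count
scaled = slot_success_rate(n, cw=8 * n)   # CW grown with the competing nodes
```

With many nodes, the fixed window produces frequent backoff collisions, while a window scaled to the contention level resolves far more rounds cleanly — the effect a game-theoretic CW tuner exploits.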
Shahbazi, H., Jamshidi, K., Monadjemi, A., Manoochehri, H.E. Robotica (02635747) 33(7)pp. 1551-1567
In this paper, a new design of neural networks is introduced which is able to generate oscillatory patterns at its output. The oscillatory neural network is used in a biped robot to enable it to learn to walk. The fundamental building block of the proposed neural network is the O-neuron, which can generate oscillations in its transfer function. O-neurons are connected and coupled with each other to shape a network, and their unknown parameters are found by a particle swarm optimization method. The main contribution of this paper is the learning algorithm, which combines natural policy gradient with particle swarm optimization. The oscillatory neural network has six outputs that determine set points for the proportional-integral-derivative controllers of a 6-DOF humanoid robot. Our experiment on the simulated humanoid robot demonstrates smooth and flexible walking. Copyright © Cambridge University Press 2014.
Computers and Electrical Engineering (00457906) 40(7)pp. 2062-2071
The 'semantic gap' is the main challenge in content-based image retrieval. To overcome this challenge, we propose a semantic-based model to retrieve images efficiently while considering the concepts of interest to the user. In this model, an interactive image segmentation algorithm is applied to the query image to extract the user-interested regions, and a neural network classifier is used to recognize image objects from these regions. In order to have a general-purpose system, no a priori assumptions should be made regarding the nature of the images when extracting features, so a large number of features must be extracted covering all aspects of the image. The resulting high-dimensional feature space not only increases complexity and memory requirements but may also reduce efficiency and accuracy. Hence, the ant colony optimization algorithm is employed to eliminate irrelevant and redundant features. To find the images most similar to the query image, the similarity between images is measured based on their semantic objects, which are defined according to a predefined ontology. © 2014 Elsevier Ltd. All rights reserved.
Knowledge-Based Systems (09507051) 57pp. 8-27
A hierarchical paradigm for bipedal walking which consists of four layers of learning is introduced in this paper. In the central pattern generator layer, Learner-CPGs made of coupled oscillatory neurons are trained to generate basic walking trajectories. The dynamical model of each neuron in a Learner-CPG is discussed, and we explain how these new neurons are connected with each other to build a new type of neural network called the Learner-CPG neural network. The training method of these neural networks is the most important contribution of this paper. The proposed two-stage learning algorithm first learns the basic frequency of the input trajectory to find a suitable initial point for the second stage; the second stage then designs a mathematical path to the best unknown parameters of the neural network. These networks, trained with basic trajectories, can generate new walking patterns based on a policy. A walking policy is parameterized by policy parameters controlling the central pattern generator variables, and policy learning takes place in a middle layer called the MLR layer. High-level commands originate from a third layer called the HLDU layer, where the focus is on training curvilinear walking in the NAO humanoid robot. This policy should optimize the total payoff of a walking period, defined as a combination of smoothness, precision and speed. © 2013 Elsevier B.V. All rights reserved.
Manoochehri, H.E., Jamshidi, K., Monadjemi, A., Shahbazi, H. International Journal of Humanoid Robotics (17936942) 11(3)
In this paper, a method to find curvilinear path features is proposed. These features are defined as the centers and radii of circles that best fit the curved parts of a curvilinear path. In our previous research, we proposed a hierarchical layered paradigm for a humanoid robot to learn how to walk along a curvilinear path. That model consists of four layers, each with a specific purpose and responsible for providing feedback to the layer below. In this study, we focus on the first layer, a high-level decision unit responsible for providing feedback and parameters to the lower layer using the robot's sensory inputs. The ultimate goal is for the robot to learn to walk along a curvilinear path, and the first step toward this goal is finding the robot's position in the environment. In this work, Monte Carlo localization is used for robot localization, an artificial potential field is then used to generate a path between the robot and a goal, and finally an algorithm is proposed that searches for the circles that best fit the curved parts of the path. Finding these features helps the learning process of the lower layers in the learning model. We used the robot's camera as the only sensor to identify landmarks and obstacles for robot localization, path planning, and finding curvilinear path features. © World Scientific Publishing Company.
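The circle-fitting step can be sketched with a standard algebraic (Kasa-style) least-squares fit, one common way to fit a circle to sampled path points; the paper's own search algorithm may differ:

```python
import numpy as np

def fit_circle(points):
    """Algebraic least-squares circle fit: solve
    x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F), then recover the
    center (-D/2, -E/2) and radius sqrt(cx^2 + cy^2 - F)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return (cx, cy), r

# Points sampled from an arc of a known circle (center (3, -2), radius 5).
theta = np.linspace(0.2, 1.8, 40)
arc = np.column_stack([3 + 5 * np.cos(theta), -2 + 5 * np.sin(theta)])
center, radius = fit_circle(arc)
```

Applied to the curved segments of a planned path, the recovered centers and radii are exactly the kind of features the abstract describes.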
Science China Information Sciences (1674733X) 57(6)pp. 1-11
One of the main problems in VANET (vehicular ad-hoc network) routing algorithms is how to establish stable routes. The link duration in these networks is often very short because of frequent changes in the network topology, and short link duration reduces network efficiency. The different speeds of vehicles and the different directions vehicles choose at junctions are the two causes of link breakage and reduced link duration. Several routing protocols have been proposed for VANETs to improve link duration, but none of them avoids the link breakages caused by the second reason. In this paper, a new method for routing algorithms is proposed based on the vehicles' trip history. Each vehicle has a profile containing its movement patterns extracted from its trip history. The direction the vehicle may choose at the next junction is predicted using this profile and is sent to other vehicles. Each vehicle then selects a next-hop node whose predicted future direction is the same as its own. Our case study indicates that applying the proposed method to the ROMSGP (receive on most stable group-path) routing protocol reduces link breakages and increases link duration. © 2014 Science China Press and Springer-Verlag Berlin Heidelberg.
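The profile-based prediction idea can be sketched as simple per-junction frequency counting over the trip history; the junction names and trips below are hypothetical, and the paper's profile structure may be richer:

```python
from collections import Counter, defaultdict

# Each trip is a sequence of (junction, turn) pairs taken by one vehicle.
trips = [
    [("J1", "left"), ("J2", "straight"), ("J3", "right")],
    [("J1", "left"), ("J2", "right")],
    [("J1", "straight"), ("J2", "straight"), ("J3", "right")],
]

# Profile: per junction, how often each turn direction was chosen.
profile = defaultdict(Counter)
for trip in trips:
    for junction, turn in trip:
        profile[junction][turn] += 1

def predict_next_turn(junction):
    """Most frequently chosen direction at this junction, or None."""
    if junction not in profile:
        return None
    return profile[junction].most_common(1)[0][0]
```

A vehicle would broadcast `predict_next_turn(...)` for its upcoming junction, letting neighbors prefer next hops whose predicted direction matches their own.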
International Journal Of Communication Networks And Information Security (20760930) 5(2)pp. 93-103
The 802.11 family is considered the most widely applied set of standards for Wireless Local Area Networks (WLANs), in which nodes access the wireless medium using random access techniques. In such networks, each node adjusts its contention window to the minimum size irrespective of the number of competing nodes, so with a large number of nodes the network performance is reduced because of the rising collision probability. In this paper, a game-theory-based method is proposed to adjust each user's contention window to improve network throughput, delay, and packet drop ratio under heavy traffic loads. The system performance, evaluated by simulations, shows the superiority of the proposed method over 802.11 DCF (Distributed Coordination Function).
IEEE Communications Surveys and Tutorials (1553877X) 15(3)pp. 1062-1087
As a mathematical tool, game theory has been used for the analysis of multi-agent systems. Wireless networks are typical examples of such systems, in which communicating nodes access the channel through the CSMA method, influencing the access of neighboring nodes. Different games have been examined to model such an environment and investigate its challenging issues. This survey reviews and classifies the CSMA games proposed for wireless MAC, recounts their advantages and shortcomings, and suggests some open directions for future research. © 2013 IEEE.
In unsupervised contention-based networks such as the EDCA mode of IEEE 802.11e/s, upon winning the channel each node gets a transmission opportunity (TXOP) in which it can transmit multiple frames consecutively without releasing the channel. Adjusting the TXOP can lead to better bandwidth utilization and QoS provisioning. To improve WLAN throughput, EDCA packet bursting can be used in 802.11e: once a station has gained an EDCA-TXOP, it is allowed to transmit more than one frame without re-contending for the channel. Following access to the channel, the station can send multiple frames as long as the total access time does not exceed the TXOP limit. This mechanism can reduce network overhead and increase channel utilization. However, packet bursting may cause unfairness in addition to increasing jitter, delay and loss. To the best of the authors' knowledge, although TXOP tuning has been investigated through different methods, it has not been considered within a game theory framework. In this study, based on analytical models of EDCA, a game theoretic approach called GTXOP is proposed to determine the TXOP dynamically (i.e., according to the dynamics of WLAN networks and the number of nodes in the network). Using GTXOP, each node can choose its TXOP autonomously, such that in addition to QoS improvement, the overall network performance is also improved. © 2013 Ghazvini et al.
International Journal of Advanced Robotic Systems (17298814) 10
In the present article, a method for generating curvilinear bipedal walking patterns is proposed which is able to generate rhythmic and periodic trajectories for a Nao soccer player robot. To do so, a programmable central pattern generator was used, inspired by locomotion structures in vertebrate animals. The programmable central pattern generators were extended with new equations to make a curvilinear walking pattern for Nao robots on a specified circular curve. In addition, specific equations were added to the model to control the arms and synchronize them with the movement of the feet. The model uses sensory inputs as feedback on the movement and adjusts it to cope with potential perturbations. Input sensory values consist of accelerometer values and foot pressure sensor values from the bottom of each foot. Feedback values can adapt walking to desired specifications and compensate for the effects of some types of perturbations. The proposed model has many benefits, including smooth walking patterns and modulation during walking. This model can be extended and used for the Nao soccer player in both the standard platform and 3D soccer simulation leagues of RoboCup competitions to train different types of motions. © 2013 Shahbazi et al.; licensee InTech.
Applied Intelligence (0924669X) 36(3)pp. 685-697
Predicting the next movement direction that a driver will choose at each junction of a road network can be used widely in VANET (Vehicular Ad-Hoc Network) applications. Current methods are based on GPS, but in a number of VANET applications the GPS service is obstructed by high-rise buildings, tunnels, and trees. In this paper, a GPS-free method is proposed to predict a vehicle's future movement direction. In this method, vehicle motion paths are described by the sequence of turning directions at the junctions and the distances between the junctions. Movement patterns of the vehicles are extracted through clustering of the vehicles' motion paths using a SOM (Self-Organizing Map). These patterns are then used to predict the next movement direction the driver will choose at the next junction. The obtained results indicate that our GPS-free method is comparable with GPS-based methods, while offering advantages in various urban traffic applications. © 2011 Springer-Verlag.
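A minimal 1-D SOM sketch shows the kind of clustering used here, run on synthetic, numerically encoded turn sequences; the encoding, map size, and schedules are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def train_som(data, n_units=4, epochs=200, seed=0):
    """Minimal 1-D self-organizing map: units live on a line; each
    sample pulls its best-matching unit (BMU) and the BMU's grid
    neighbours toward it, with shrinking radius and learning rate."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(n_units, data.shape[1]))
    for e in range(epochs):
        lr = 0.01 + 0.5 * (1 - e / epochs)
        radius = max(1.0 * (1 - e / epochs), 0.1)
        for x in data:
            bmu = int(np.argmin(np.linalg.norm(w - x, axis=1)))
            grid_dist = np.abs(np.arange(n_units) - bmu)
            h = np.exp(-grid_dist**2 / (2 * radius**2))  # neighbourhood
            w += lr * h[:, None] * (x - w)
    return w

def quantization_error(data, w):
    """Mean distance from each sample to its nearest unit."""
    return float(np.mean([np.linalg.norm(w - x, axis=1).min() for x in data]))

# Synthetic "motion paths": turn sequences encoded numerically
# (e.g. left = -1, straight = 0, right = +1 at four junctions).
rng = np.random.default_rng(1)
cluster_a = rng.normal([-1.0, -1.0, 0.0, 1.0], 0.1, size=(20, 4))
cluster_b = rng.normal([1.0, 0.0, 1.0, -1.0], 0.1, size=(20, 4))
data = np.vstack([cluster_a, cluster_b])
w0 = rng.normal(size=(4, 4))              # untrained reference weights
som = train_som(data)
```

After training, each unit summarizes one recurring movement pattern, and a partial path's BMU yields the pattern used to predict the next turn.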
Australasian Physical and Engineering Sciences in Medicine (18795447) 35(2)pp. 135-150
This paper presents a fully automated approach to detect the intima and media-adventitia borders in intravascular ultrasound images based on parametric active contour models. To detect the intima border, we compute a new image feature by applying a combination of short-term autocorrelations calculated for the contour pixels. These feature values are employed to define an energy function of the active contour, called the normalized cumulative short-term autocorrelation. Exploiting this energy function, the intima border is separated accurately from the blood region contaminated by heavy speckle noise. To extract the media-adventitia boundary, we define a new form of energy function based on edge, texture, and spring forces for the active contour. Using this active contour, the media-adventitia border is identified correctly even in the presence of branch openings and calcifications. Experimental results indicate the accuracy of the proposed methods. In addition, statistical analysis demonstrates high conformity between manual tracing and the results obtained by the proposed approaches. © Australasian College of Physical Scientists and Engineers in Medicine 2012.
Applied Mechanics and Materials (discontinued) (16627482) 110pp. 5161-5166
In this paper we introduce a new learning approach for curvilinear bipedal walking of the Nao humanoid robot using the policy gradient method. A walking policy is modeled by policy parameters controlling factors in programmable central pattern generators. A "programmable" central pattern generator is built from coupled nonlinear oscillators capable of shaping their state equations with training trajectories. The proposed model has many benefits, including smooth walking patterns and modulation during walking to increase or decrease speed. A suitable curvilinear walk was achieved that is very similar to ordinary human walking. This model can be extended and used for the Nao soccer player, both in the standard platform and 3D soccer simulation leagues of the RoboCup competitions, to train different types of motions. © (2012) Trans Tech Publications, Switzerland.
IEEJ Transactions on Electrical and Electronic Engineering (19314973) 7(3)pp. 329-333
Vehicular ad hoc networks will enable a variety of applications for safety, traffic efficiency, driver assistance, and infotainment in modern automobile designs. For many of these applications as well as for improving the stability of vehicular ad hoc network routing algorithms, it is necessary to know whether a steering wheel rotation has led to a change in the vehicle motion path. The problem is that some steering wheel rotations are temporary and do not lead to a change in the vehicle motion path. In this paper, a GPS-free fuzzy sensor is designed for detecting the change of vehicle motion paths. The implementation results show acceptable precision. © 2012 Institute of Electrical Engineers of Japan.
Industrial Robot (17585791) 39(2)pp. 136-145
Purpose - The purpose of this paper is to model a motor region named the mesencephalic locomotor region (MLR), which is located at the end of the brain and the first part of the spinal cord. This model will be used for a Nao soccer-playing humanoid robot. It consists of three main parts: a High Level Decision Unit (HLDU), the MLR-Learner, and the CPG layer. The authors focus on a special type of decision making, namely curvilinear walking. Design/methodology/approach - The authors' model is based on stimulation of programmable central pattern generators (PCPGs) to generate curvilinear bipedal walking patterns. PCPGs are made from adaptive Hopf oscillators. The high level decision, i.e. curvilinear bipedal walking, is formulated as a policy gradient learning problem over some free parameters of the robot's CPG controller. Findings - The paper provides a basic model for generating different types of motions in humanoid robots using only simple stimulation of a CPG layer. A suitable and fast curvilinear walk has been achieved on a Nao humanoid robot, which is similar to ordinary human walking. This model can be extended and used in other types of humanoids. Research limitations/implications - The authors' work is limited to a special type of biped locomotion. Other types of motions could be tested and evaluated with this model. Practical implications - The paper introduces a bio-inspired model of skill learning for humanoid robots. It is used for curvilinear bipedal walking patterns, a beneficial movement for soccer-playing Nao robots in RoboCup competitions. Originality/value - The paper applies a new biological motor concept, the mesencephalic locomotor region, to artificial humanoid robots. © Emerald Group Publishing Limited.
Pattern Recognition Letters (01678655) 33(5)pp. 543-553
This paper presents a novel energy function for active contour models based on the autocorrelation function, which is capable of detecting small objects against a cluttered background. In the proposed method, image features are calculated using a combination of short-term autocorrelations (STA) computed from the image pixels to represent region information. The obtained features are exploited to define an energy function for the localized region-based active contour model, called the normalized accumulated short-term autocorrelation (NASTA). Minimizing this energy function, we can accurately detect small objects in images containing cluttered and textured backgrounds. Moreover, the proposed method provides high robustness against random noise and can precisely locate small objects in noisy backgrounds that are difficult to detect with the naked eye. Experimental results indicate remarkable advantages of our approach compared to existing methods. © 2011 Elsevier B.V. All rights reserved.
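The abstract does not spell out the feature computation, but the flavour of a short-term-autocorrelation feature can be sketched as follows (the 1-D window, lag range, and normalisation here are illustrative guesses, not the published definition): correlated texture scores high, while sign-alternating noise scores low.

```python
def sta_feature(window, max_lag=3):
    """Illustrative sketch only: accumulate short-term autocorrelations of
    a 1-D pixel window over several lags, normalised by window energy."""
    x = [float(v) for v in window]
    m = sum(x) / len(x)
    x = [v - m for v in x]                    # remove the local mean
    energy = sum(v * v for v in x) or 1.0     # zero-lag normaliser
    acc = sum(x[i] * x[i + k]
              for k in range(1, max_lag + 1)
              for i in range(len(x) - k))
    return acc / energy
```

A smooth ramp (strongly correlated neighbours) yields a clearly higher score than an alternating pattern, which is the kind of contrast an energy term can exploit to separate a small object from its background.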
Wireless sensor networks are composed of sensors with low computational and energy resources. Data transmission is one of the most energy-consuming operations in such networks. In-network data aggregation is a popular technique used to reduce data transmissions. However, by aggregating multiple data items into one, their security is no longer guaranteed. While concealed data aggregation methods have recently been proposed to provide energy-efficient, end-to-end confidentiality, little work has been done to enhance them with aggregate integrity. In this work, we propose a scalable and energy-efficient bihomomorphic method to preserve end-to-end confidentiality and aggregate integrity against outsider attacks. A distributed validation scheme is used to prevent blind rejection caused by outsiders. To provide better resilience, a key refreshment method is used to prevent analysis attacks. © 2011 IEEE.
Expert Systems with Applications (09574174) 38(9)pp. 11722-11729
Texture image segmentation is an important issue in computer vision applications. Active contour models are powerful tools that are able to detect and segment textured objects against textured backgrounds. However, the slow convergence of the contour on texture images has limited their utility. This paper presents a fast and efficient texture energy function for parametric active contour models. In the proposed method, we apply a novel version of the Walsh-Hadamard transform, called the Directional Walsh-Hadamard Transform (DWHT), to calculate the texture features of the energy function. The DWHT-based energy function is fast and easy to implement, and hence suitable for real-time applications. We show that the proposed method reduces the execution time while maintaining comparable accuracy, and is consequently more efficient than previous active contour based methods for texture image segmentation. © 2011 Elsevier Ltd. All rights reserved.
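The directional variant (DWHT) is the paper's own construction, but it builds on the standard fast Walsh-Hadamard transform, which needs only additions and subtractions — the source of the speed advantage. A textbook in-place implementation, shown here only as background:

```python
def fwht(vec):
    """Fast Walsh-Hadamard transform (length must be a power of two).
    Uses only additions/subtractions in O(n log n) butterfly passes."""
    a = list(vec)
    n = len(a)
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a
```

Applying the transform twice returns the input scaled by n, a handy self-check for this unnormalised form.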
Wireless sensor networks consist of nodes with low power and limited processing capability, so energy-efficient protocols are essential. In a number of WSN applications, sensor nodes periodically sense data from the environment and transfer it to the sink. Because of the energy limitation, route selection should favor nodes with higher remaining energy in order to increase network lifetime. Most of a node's energy is spent on radio transmission; thus decreasing the number of transferred packets increases node and network lifetimes. In the data transmission algorithms introduced for such networks so far, a single route is used for all transmissions, which depletes the energy of the nodes on that route and in turn shortens network lifetime. In this paper a new method is proposed for selecting the data transmission route that solves this problem. The method is based on learning automata, which select the route with regard to energy parameters and the distance to the sink. With this method, the energy of the network nodes is depleted almost simultaneously, preventing the network from breaking into two separate parts and thereby increasing its lifetime. Simulation results show that this method is very effective in increasing network lifetime. © 2010 IEEE.
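The abstract does not give the update rule, but route selection with learning automata is commonly built on a linear reward-inaction scheme; the sketch below (the reward signal and step size are illustrative assumptions, not the paper's scheme) shifts selection probability toward routes whose use was rewarded, e.g. because the route reported high residual energy:

```python
import random

def lri_update(p, chosen, rewarded, a=0.1):
    """Linear reward-inaction: on reward, move probability mass toward
    the chosen route; on penalty, leave the probabilities unchanged."""
    if not rewarded:
        return list(p)
    return [pi + a * (1.0 - pi) if i == chosen else (1.0 - a) * pi
            for i, pi in enumerate(p)]

def choose_route(p, rng=random.random):
    """Sample a route index from the probability vector p."""
    r, acc = rng(), 0.0
    for i, pi in enumerate(p):
        acc += pi
        if r < acc:
            return i
    return len(p) - 1
```

Because losing routes keep a nonzero probability until rewards accumulate elsewhere, load spreads across routes early on, which matches the goal of draining node energies evenly.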
The Wireless Mesh Network (WMN) is considered an effective solution for supporting multimedia services in the last mile due to its automatic configuration and low-cost deployment. The main feature of WMNs is multi-hop communication, which can result in increased coverage, better robustness, and more capacity. Implemented over wireless media with limited radio range, WMNs bring about many challenges such as fading alleviation, effective medium access control, efficient routing, quality of service provisioning, call admission control, and scheduling. In this paper, the main concepts of scheduling in mesh networks are introduced and its basic techniques in WMNs are reviewed. © 2010 IEEE.
In this paper, the parameters that cause data redundancy and the kinds of data redundancy in wireless sensor networks are defined. A clustering algorithm is then introduced for data reduction, exploiting properties of sensor networks such as each node overhearing its neighbors' data, the correlation between neighbors, and the low rate of change of environmental data. The introduced algorithm makes better use of energy and bandwidth, which are two major constraints in wireless sensor networks. Simulation results show that an improvement of about 30% to 80% in energy consumption is attained by the introduced algorithm for slowly changing environmental data. ©2009 IEEE.
World Academy of Science, Engineering and Technology (20103778) 37pp. 901-904
In this article, a method is offered to classify normal and defective tiles using the wavelet transform and artificial neural networks. The proposed algorithm calculates the maximum and minimum medians as well as the standard deviation and average of the detail images obtained from wavelet filters, forms feature vectors from these statistics, and classifies the given tile using a Perceptron neural network with a single hidden layer. Along with proposing the medians as basic features and comparing them with the other statistical features in the wavelet domain, this study investigates the relative advantages of the Haar wavelet. The method has been tested on a number of various tile designs and, on average, classified over 90% of the cases correctly. Among its other advantages, high speed and low computational load are prominent. © 2009 WASET.ORG.
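As a rough illustration of the feature stage (the unnormalised Haar filter and the exact statistics here are readability-driven assumptions, not the paper's implementation), a single-level 2-D Haar decomposition splits the tile image into one approximation and three detail sub-images, from which statistics such as the mean, standard deviation, and extreme row medians can be taken:

```python
import statistics

def haar_step(v):
    """One 1-D Haar level: pairwise averages (approximation) and
    pairwise differences (detail); input length must be even."""
    a = [(v[i] + v[i + 1]) / 2 for i in range(0, len(v), 2)]
    d = [(v[i] - v[i + 1]) / 2 for i in range(0, len(v), 2)]
    return a, d

def haar_2d(img):
    """Single-level 2-D Haar split into LL (approximation) and
    LH, HL, HH (detail) sub-images."""
    rows = [haar_step(r) for r in img]
    lo = [r[0] for r in rows]   # row-lowpass half
    hi = [r[1] for r in rows]   # row-highpass half

    def column_split(block):
        a_cols, d_cols = [], []
        for j in range(len(block[0])):
            a, d = haar_step([block[i][j] for i in range(len(block))])
            a_cols.append(a)
            d_cols.append(d)
        # transpose back to row-major order
        to_rows = lambda cols: [[cols[j][i] for j in range(len(cols))]
                                for i in range(len(cols[0]))]
        return to_rows(a_cols), to_rows(d_cols)

    LL, LH = column_split(lo)
    HL, HH = column_split(hi)
    return LL, LH, HL, HH

def band_features(band):
    """Statistics of one detail image: average, standard deviation, and
    the largest and smallest row medians."""
    flat = [x for row in band for x in row]
    row_medians = [statistics.median(row) for row in band]
    return [statistics.mean(flat), statistics.pstdev(flat),
            max(row_medians), min(row_medians)]
```

A defect-free flat region produces near-zero detail statistics, while cracks and glaze defects show up as outliers in the detail bands — the contrast the classifier feeds on.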
In query-based sensor networks, the nodes wait to receive a query from the sink and, once they have received it, provide the sink with the required data. One issue in this kind of network is how the location of the responding node is calculated. This requires adding localization mechanisms or localization hardware; the former imposes communication overhead and the latter imposes cost overhead and power consumption, both of which reduce the lifetime of the network. Since in most of these networks the exact location of the nodes is not required and an approximate location is sufficient, a model is presented in this paper that calculates the approximate location of the nodes. In this model, each node can obtain its approximate location from the query packets received from different sinks, without any extra hardware. Thus no communication overhead is needed for calculating the location either. © 2009 IEEE.
Environmental conditions together with energy, bandwidth, storage, and processing constraints pose serious challenges for wireless sensor network applications, especially real-time ones. Thus, providing methods for reliable data transmission at a desirable rate while taking the energy constraint into consideration is of great importance. In this paper, a routing metric is introduced for selecting the intermediate nodes that transmit data between source and destination. This metric takes into account the remaining energy of the node, buffer capacity, transmission delay, and link quality, and it can be applied to directed diffusion. By analysis and simulation, we show its efficiency with respect to network lifetime, load balancing capability, and end-to-end delay reduction. ©2008 IEEE.
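The abstract leaves the exact combination of criteria unspecified; one common way to fuse such criteria is a weighted cost in which every term is normalised so that lower is better. The weights and normalisation below are illustrative placeholders, not the paper's metric:

```python
def route_cost(energy_left, buffer_free, delay, link_quality,
               weights=(0.4, 0.2, 0.2, 0.2)):
    """Hypothetical composite routing cost (lower is better). All four
    inputs are assumed normalised to [0, 1]; energy, buffer space and
    link quality are inverted so that scarcity raises the cost."""
    w_e, w_b, w_d, w_l = weights
    return (w_e * (1 - energy_left) + w_b * (1 - buffer_free)
            + w_d * delay + w_l * (1 - link_quality))

def best_next_hop(neighbors):
    """Pick the neighbor id whose (energy, buffer, delay, quality)
    tuple minimises the composite cost."""
    return min(neighbors, key=lambda n: route_cost(*n[1]))[0]
```

Weighting the energy term most heavily biases forwarding toward well-charged nodes, which is one way such a metric can extend network lifetime and balance load.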
Neural Computing And Applications (09410643) 17(2)pp. 193-200
Optimizing traffic signal control has an essential impact on intersection efficiency in urban transportation. This paper presents a two-stage method for intersection signal timing control. First, the traffic volume is predicted using a neuro-fuzzy network called the Adaptive Neuro-Fuzzy Inference System (ANFIS). The inputs of this network include two-dimensional (hourly and daily) traffic volume correlations. In the second stage, an appropriate signal cycle and optimized timing for each phase of the signal are estimated using a combination of Self-Organizing and Hopfield neural networks. The energy function of the Hopfield network is based on a traffic model derived by queuing analysis. The performance of the proposed method has been evaluated on real data. The two-dimensional correlation presents superior performance compared to the hourly traffic correlation alone. The evaluation of the overall method shows considerable intersection throughput improvement compared to the results obtained from the Synchro software. © 2007 Springer-Verlag London Limited.
Annals of DAAAM and Proceedings of the International DAAAM Symposium (17269679) pp. 345-346
One of the problems in middle-size robot soccer, which also applies to routing scanner robots in unpredictable environments, is leading passing among robots: choosing the best team-mate to receive the ball without any explicit communication among the robots. In this paper we have developed an algorithm based on a Perceptron neural network for this problem, which determines the best passing angle based on the topological data of the play field (i.e. the positions of the robots). With a modification to the Perceptron structure and a proper data presentation approach, a considerable improvement in solution performance has been achieved.
Electric Power Components and Systems (15325016) 31(5)pp. 513-524
This paper presents a new model for the identification of the power system transfer functions. The usual model has been to use the shift operator q, or its equivalent z transform, but this gives inaccurate results with the small sampling times that are now used in modern controllers. It is shown by a comparison that this problem can be resolved by using the delta operator δ instead. This is shown by a multimachine example using both operators. The simulation results show that the delta operator formulation reflects the dynamic behavior of the system more accurately. © 2003 Taylor & Francis.
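The numerical point is easy to reproduce: sampling a continuous pole s with period Δ gives the shift-domain pole q = e^(sΔ), which crowds toward 1 as Δ shrinks, whereas the delta-domain pole δ = (q - 1)/Δ stays close to s itself. A small sketch (the first-order pole value is an arbitrary illustration, not from the paper's multimachine example):

```python
import math

def shift_pole(s, dt):
    """Shift-operator (z-domain) image of a continuous pole s sampled at dt."""
    return math.exp(s * dt)

def delta_pole(s, dt):
    """Delta-operator image of the same pole: delta = (q - 1) / dt."""
    return (shift_pole(s, dt) - 1.0) / dt
```

At Δ = 1e-4 s, a pole at s = -2 maps to q ≈ 0.9998, barely distinguishable from 1 in finite-precision identification, while its delta-operator image stays near -2 and, unlike q, converges to s as Δ → 0 — which is why the delta formulation remains well conditioned at the small sampling times of modern controllers.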