Digital Communications and Networks (23528648) 11(2)pp. 574-586
In mobile computing environments, most IoT devices connected to networks experience variable error rates and possess limited bandwidth. The conventional method of retransmitting lost information during transmission, commonly used in data transmission protocols, increases transmission delay and consumes excessive bandwidth. To overcome this issue, forward error correction techniques, e.g., Random Linear Network Coding (RLNC), can be used in data transmission. The primary challenge in RLNC-based methodologies is that they sustain a fixed coding ratio throughout data transmission, leading to notable bandwidth usage and transmission delay under dynamic network conditions. Therefore, this study proposes a new block-based RLNC strategy known as Adjustable RLNC (ARLNC), which dynamically adjusts the coding ratio and transmission window at runtime based on the network error rate estimated from receiver feedback. The calculations in this approach are performed over a Galois field of order 256. Furthermore, we assessed ARLNC's performance under various error models, such as Gilbert-Elliott, exponential, and constant-rate models, and compared it with the standard RLNC. The results show that dynamically adjusting the coding ratio and transmission window size based on network conditions significantly enhances network throughput and reduces total transmission delay in most scenarios. In contrast to the conventional RLNC method employing a fixed coding ratio, the presented approach demonstrates significant improvements: a 73% decrease in transmission delay and a fourfold increase in throughput. However, in dynamic computational environments, ARLNC generally incurs higher computational costs than the standard RLNC but excels in high-performance networks. © 2024 Chongqing University of Posts and Telecommunications
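The coding step the abstract describes can be sketched as random linear combinations of source packets over GF(2^8). This is a minimal illustration, assuming the AES reduction polynomial 0x11B (the abstract only specifies a field of order 256) and ignoring ARLNC's adaptive coding-ratio logic:

```python
import random

def gf_mul(a: int, b: int) -> int:
    """Multiply in GF(2^8) with reduction polynomial 0x11B.
    (The exact polynomial ARLNC uses is an assumption here.)"""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return p

def linear_combine(coeffs, packets):
    """One coded packet: a GF(2^8) linear combination of the source
    packets (XOR is addition in this field)."""
    out = []
    for i in range(len(packets[0])):
        byte = 0
        for c, pkt in zip(coeffs, packets):
            byte ^= gf_mul(c, pkt[i])
        out.append(byte)
    return bytes(out)

def rlnc_encode(packets, n_coded, rng=random):
    """Draw random coefficient vectors and emit n_coded combinations,
    each tagged with its coefficients so the receiver can decode."""
    return [
        (coeffs, linear_combine(coeffs, packets))
        for coeffs in ([rng.randrange(256) for _ in packets]
                       for _ in range(n_coded))
    ]
```

A receiver that collects as many independent coded packets as there are source packets can recover the originals by Gaussian elimination over the same field; ARLNC's contribution is deciding, from feedback, how many such packets to send.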
Computing (14365057) 107(6)
With the advancement of the Internet of Things (IoT) and the changing needs of edge computing applications within the TCP/IP architecture, several challenges have emerged. One solution to these challenges is to integrate edge computing with information-centric networks (ICN). In ICN-based edge computing, the proximity of users produces a high degree of similarity among computing requests, which can be leveraged to improve the efficiency of computation reuse. Computation reuse occurs through naming, caching, and forwarding. Computation reuse through forwarding means that similar requests are directed to the same compute node (CN). In many past works, forwarding algorithms for computation reuse have incurred high overhead for resource discovery or have ignored the important criterion of assessing the capacity of CNs. In this paper, we propose two forwarding algorithms, named TLCF (Trade-Off Between Load Balancing and Computation Reuse Forwarding) and AFCT (Adaptive Forwarding Considering Capacity Threshold), which select the best CN by trading off computation reuse against load balancing while also considering CN capacity. Because computation reuse inherently disrupts load balancing, managing this trade-off leads to a reduction in completion time. The evaluation was conducted using the ndnSIM simulator. Through simulations, we have shown that our method significantly reduces completion time compared to the default method, achieving an improvement of approximately 22%. These findings highlight the efficiency and potential of our proposed method in optimizing edge computing performance. © The Author(s), under exclusive licence to Springer-Verlag GmbH Austria, part of Springer Nature 2025.
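The CN-selection trade-off can be pictured as a weighted score over reuse potential and current load, with a capacity threshold filtering out saturated candidates. All fields, weights, and thresholds below are invented for illustration; they are not the actual TLCF/AFCT scoring rules:

```python
def select_cn(cns, w_reuse=0.6, w_load=0.4):
    """Pick a compute node by trading off expected computation reuse
    against load, skipping nodes at or over capacity. The dict fields
    and the linear score are hypothetical."""
    eligible = [c for c in cns if c["load"] < c["capacity"]]
    # Higher reuse raises the score; higher relative load lowers it.
    return max(eligible,
               key=lambda c: w_reuse * c["reuse"]
                             - w_load * c["load"] / c["capacity"])
```

With these weights, a node offering slightly less reuse but much more headroom can win, which is exactly the balance the abstract says shortens completion time.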
IEEE Internet of Things Journal (23274662) 11(7)pp. 12815-12822
The Edge-Cloud Computing Industrial Internet of Things (ECIIoT) is composed of edge and cloud nodes together with Industrial Internet of Things (IIoT) devices that request service function chains (SFCs). SFC placement refers to running a series of virtual network functions (VNFs) at edge or cloud nodes in the form of software instances. In the ECIIoT service-embedding problem, multiple VNFs must be placed for IIoT devices, and deciding where to place these virtual functions, at cloud or edge nodes, so as to minimize delay is challenging. In this article, a placement of virtual functions that considers both edge and cloud nodes is proposed. In our model, the cloud server together with edge nodes can run the functions that IIoT devices require in the SFC, decreasing the imposed delay and using computation resources efficiently. This is formulated as an optimization problem that minimizes delay and residual computing resource consumption while reusing previously placed functions. Since an exact solution is not available in polynomial time, an efficient approximation algorithm is proposed that solves the problem in three stages. First, it linearizes the nonlinear objective function and constraint and approximates them using the convexity of these functions. Then, it solves the relaxed linear problem, and finally, it rounds the decision variables heuristically. This solution not only has polynomial computational complexity but also obtains a near-optimal solution. The simulation results confirm the effectiveness of this approach. © 2014 IEEE.
Computing (14365057) 106(9)pp. 2949-2969
In edge computing, repetitive computations are a common occurrence. However, the traditional TCP/IP architecture used in edge computing fails to identify these repetitions, so redundant computations are recomputed by edge resources. To address this issue and enhance the efficiency of edge computing, Information-Centric Networking (ICN)-based edge computing is employed. The ICN architecture leverages its forwarding and naming features to recognize repetitive computations and direct them to the appropriate edge resources, thereby promoting “computation reuse”. This approach significantly improves the overall effectiveness of edge computing. In the realm of edge computing, dynamically generated computations often experience prolonged response times. To establish and track connections between input requests and the edge, naming conventions become crucial. However, by incorporating unique IDs within these naming conventions, each computing request with identical input data is treated as distinct, rendering ICN’s aggregation feature unusable. In this study, we propose a novel approach that modifies the Content Store (CS) table so that computing requests with the same input data but different unique IDs, which produce identical outcomes, are treated as equivalent. The benefits of this approach include reduced distance and completion time and an increased hit ratio, as duplicate computations are no longer routed to edge resources but are served from the cache instead. Through simulations, we demonstrate that our method significantly enhances cache reuse compared to the default method with no reuse, achieving an average improvement of over 57%; the resulting speed-up amounts to 15%. Notably, our method surpasses previous approaches by exhibiting the lowest average completion time, particularly at lower request frequencies. These findings highlight the efficacy and potential of our proposed method in optimizing edge computing performance. 
© The Author(s), under exclusive licence to Springer-Verlag GmbH Austria, part of Springer Nature 2024.
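The CS-table idea above — treating requests that differ only in their unique ID as the same computation — can be sketched by stripping the ID before the cache lookup. The name layout `/compute/<function>/<input-hash>/<uid>` is an assumption for illustration, not the paper's actual naming scheme:

```python
def reuse_key(name: str) -> str:
    """Strip the per-request unique ID (assumed to be the last name
    component) so requests carrying identical input data map to the
    same Content Store entry."""
    parts = name.strip("/").split("/")
    return "/" + "/".join(parts[:-1])

class ReuseContentStore:
    """Toy CS keyed on the ID-stripped name: a second request with the
    same input hits the cached result instead of being re-forwarded
    to an edge resource."""
    def __init__(self):
        self.store = {}
        self.hits = 0

    def lookup_or_compute(self, name, compute):
        key = reuse_key(name)
        if key in self.store:
            self.hits += 1          # duplicate computation reused
        else:
            self.store[key] = compute()
        return self.store[key]
```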
Cluster Computing (13867857) 27(4)pp. 4537-4550
Connected devices in the IoT continuously generate monitoring and measurement data to be delivered to application servers or end-users. Transmitting all IoT data through the network would lead to congestion and long delays. NDN is an emerging network paradigm based on name-identified data and is known to be an appropriate architecture for supporting IoT networks. In-network caching is one of the main advantages of NDN and a major issue discussed in many studies. A significant challenge for some IoT data is their transient nature, which changes the requirements of the caching mechanism. IoT data such as ambient monitoring in urban areas and tracking of current traffic conditions are often transient, meaning these data have a limited lifetime and then expire. In the proposed approach, data placement is decided based on the data lifetime and node position. Data lifetime is an essential property that must be involved in caching methods; consequently, the data are classified by lifetime, and specific nodes are selected for caching according to the defined classes and the nodes’ positions in the topology. In the proposed scheme, the nodes with the highest outgoing interface count or the edge nodes are selected for data caching. By considering both data lifetime and node location, we determine a suitable caching location for each data class separately. In addition, we exclude data whose lifetime is too short for caching from the caching mechanism of NDN nodes. By considering both cache and data placement for transient data, a more comprehensive view of improving caching performance is obtained. This issue, which has not been addressed in the available studies on IoT data caching, can lead to appropriate use of available storage and reduced redundancy. Eventually, simulation results obtained with the ndnSIM simulator show the proposed method improves cache mechanism efficiency in terms of both delay and hit ratio. 
Comparison results of the proposed method with CE2 and Btw indicate that this method can provide a reduction in average delay and an increase in cache hit ratio. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023.
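The placement rule described above — do not cache very transient data at all, and prefer edge nodes or nodes with many outgoing interfaces for the rest — can be sketched as a small decision function. The lifetime and degree thresholds are invented for illustration; the paper derives its classes differently:

```python
def caching_decision(lifetime_s, node_outdegree, is_edge,
                     short=2.0, degree_threshold=3):
    """Illustrative placement rule in the spirit of the scheme:
    data expiring almost immediately are excluded from caching;
    longer-lived data are cached at edge nodes or at nodes with a
    high outgoing-interface count."""
    if lifetime_s < short:
        return "no-cache"        # too transient to be worth a CS slot
    if is_edge or node_outdegree >= degree_threshold:
        return "cache"
    return "forward-only"        # let a better-placed node cache it
```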
Engineering Applications of Artificial Intelligence (09521976) 136
The integration of cyber-physical systems and AI-based human activity recognition (HAR) applications enables intelligent interactions within a physical environment. In real-world HAR applications, the domain shift between training (source) and testing (target) images captured in different scenarios leads to low classification accuracy. Existing unsupervised domain adaptation (UDA) methods often require some labeled target samples for model adaptation, which limits their practicality. This study proposes a novel unsupervised deep domain adaptation algorithm (UDDAA) for HAR using recurrent neural networks. UDDAA introduces a maximum mean class discrepancy (MMCD) metric that accounts for both inter-class and intra-class differences within each domain. MMCD extends the maximum mean discrepancy to measure the class-level distribution discrepancy across source and target domains, aligning these distributions to enhance domain adaptation performance. Without relying on labeled target data, UDDAA predicts pseudo-labels for the unlabeled target dataset and combines these with labeled source data to train the model on domain-invariant representations. This makes UDDAA highly practical for scenarios where labeled target data is difficult or expensive to obtain, enabling human-computer interaction (HCI) systems to function effectively across varied environments and user behaviors. Extensive experiments on benchmark datasets demonstrate UDDAA's superior classification accuracy over existing baselines. Notably, UDDAA achieved 92% and 99% accuracy for the University of Central Florida (UCF) to Human Motion Database (HMDB) and HMDB to UCF transfers, respectively. Additionally, on personally recorded videos with complex backgrounds, it achieved high classification accuracies of 95% for basketball and 90% for football activities, underscoring its generalization ability, robustness, and effectiveness. © 2024
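The base ingredient of MMCD — the maximum mean discrepancy — can be sketched with a linear kernel, plus a class-conditional term computed from target pseudo-labels. The weighting below is a simplification for illustration; the paper's exact MMCD formulation (including its intra-class terms) is not reproduced:

```python
def mmd_linear(xs, xt):
    """Linear-kernel maximum mean discrepancy: squared distance
    between the feature means of the two samples."""
    d = len(xs[0])
    mean = lambda rows, j: sum(r[j] for r in rows) / len(rows)
    return sum((mean(xs, j) - mean(xt, j)) ** 2 for j in range(d))

def mmcd(xs, ys, xt, yt_pseudo, classes):
    """Hedged sketch of a class-level discrepancy: the global MMD plus
    the mean of per-class MMDs computed with target pseudo-labels."""
    per_class = []
    for c in classes:
        s = [x for x, y in zip(xs, ys) if y == c]
        t = [x for x, y in zip(xt, yt_pseudo) if y == c]
        if s and t:  # skip classes missing from either domain
            per_class.append(mmd_linear(s, t))
    return mmd_linear(xs, xt) + sum(per_class) / max(len(per_class), 1)
```

Minimizing such a quantity during training pulls the source and target feature distributions together class by class, which is the alignment effect the abstract describes.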
Computer Networks (13891286) 224
The traditional way of saving energy in Virtual Machine Placement (VMP) is to consolidate more virtual machines (VMs) onto fewer servers and put the rest into sleep mode, which may overheat servers, resulting in performance degradation and higher cooling cost. The lack of an accurate and computationally efficient model of the thermal conditions of the data center environment makes it challenging to develop an effective and adaptive VMP mechanism. Although data-driven approaches have recently been successful in model construction, the shortage of clean and sufficient data limits their generalizability. Moreover, any change in the data center configuration during operation makes these models prone to error and forces them to repeat the learning process. Thus, researchers have turned to model-free paradigms such as reinforcement learning. Due to the vast action-state space of real-world applications, scalability is one of the significant challenges in this area. In addition, the delayed feedback of environmental variables such as temperature gives rise to exploration costs. In this paper, we present a decentralized implementation of reinforcement learning along with a novel state-action representation to perform VMP in data centers, optimizing energy consumption and keeping host temperatures as low as possible while satisfying Service Level Agreements (SLAs). Our experimental results show more than 17% improvement in energy consumption and a 12% reduction in CPU temperature compared to baseline algorithms. We also succeeded in accelerating convergence to the optimal policy after a configuration change. © 2023 Elsevier B.V.
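The model-free building block alluded to above can be sketched as one tabular Q-learning step. The paper's decentralized state-action representation for VMP is not reproduced here; states and actions are opaque keys, and the learning-rate and discount values are generic defaults:

```python
def q_update(q, s, a, reward, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    q is a dict keyed by (state, action); missing entries default to 0."""
    best_next = max(q.get((s_next, an), 0.0) for an in actions)
    old = q.get((s, a), 0.0)
    q[(s, a)] = old + alpha * (reward + gamma * best_next - old)
    return q[(s, a)]
```

In a decentralized VMP setting each agent would maintain such a table (or an approximation of it) over its local placement decisions, with rewards shaped by energy and temperature measurements.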
Journal of the Chinese Institute of Engineers, Transactions of the Chinese Institute of Engineers,Series A (02533839) 46(2)pp. 107-117
Wireless channels have a broadcast nature, which opportunistic routing protocols exploit. In opportunistic routing protocols, packets are forwarded by the intermediate nodes that hear their transmissions, called candidate forwarders. In most of these algorithms, the forwarder list is pre-determined; however, energy-efficient selection of the forwarder list is a research topic that has not been well studied. This paper presents the Energy Efficient OPportunistic Routing algorithm (EEOPR), in which forwarders are determined on the fly, per packet. EEOPR is a flexible method that performs routing locally: candidate forwarders are selected during routing and for each packet. The selection of candidate nodes and their packet forwarding are managed by a genetic algorithm according to the nodes’ remaining energy and their regions. Simulation results show that network performance is improved in EEOPR compared to ROMER and CORP-M in terms of throughput, the number of duplicate packets, and the network nodes’ residual energy. © 2023 The Chinese Institute of Engineers.
Cluster Computing (13867857) 25(2)pp. 1015-1033
The remarkable growth of cloud computing applications has caused many data centers to encounter unprecedented power consumption and heat generation. Cloud providers share their computational infrastructure through virtualization technology. The scheduler component decides which physical machine hosts a requested virtual machine. This process is virtual machine placement (VMP), which affects the power distribution and thereby the energy consumption of the data center. Due to the heterogeneity and multidimensionality of resources, this task is not trivial, and many studies have tried to address it using different methods. However, the majority of such studies fail to consider the cooling energy, which accounts for almost 30% of the energy consumption in a data center. In this paper, we propose a metaheuristic approach based on a binary version of the gravitational search algorithm to simultaneously minimize the computational and cooling energy in the VMP problem. In addition, we suggest a self-adaptive mechanism based on fuzzy logic to control the behavior of the algorithm in terms of exploitation and exploration. The simulation results illustrate that the proposed algorithm reduced energy consumption by 26% on the PlanetLab dataset and 30% on the Google cluster dataset relative to the average of the compared algorithms. The results also indicate that the proposed algorithm provides much more thermally reliable operation. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
Digital Communications and Networks (23528648) 8(6)pp. 1085-1093
Accurately estimating the Retransmission TimeOut (RTO) in Content-Centric Networking (CCN) is crucial for efficient rate control in end nodes and effective interface ranking in intermediate routers. Toward this end, the Jacobson algorithm, an Exponentially Weighted Moving Average (EWMA) over the Round Trip Times (RTTs) of previous packets, is a promising scheme. The lower bound assigned to the RTO, how rapidly the EWMA adapts to changes, and the multiplier of the RTT variance have the greatest impact on the accuracy of this estimator, and several evaluations have been performed to set them in Transmission Control Protocol/Internet Protocol (TCP/IP) networks. However, the performance of this estimator in CCN has not been explored yet, despite CCN's significant architectural differences from TCP/IP networks. In this study, two new metrics for assessing the performance of RTO estimators in CCN are defined, and the performance of the Jacobson algorithm in CCN is evaluated. This evaluation is performed by varying the minimum RTO, the EWMA parameters, and the multiplier of the RTT variance against different content popularity distribution gains. The obtained results are used to revise the Jacobson algorithm for accurately estimating the RTO in CCN. Comparing the performance of the revised Jacobson estimator with existing solutions shows that it estimates the RTO simply and more accurately without any additional information or computation overhead. © 2022 Chongqing University of Posts and Telecommunications
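The estimator under study is the classic Jacobson/Karels scheme, sketched below with the TCP defaults from RFC 6298 (alpha = 1/8, beta = 1/4, K = 4, 1-second floor). These are exactly the knobs the paper tunes for CCN, so treat the constants as placeholders:

```python
class JacobsonRTO:
    """EWMA-based RTO estimator: SRTT and RTTVAR are smoothed per
    RTT sample, and RTO = SRTT + K * RTTVAR, clamped to min_rto."""
    def __init__(self, alpha=1/8, beta=1/4, k=4, min_rto=1.0):
        self.alpha, self.beta, self.k, self.min_rto = alpha, beta, k, min_rto
        self.srtt = None
        self.rttvar = None

    def update(self, rtt: float) -> float:
        if self.srtt is None:            # first sample (RFC 6298 rule)
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            self.rttvar = ((1 - self.beta) * self.rttvar
                           + self.beta * abs(self.srtt - rtt))
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt
        return max(self.min_rto, self.srtt + self.k * self.rttvar)
```

Lowering `min_rto`, changing `alpha`/`beta` (adaptation speed), or changing `k` (the variance multiplier) are the three adjustments whose CCN-specific effects the paper evaluates.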
Computer Networks (13891286) 213
In-network caching is one of the most prominent features of Information-Centric Networks (ICN). This feature, designed to reduce content delivery time, can improve data availability, especially while the original data producer is in sleep mode. However, the fact that caching is performed simultaneously with data forwarding, together with the limited processing power and memory capacity of routers, has made it challenging to profit fully from this feature. The synchronization of the caching mechanism with data forwarding limits the speed of decision-making in content placement policies. These challenges, as well as the limitations of ICN routers, have not been taken into consideration by the existing strategies for content placement of IoT data. In this paper, the restrictions on content placement policies in ICN-based IoT are outlined, and a data prioritization-based approach that prevents ICN nodes from filling their Content Store (CS) with inappropriate and inferior data is proposed to improve cache efficiency. In addition, fog computing is used as a middle layer between IoT devices and ICN networks to resolve the speed limits of decision-making and improve cache performance. In the fog layer, prioritization is performed based on the popularity and freshness of data objects, and low-priority data are thus removed from the caching mechanism of ICN nodes; as a result, the total number of cached data objects is reduced. Freshness is an essential property of much IoT data that significantly affects caching performance, which is why we prioritize data based on both popularity and freshness. Eventually, simulation results obtained with the ndnSIM simulator show that decreasing the number of data objects improves cache mechanism efficiency in terms of both delay and hit ratio. © 2022 Elsevier B.V.
Applied Soft Computing (15684946) 128
In cluster-based sensor networks, the sensor nodes in each cluster send their collected data to a cluster head, which aggregates and forwards them to a sink node. Data transmission from a cluster head to the sink node can be done in a multi-hop fashion through other cluster heads. Hence, two problems need to be addressed: the selection of cluster heads, and optimal multi-hop routing. In previous studies, these two problems have been solved separately in two independent phases. This paper proposes a novel approach that solves them simultaneously in order to increase the network lifetime. In the proposed scheme, the cluster head's role in transmitting inter-cluster traffic is considered during the cluster head selection process. In other words, cluster heads are selected in a way that reduces the energy consumed in transmitting data from a cluster head to the sink node. To achieve this goal, a genetic algorithm is used at two levels: the first-level genetic algorithm selects the cluster heads, while the second-level one handles multi-hop routing among them. Simulation of the proposed method and comparison of its results with three previously proposed schemes that solve the problems separately indicate the superiority of the proposed optimization scheme in improving the lifetime of the network. © 2022 Elsevier B.V.
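A two-level genetic algorithm like the one described typically encodes cluster-head selection as a membership bitmap and evolves it with standard operators. The sketch below shows only one such operator, one-point crossover; the paper's actual encoding, fitness, and second-level routing GA are not reproduced:

```python
import random

def crossover(parent_a, parent_b, rng=random):
    """One-point crossover on cluster-head membership bitmaps
    (1 = node acts as cluster head). Both children inherit a prefix
    from one parent and a suffix from the other."""
    point = rng.randrange(1, len(parent_a))
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])
```

In the paper's scheme, the fitness of such a bitmap would also charge candidate heads for the inter-cluster traffic they relay, which is what couples the two optimization levels.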
IEEE Transactions on Network and Service Management (19324537) 18(3)pp. 3918-3932
Congestion control plays a vital role in the health of any network; thus, designing an effective congestion control mechanism for NDN is an active research area. In NDN, congestion control and Interest forwarding are usually considered an integrated plane for network traffic management. In this paper, we propose to decouple the forwarding and congestion control planes. We offer a novel architecture for the NDN router in which the congestion control and forwarding modules cooperate for Interest flow management and congestion control. Based on this framework, we introduce a new explicit feedback-based congestion control protocol (3CP) for managing consumers' sending rates in an NDN environment in which multi-source and multi-path content delivery is intrinsically supported. 3CP employs a new per-packet feedback computation to inform consumers about the available resources on paths toward repositories. It also provides feedback to a router's forwarding mechanism for adjusting the sending rate of Interest packets on each interface. Through Interest flow management, 3CP controls the traffic of Data packets in the network. As a significant achievement, 3CP prevents multi-path flows from acquiring more network resources and manages fair resource allocation to the flows in each path, so consumers obtain the same throughput. Packet-level simulation was conducted with an NDN simulator, and the results confirmed that 3CP outperforms the existing multi-source/multi-path congestion control mechanisms in NDN. © 2004-2012 IEEE.
Wireless Personal Communications (09296212) 119(2)pp. 1541-1575
Unmanned Aerial Vehicles (UAVs) are well-developed technologies that were first utilized for military applications such as border monitoring and reconnaissance in hostile territories. With the advancement of the Internet of Things (IoT) systems and smart mobile devices, several applications in various industrial, agricultural, smart homes, smart cities, smart transportation, etc. domains have emerged. These applications usually require broad coverage, high energy consumption, computation-intensive processing, and access to rich data gathered by sensor devices. UAVs’ inherent features such as high dynamicity, low deployment and operational costs, quick deployment, and line of sight communication have motivated researchers in the IoT domain to consider UAVs integration into IoT systems toward the notion of UAV-assisted IoT systems. In this paper, recent literature on UAV-assisted services in IoT environments is studied. A service-oriented classification is applied in order to categorize the presented schemes into four broad domains of UAV-assisted data-related services, UAV-assisted battery charging, UAV-assisted communications, and UAV-assisted Mobile Edge Computing (MEC). The literature belonging to each category is summarized with respect to their main points. Finally, some possible future directions are discussed to highlight the challenges associated with designing UAV-assisted IoT systems. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC part of Springer Nature.
Industrial control systems (ICS) are applied in many critical infrastructures. Reducing reconfiguration time after a hazard improves safety, so it is one of the most important objectives in these systems. Hazards can be due to system failure or cyber-attacks. One procedure that can reduce reconfiguration time is determining, as soon as possible, which of these factors caused the hazard. Differentiating an attack from a failure is not possible without redundant data in addition to the data from the system's sensors. With the advent of the IoT in industry as the IIoT, the conditions now exist to provide the required redundant data; however, as the number of IIoT devices within a factory increases, the generated data volume becomes very large. In this paper, we describe a fog-based approach applied in a factory to deal with such increasing complexity. We compare the proposed method with a traditional cloud-based solution. According to the results, the proposed method leads to a 60% reduction in lost time during the recovery reconfiguration step of the system. © 2020 IEEE.
Yazdinejad, A., Parizi, R.M., Bohlooli, A., Dehghantanha, A., Choo, K.R.
Journal of Network and Computer Applications (10848045) 156
The emergence of new network technologies and users' ever-increasing demands necessitate highly programmable hardware with high flexibility and performance at the network data plane. Switches at the data plane need to be flexible enough to support new protocols and test new ideas for raising the abstraction of network programming. In most studies, Field Programmable Gate Arrays (FPGAs) are applied to make switches flexible and reprogrammable; however, applying FPGAs alone does not provide the required flexibility. In addition to FPGAs, an architecture is needed that allows developers to forego FPGA implementation details and the complexity of a hardware description language (HDL). In this paper, a new architecture for a programmable packet processor with high flexibility and programmability at the network data plane is presented, which supports all three operations required in switches: parsing, classification, and processing of packet data. To implement this architecture, the high-level P4 language is used to describe its register-transfer level (RTL) design on the FPGA. To increase processing speed, a pipelined approach is designed within the proposed architecture at the SDN data plane through pre-processing in the parse graph, identifying the traffic flow, and applying a hybrid control-flow model in the data processing. The results show that our architecture operates at a 320 MHz clock speed, which, in comparison with the NetFPGA-10G, NetFPGA-SUME, and ML605 peer architectures, is 2, 1.28, and 2.9 times faster in terms of processing speed, attesting to its efficiency. In addition, evaluation on the Virtex-7 FPGA VC709 platform shows that our architecture consumes approximately 4.3% of the lookup tables, 1.9% of the flip-flops, and 1.3% of the memory blocks, less than the hardware resource consumption of peer architectures. © 2020 Elsevier Ltd
Future Generation Computer Systems (0167739X) 106pp. 518-533
Transport control in the Named Data Networking (NDN) architecture is a challenging task. The lack of end-to-end communications in this architecture makes traditional, timeout-driven transport control schemes inefficient and wasteful. Hop-by-hop transport control is an alternative solution that, because of NDN's stateful forwarding plane, can be applied more easily than in IP networks. Most existing solutions in this direction assume known link bandwidths and Data packet sizes, or require a loop-free multipath forwarding strategy to work well; however, these assumptions do not always hold, and no loop-free multipath forwarding strategy exists among the current forwarding strategies for NDN. In this paper, a Responsibility-based Transport Control (RTC) protocol for NDN is proposed. This protocol does not make strong assumptions about the network and avoids looping paths by applying a window-based rate control mechanism and a capacity-aware multipath forwarding strategy on each face. In RTC, routers maintain a congestion window on each face and decide whether to accept or refuse responsibility for forwarding a newly received Interest packet by exchanging three new control packets. These control packets provide reliable information for managing the congestion windows and for capacity-aware traffic splitting in routers. They also enable diverse deployment scenarios for NDN such as IP-overlay and wireless links. RTC is implemented in ndnSIM, and its capability in managing congestion, achieving high throughput, and providing flow fairness is demonstrated through extensive simulations. © 2020 Elsevier B.V.
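The per-face window idea can be sketched with a generic AIMD-style stand-in: a router accepts responsibility for a new Interest only while the face's in-flight count is under its window, growing the window on delivered Data and halving it on refusals or other negative feedback. The actual update rules and control packets of RTC may differ; this only illustrates the mechanism class:

```python
class FaceWindow:
    """AIMD-style congestion window maintained per face."""
    def __init__(self, cwnd=1.0, min_cwnd=1.0):
        self.cwnd = cwnd
        self.min_cwnd = min_cwnd
        self.in_flight = 0

    def can_accept(self):
        """Take responsibility for forwarding another Interest?"""
        return self.in_flight < self.cwnd

    def on_send(self):
        self.in_flight += 1

    def on_data(self):
        """Successful delivery: additive increase (~ +1 per window)."""
        self.in_flight -= 1
        self.cwnd += 1.0 / self.cwnd

    def on_congestion(self):
        """Refusal or congestion signal: multiplicative decrease."""
        self.cwnd = max(self.min_cwnd, self.cwnd / 2)
```

Splitting traffic across faces in proportion to each face's current window is one simple way such per-face state supports capacity-aware multipath forwarding.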
Multimedia Tools and Applications (13807501) 79(43-44)pp. 32999-33021
Distributed Video Coding (DVC), with its low computational complexity at the encoder side, has high potential for use in Wireless Multimedia Sensor Networks (WMSNs). However, the different architecture of this coding scheme and the resource constraints of WMSNs require the design of new, efficient protocols for transmitting DVC over WMSNs. Within these protocols, error control mechanisms are especially important for reliable multimedia communication over WMSNs: they provide higher video quality at receiver nodes while saving the energy of sender nodes through reliable packet transmission. Given the importance of this issue, in this paper we propose an adaptive, cross-layer error control scheme to protect video frames in the transmission of DVC over WMSNs, which serves QoS while respecting energy consumption and frame delay constraints. To design this scheme, we used the results from our previous works on the error resiliency of DVC and a comparative performance analysis of error control methods for this codec. The proposed scheme has been analyzed and compared to all standard, layer, and multi-layer error control schemes against the most important criteria in video communication over WSNs, such as energy consumption, delay, and PSNR. Simulation results show that this scheme preserves video quality under different channel conditions while consuming the least possible amount of energy, subject to the maximum allowable delay at the receiver. © 2020, Springer Science+Business Media, LLC, part of Springer Nature.
IEEE Access (21693536) 8pp. 206942-206957
The safety and security of the Industrial Control Systems (ICS) applied in many critical infrastructures are essential. In these systems, hazards can be due either to system failure or to cyber-attacks. Accurate hazard detection and reduced reconfiguration time after a hazard are among the most important objectives in these systems. One procedure that can reduce reconfiguration time is determining the cause of a hazard and, based on the aforementioned factors, adopting the best commands at reconfiguration time. However, it is difficult to differentiate between types of hazard because their effects on the system can be similar. With the advent of the IoT into ICS, known as the IIoT, it has become possible to differentiate hazards by drawing on data from different IIoT sensors in the environment. In this article, we propose a risk management approach that identifies hazards based on the physical nature of these systems with support from the IIoT. The identified hazards fall into four categories: stealthy attack, random attack, transient failure, and permanent failure. The reconfiguration process is then run based on the proposed differentiation, which yields better performance and reconfiguration time. In the experimental section, a fluid storage system is simulated, showing 97% correct differentiation of hazards and a 60% reduction in lost time during the system's recovery reconfiguration. © 2013 IEEE.
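The four hazard categories can be pictured as a decision over two axes: whether the anomaly persists over time, and whether redundant IIoT sensor readings disagree with the primary system sensors. Both boolean features below are invented stand-ins for the physical evidence the paper's method actually evaluates; this is a toy decision table, not the proposed classifier:

```python
def classify_hazard(persistent: bool, sensors_disagree: bool) -> str:
    """Toy mapping onto the paper's four hazard classes.
    sensors_disagree: redundant IIoT readings contradict the plant's
    own sensors (taken here as a crude sign of tampering)."""
    if sensors_disagree:
        return "stealthy attack" if persistent else "random attack"
    return "permanent failure" if persistent else "transient failure"
```

Routing the reconfiguration logic on such a label, instead of a generic "hazard" flag, is what lets the recovery step pick commands suited to the actual cause.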
Applied Soft Computing (15684946) 89
An energy-efficient and robust face detection and recognition scheme can be useful in many application fields, such as security and surveillance, in multimedia and visual sensor networks (VSNs). A VSN consists of wireless resource-constrained nodes equipped with low-energy CMOS cameras for monitoring. On the one hand, captured images are meaningful multimedia data whose processing and transmission impose high energy consumption. On the other hand, a visual sensor (VS) is a battery-powered node with a limited lifetime. This situation leads to a trade-off between detection accuracy and power consumption, which is the major challenge for applications using multimedia data in wireless environments such as VSNs. To optimize this trade-off, a novel face detection and recognition scheme based on VSNs is proposed in this paper. In this scheme, the detection phase is performed at the VS and the recognition phase is accomplished at the base station (sink). The contributions of this paper are threefold: 1. a fast and energy-aware face-detection algorithm based on omitting non-human blobs and applying feature-based face detection to the remaining human blobs; 2. a novel energy-aware and secure algorithm for extracting a lightweight discriminative vector of the detected face sequence, sent to the sink with low transmission cost and a high security level; 3. an efficient face recognition algorithm performed on the received vectors at the sink. The performance of our proposed scheme has been evaluated in terms of energy consumption and detection and recognition accuracy. Experimental results on standard datasets (FERET, Yale, and CDnet) and on personal datasets demonstrate the superiority of our scheme over recent state-of-the-art methods. © 2019
Microelectronics Journal (00262692) 101
One of the problems with the Controller Area Network (CAN) is response-time jitter caused by the bit-stuffing mechanism. In real-time cases where timing accuracy is important, jitter may noticeably degrade the quality of the control procedure. A new method based on XOR masking, named Statistical Mask Calculation (SMC), is presented in this study. The method uses the statistical parameters of the data to generate a proper mask for each case. The performance of the proposed method in reducing the number of stuff bits is evaluated on a real data set and compared with the original XOR masking. The results indicate that applying the proposed method to the case study increases the probability of zero stuff bits by up to 46% compared with the original XOR mask. It should be noted that the proposed technique is more effective for systems whose traffic is largely predictable; therefore, the adoption of this technique depends on the target application. © 2020 Elsevier Ltd
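The interaction between CAN bit stuffing and an XOR mask can be illustrated with a small sketch (illustrative Python; the payload and mask below are hypothetical, and the statistical mask derivation of SMC itself is not reproduced):

```python
def count_stuff_bits(bits):
    """Count the stuff bits CAN would insert: one complementary bit
    after every run of 5 identical consecutive bits."""
    count, run, prev = 0, 0, None
    for b in bits:
        if b == prev:
            run += 1
        else:
            prev, run = b, 1
        if run == 5:
            count += 1
            # the inserted stuff bit is the complement, so counting restarts from it
            prev, run = 1 - b, 1
    return count

def xor_mask(bits, mask):
    """Apply an XOR mask bitwise before transmission (receiver reverses it)."""
    return [b ^ m for b, m in zip(bits, mask)]

payload = [1] * 8 + [0] * 8          # long runs trigger stuffing
mask = [1, 0] * 8                    # alternating mask breaks the runs
print(count_stuff_bits(payload))                   # stuff bits before masking
print(count_stuff_bits(xor_mask(payload, mask)))   # stuff bits after masking
```

A mask that breaks up long runs of identical bits, which SMC aims to derive per message from data statistics, removes the stuff bits and with them the response-time jitter they cause.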
IEEE Access (21693536) 8pp. 133817-133826
In this paper, a new approach is introduced to reduce bit stuffing, and consequently the residual error probability, in the Controller Area Network (CAN). The proposed method is based on XOR masking. Unlike the original XOR method, the proposed approach does not use a fixed mask for all IDs. Using statistical parameters of the data, a proper mask is generated for each CAN ID and applied to messages before transmission. The performance of the method in reducing bit stuffing and residual error probability has been evaluated on a real data set. Results show that the method can significantly reduce both bit stuffing and residual error probability. A comparison has also been conducted with previously reported methods. The results show the superiority of this Statistical Mask Calculation (SMC) method in reducing residual error without reducing the payload or the data transfer rate. © 2013 IEEE.
Microelectronics Journal (00262692) 85pp. 62-71
Carbon Nanotube Field-Effect Transistors (CNFETs) are among the most promising candidates to replace current semiconductor technology. The challenges facing this newly introduced nanotechnology, such as metallic carbon nanotubes (CNTs) and misaligned or mispositioned CNTs, are obstacles to the mass production of CNFET-based circuits. In the present article, the correlation between CNFET-based circuit design methods and the occurrence of misaligned and mispositioned CNTs in the fabrication phase is first assessed, and then an approach is proposed to eliminate the effects of this challenge. The method is introduced at the design level, is immune to misaligned and mispositioned CNTs, and, owing to the simplicity of its layout, is compatible with recent techniques for eliminating metallic CNTs, in the sense that applying such techniques does not require layout changes. To evaluate the circuit parameters of circuits designed with the proposed method, together with their tolerance to variations in CNT diameter, CNT density, and supply voltage, a full adder is designed based on the proposed method. Extensive simulations confirm the efficiency of the proposed method and the improvement of circuit parameters compared to previous studies. © 2019 Elsevier Ltd
International Journal of Communication Systems (10991131) 32(18)
Information-centric networking (ICN) has emerged as a promising candidate for designing content-based future Internet paradigms. ICN increases the utilization of a network through location-independent content naming and in-network content caching. In routers, the cache replacement policy determines which content is replaced when cache free space runs short; thus, it directly influences user experience, especially content delivery time. Meanwhile, content can be provided from different locations simultaneously because of the multi-source property of content in ICN. To the best of our knowledge, no work has yet studied the impact of the cache replacement policy on content delivery time considering multi-source content delivery in ICN, an issue addressed in this paper. As our contribution, we analytically quantify the average content delivery time when different cache replacement policies, namely least recently used (LRU) and random replacement (RR), are employed. As a notable result, we characterize which policy is superior as a function of the popularity distribution of the contents. The expected content delivery time in a supposed network topology was studied by both theoretical and experimental methods. On the basis of the obtained results, some interesting findings on the performance of the studied cache replacement policies are provided. © 2019 John Wiley & Sons, Ltd.
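The two replacement policies compared above can be contrasted with a toy single-cache simulation (illustrative Python, not the paper's analytical model or its multi-source setting; the catalog size, cache size, and Zipf-like popularity weights are assumptions):

```python
import random
from collections import OrderedDict

def simulate(policy, requests, cache_size):
    """Return the hit ratio of one cache under the given replacement policy."""
    cache = OrderedDict()
    hits = 0
    for c in requests:
        if c in cache:
            hits += 1
            if policy == "LRU":
                cache.move_to_end(c)               # refresh recency on a hit
        else:
            if len(cache) >= cache_size:
                if policy == "LRU":
                    cache.popitem(last=False)      # evict least recently used
                else:                              # RR: evict a random entry
                    del cache[random.choice(list(cache))]
            cache[c] = True
    return hits / len(requests)

# Zipf-like popularity: content i is requested with weight 1/i
random.seed(0)
catalog = list(range(1, 201))
weights = [1 / i for i in catalog]
requests = random.choices(catalog, weights=weights, k=20000)
print(simulate("LRU", requests, 20), simulate("RR", requests, 20))
```

With a skewed popularity distribution like this one, LRU keeps the popular head of the catalog resident and typically achieves a higher hit ratio than RR; the paper's point is that which policy wins depends on that distribution.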
The RPL protocol was designed for routing in Internet of Things (IoT) networks. This protocol may come under attack. One of the attacks on the RPL protocol is the sinkhole attack, in which an attacker tries to attract nearby nodes so that many nodes route their traffic through the attacker node. In previous methods for detecting a sinkhole attack in the RPL protocol, the accuracy of the detection parameter has been important. In the present study, by providing a local detection method called DEEM and reducing the energy-consumption overhead associated with detection, a proper detection accuracy was also obtained. DEEM has two phases in each node, called the Information Gathering and Detection phases. We implemented DEEM on Contiki OS and evaluated it using the Cooja simulator. Our assessment shows that, in the simulated scenarios, DEEM has a low overhead in terms of energy consumption, a high true positive rate, and a good detection speed, and that it is a scalable method. The overhead cost of DEEM is small enough for it to be deployed in resource-constrained nodes. © 2019 IEEE.
Multimedia Tools and Applications (13807501) 78(13)pp. 18921-18941
Energy efficiency in visual surveillance is the most important issue for wireless multimedia sensor networks (WMSNs) due to their energy constraints. This paper addresses the trade-off between detection accuracy and power consumption by presenting an energy-aware scheme for detecting moving targets in a clustered WMSN. The contributions of this paper are as follows: 1. an adaptive clustering and node-activation approach based on the residual energy of the detecting nodes and the location of the object in the camera's field of view (FoV); 2. an effective cooperative features-pyramid construction method for collaborative target identification with low communication cost; 3. an in-network collaboration mechanism for cooperative detection of the target. The performance of this scheme is evaluated, using both standard datasets and personally recorded videos, in terms of detection accuracy and power consumption. Compared with state-of-the-art methods, our proposed strategy greatly reduces energy consumption, saving more than 65% of the network energy. The detection-accuracy rate of our strategy is 11% better than that of other recent works. We increased the precision of classification by up to 49% and 65%, and the recall of classification by up to 53% and 71%, for specific-target and object-type detection, respectively. These results demonstrate the superiority of our scheme over recent state-of-the-art works. © 2019, Springer Science+Business Media, LLC, part of Springer Nature.
The architecture of current networks is static and non-programmable. Software-Defined Networking (SDN) makes programmability and innovation within the network possible. In SDN, the data plane and control plane are separated, so network operators can manage network behavior in software. Standardized interfaces such as OpenFlow have been developed to enable interaction between the controller and switches. An OpenFlow switch manages network traffic at the data plane, and the packet parser is one of its main parts. So far, the FPGA implementations presented for SDN packet parsers lack the required flexibility and programmability: they support only one parse graph, which limits the creation of new network protocols for new versions of the OpenFlow switch. To address this problem, the present study presents the automatic generation of a programmable packet parser for the OpenFlow switch. In addition to creating high flexibility in the switch, our implementation is programmable and supports different parse graphs at execution time. Simulation and implementation verify the appropriate performance of this programmable parser in the OpenFlow switch. Our implementation improves the performance of the switch and enhances the flexibility of the SDN data plane. The use of the programmable packet parser increases switch speed, reduces wait time and service time, and improves overall OpenFlow switch performance. © 2019 IEEE.
Computer Networks (13891286) 140pp. 152-162
Routers in information-centric networking (ICN) are allowed to maintain the information of several repositories for every content item in their FIB. Therefore, content can be provided from different locations simultaneously when a consumer asks for it. This multi-source property of content can benefit a network in terms of load balancing, congestion control, resource management, and so on. Since exploiting multi-source content delivery is of paramount importance, this work investigates the performance of multi-source content delivery in ICN. Existing models of content transport in ICN neither consider multi-source content delivery nor are straightforward enough to use. Specifically, existing works do not model the role of forwarding mechanisms, an issue addressed in this paper. We develop a novel analytical model to evaluate the performance of multi-source content delivery in ICN. We provide a recursive function to calculate the virtual round-trip time (VRTT) between a consumer and the collection of providers hosting the content requested by that consumer. Other metrics, such as content transfer time, can be obtained from the estimated VRTT. The proposed model was evaluated by both numerical and simulation methods. Packet-level experiments were performed with an ICN simulator (ndnSIM), and the results demonstrate the accuracy of the proposed model. Moreover, we extensively discuss the distinct features of our approach for modeling the functionality of forwarding mechanisms in ICN. © 2018 Elsevier B.V.
Journal of Supercomputing (15730484) 74(3)pp. 1299-1320
The software-defined network (SDN) is an evolution of the current network that aims to remove its restrictions by separating the data plane from the control plane. In the SDN architecture, the central device is the OpenFlow switch, where packets are processed and inspected. To date, OpenFlow switch versions 1.0 and 1.1 have been implemented on hardware platforms, supporting only a limited set of OpenFlow features. The present article designs and implements the architecture of an OpenFlow v1.3 switch on the Virtex®-6 FPGA ML605 board, since the FPGA platform offers high flexibility, processing speed, and reprogrammability. Although little research has investigated the performance parameters of the OpenFlow switch, in the present study the OpenFlow system (switch and controller) is implemented on the FPGA in VHDL, and the performance parameters of the OpenFlow switch are investigated through simulation with the ISE design suite. In addition to its high flexibility, this architecture has a hardware cost comparable to that of other prototypes. The main advantage of the proposed design is that it increases the speed of packet pipeline processing in the switch's flow tables. Besides, it supports the features of OpenFlow v1.3; its parser supports 40 packet headers and makes it as easy as possible to extend the switch for future versions of OpenFlow. © 2017, Springer Science+Business Media, LLC.
International Journal of Artificial Intelligence (09740635) 16(1)pp. 41-59
Considering the importance of the energy consumption of cluster heads and the impact of cluster-head lifetime on the network lifetime, this paper addresses multi-hop inter-cluster routing. First, we present analyses demonstrating that selecting the next hop for each cluster head is influenced by the next hops selected for the other cluster heads; therefore, inter-cluster routing should take the dependencies between these choices into consideration. We then apply an evolutionary algorithm capable of accounting for these conditional dependencies when finding optimal routes between the cluster heads. In this evolutionary algorithm, the network lifetime is used as the fitness function for evaluating solutions. The proposed method is evaluated by simulation and compared to a classic routing method and to two methods based on genetic algorithms (which cannot take conditional dependencies into consideration). The results reveal that the proposed method improves the network lifetime. © 2018 [International Journal of Artificial Intelligence].
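The dependency highlighted above, that one cluster head's next-hop choice changes the relay load and hence the best choice for the others, can be seen in a toy evolutionary search (all topology and energy values are hypothetical, and this is a plain genetic loop, not the paper's dependency-aware algorithm):

```python
import random

# Toy network: 5 cluster heads; each may forward to one of a few next hops
# (other cluster heads or the sink). All values are illustrative.
NEXT_HOPS = {0: [1, 2], 1: [2, "sink"], 2: ["sink"], 3: [1, 4], 4: [2, "sink"]}
ENERGY = {0: 5.0, 1: 4.0, 2: 6.0, 3: 5.0, 4: 4.5}
TX_COST, RELAY_COST = 1.0, 0.5

def lifetime(assignment):
    """Fitness: rounds until the first cluster head dies. Relaying for a child
    adds load, which is how one head's choice affects the others."""
    load = {ch: TX_COST for ch in ENERGY}          # each head sends its own data
    for ch, hop in assignment.items():
        if hop != "sink":
            load[hop] += RELAY_COST                # relays pay extra per child
    return min(ENERGY[ch] / load[ch] for ch in ENERGY)

def random_individual():
    return {ch: random.choice(hops) for ch, hops in NEXT_HOPS.items()}

random.seed(1)
pop = [random_individual() for _ in range(30)]
for _ in range(50):                                # simple evolutionary loop
    pop.sort(key=lifetime, reverse=True)
    parents = pop[:10]
    children = []
    for _ in range(20):
        a, b = random.sample(parents, 2)
        child = {ch: random.choice([a[ch], b[ch]]) for ch in NEXT_HOPS}
        if random.random() < 0.2:                  # mutation
            ch = random.choice(list(NEXT_HOPS))
            child[ch] = random.choice(NEXT_HOPS[ch])
        children.append(child)
    pop = parents + children
best = max(pop, key=lifetime)
print(lifetime(best))
```

Note that the fitness of any single next-hop gene cannot be judged in isolation: routing both head 0 and head 3 through the same relay halves that relay's lifetime, which is exactly the conditional dependency the paper's evolutionary algorithm is designed to capture.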
Nowadays, network managers look for ways to change the design and management of networks so that decisions can be made in the control plane. Future switches should support the new features and flexibility required for parsing and processing packets. One of the critical components of a switch is the packet parser, which processes packet headers so that decisions can be made about incoming packets. Here we focus on the data plane, and particularly on the packet parser in OpenFlow switches, which should have the flexibility and programmability to support new requirements and multiple OpenFlow versions. We design an architecture that, unlike static network equipment, provides flexibility and programmability in the data plane, especially for SDN, and supports the parsing and processing of custom packets. To describe this architecture, the high-level P4 language is used, and the design is implemented on reconfigurable hardware (an FPGA). After automatically generating the protocol-independent packet parser architecture on the Virtex-7, it is compiled to firmware by Xilinx SDNet and ultimately implemented on an FPGA platform. It consumes fewer resources and is more efficient in terms of throughput and processing speed than other architectures. © 2018 IEEE.
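The idea of a programmable, table-driven parser can be sketched in a few lines: when the parse graph is data rather than hard-wired logic, supporting a new protocol means editing a table instead of redesigning the parser (illustrative Python with a hypothetical two-protocol graph, not the P4/SDNet pipeline itself):

```python
# A parse graph as a transition table: each node names a header, its length,
# and which field (offset, width) selects the next header. Hypothetical encoding.
PARSE_GRAPH = {
    "ethernet": {"len": 14, "select": (12, 2),
                 "next": {0x0800: "ipv4", 0x86DD: "ipv6"}},
    "ipv4": {"len": 20, "select": None, "next": {}},
    "ipv6": {"len": 40, "select": None, "next": {}},
}

def parse(packet: bytes):
    """Walk the parse graph and return the list of recognized headers."""
    state, offset, headers = "ethernet", 0, []
    while state:
        node = PARSE_GRAPH[state]
        headers.append(state)
        if node["select"] is None:        # terminal header in this toy graph
            break
        pos, width = node["select"]
        key = int.from_bytes(packet[offset + pos: offset + pos + width], "big")
        offset += node["len"]
        state = node["next"].get(key)     # unknown key: stop parsing
    return headers

frame = bytes(12) + (0x0800).to_bytes(2, "big") + bytes(20)  # Ethernet + IPv4
print(parse(frame))
```

Swapping in a different `PARSE_GRAPH` at run time changes the set of supported protocols, which is the kind of execution-time reprogrammability the architecture above targets in hardware.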
Multimedia Tools and Applications (13807501) 77(15)pp. 19547-19568
Distributed Video Coding (DVC) is a new approach to video coding that, due to its low computational complexity at the encoder side, has great potential for use in Wireless Multimedia Sensor Networks (WMSNs). However, the distinct architecture of this codec affects the efficiency of transmission protocols, and for efficient transmission of DVC over WMSNs it is necessary to evaluate the performance of these protocols in the presence of DVC characteristics. From the viewpoint of these protocols, error control methods are important mechanisms that provide quality of service and robust multimedia communication. For this reason, we performed a comparative performance analysis of all error control schemes, consisting of Automatic Repeat reQuest (ARQ), Forward Error Correction (FEC), Erasure Coding (EC), hybrid link-layer ARQ/FEC, and multi-layer hybrid error control schemes, for DVC in WMSNs. These analyses use the most important metrics in multimedia communication over WSNs, such as objective and subjective video quality criteria, delay, energy consumption, and several DVC-specific metrics. The results show the distinct behavior of DVC in the presence of channel errors and can be used to propose an effective and efficient error control scheme for DVC over WMSNs. © 2017, Springer Science+Business Media, LLC, part of Springer Nature.
One of the most widely used applications of wireless networks is unmanned aeronautical ad-hoc networks. In these networks, the flying nodes on a mission must send their information to the ground base. If a UAV is outside the coverage of the ground base, it loses its connection; the solution is to send the information to neighboring nodes, which redirect it to the ground base. Due to high node dynamics and rapid changes in network topology, one of the biggest concerns in these networks is routing between nodes. Previous routing methods, although improving overall performance, led to routing overhead and network delay. In the present study, a new routing method is introduced, with a routing algorithm focused on improving the packet delivery ratio and throughput while reducing end-to-end delay and network overhead. In the proposed method, instead of using only one route between nodes, all discovered routes are kept in the nodes' routing tables. The best route is used first between the source and destination nodes, and when this route fails, the second route is used immediately. This decreases the broadcasting of route-discovery packets through the network. According to the simulation results, the proposed method is more efficient: compared with other methods in different scenarios, the packet delivery ratio increases by 4% on average, end-to-end delay decreases by approximately 30%, and network throughput increases by 9%. © 2017 IEEE.
International Journal of Engineering, Transactions B: Applications (1728144X) 30(11)pp. 1714-1722
Software Defined Network (SDN) is a new architecture for network management whose main concept is centralizing network management at the control level, which has an overview of the network and determines the forwarding rules for the switches and routers (the data level). Although this centralized control is the main advantage of SDN, it is also a single point of failure: if the controller is made unreachable for any reason, the network architecture collapses. A distributed denial-of-service (DDoS) attack is a threat that can make the SDN controller unreachable. Previous research on DDoS detection in SDN has not done enough to improve detection accuracy. The solution proposed in this research detects DDoS attacks on the SDN controller with noticeable accuracy and prevents serious damage to the controller. For this purpose, the fast entropy of each flow is computed at certain time intervals; then, using an adaptive threshold, the possibility of a DDoS attack is investigated. To achieve more accuracy, a second method, computing the flow initiation rate, is used alongside it. Based on the results of these two methods and the described conditions, the existence of an attack is confirmed or rejected, or the decision is deferred to the next step of the algorithm, which further studies the flow statistics of the network switches using a perceptron neural network. The evaluation results show that the proposed algorithm achieves a significant improvement in detection rate and a reduction in false-alarm rate compared to the closest previous work, while keeping the average detection time at an acceptable level.
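The entropy-plus-adaptive-threshold ingredient can be sketched as follows (plain Shannon entropy over destination addresses with a simple running baseline; the paper's fast-entropy formula, window sizes, and threshold rules are not reproduced, and all traffic below is synthetic):

```python
import math
from collections import Counter

def entropy(items):
    """Shannon entropy of the empirical distribution of items in one window."""
    counts = Counter(items)
    n = len(items)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def detect(windows, k=3.0):
    """Flag a window whose entropy drops k std-devs below the running mean.
    A flood concentrates traffic on one destination, so entropy falls."""
    mean, var, seen, alarms = 0.0, 0.0, 0, []
    for i, w in enumerate(windows):
        h = entropy(w)
        if seen >= 3 and h < mean - k * max(var ** 0.5, 0.1):
            alarms.append(i)
        else:  # update the baseline only with normal windows (adaptive threshold)
            seen += 1
            mean = h if seen == 1 else 0.9 * mean + 0.1 * h
            var = 0.9 * var + 0.1 * (h - mean) ** 2
    return alarms

# Synthetic traffic: diverse destinations, then a concentrated flood
normal = [[f"10.0.0.{i % 50}" for i in range(200)]] * 5
attack = [["10.0.0.7"] * 180 + [f"10.0.0.{i % 50}" for i in range(20)]]
print(detect(normal + attack))
```

Entropy alone cannot separate a flood from, say, a flash crowd, which is why the paper pairs it with the flow initiation rate and a perceptron stage before committing to a verdict.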
Microelectronics Journal (00262692) 53pp. 156-166
A new voltage-mode design is presented for quaternary logic using CNTFETs. This architecture, which introduces a new structure for voltage division, can be applied to any four-valued logic implementation. To verify the functionality of the proposed architecture, basic gates, a half-adder, and a full-adder are implemented using the voltage divider. Moreover, a decoder is considered to enhance half-adder parameters such as power consumption, delay, and transistor count. The designs are simulated using the HSPICE simulation tool. In comparison with prior works, our half-adder design is improved by 75.2%, 7.8%, and 77% in power consumption, delay, and PDP, respectively. © 2016 Elsevier Ltd. All rights reserved.
Journal of the Chinese Institute of Engineers, Transactions of the Chinese Institute of Engineers,Series A (02533839) 39(4)pp. 493-497
Wireless sensor networks (WSNs) consist of small nodes capable of sensing, computing, and communication. One of the greatest challenges in WSNs is the limited energy resources of the nodes, a limitation that affects all of the protocols and algorithms used in these networks; routing protocols, in particular, should be designed with it in mind. Many papers have been published on low-energy-consumption networks. One technique used in this context is cross-layering, in which, to reduce energy consumption, the layers are not independent but exchange information with each other. In this paper, a cross-layer design is presented to reduce energy consumption in WSNs. In this design, communication between the network layer and the medium access layer is established to help control channel-access attempts and reduce the number of failed attempts. To evaluate our proposed design, we used the NS2 simulator and compared our method with a cross-layer design based on the Ad-hoc On-demand Distance Vector (AODV) routing algorithm. Simulation results show that our proposed idea reduces energy consumption, improves the packet delivery ratio, and decreases the end-to-end delay in WSNs. © 2016 The Chinese Institute of Engineers.
Ad Hoc Networks (15708705) 25(PB)pp. 472-479
Opportunistic routing is a promising routing paradigm that achieves high throughput by utilizing the broadcast nature of wireless media. It is especially useful for wireless mesh networks due to their static topology. Current opportunistic routing protocols assume that all nodes have enough incentive and resources to help the source, regardless of their load and of other network flows. In addition, the effect of each active flow on other flows and on network status is reflected later by means of a link-quality metric (e.g., ETX) that is updated periodically. The coarse-grained behavior of this metric is not in harmony with the dynamics of network flows, so some flows may suffer performance degradation between two consecutive periodic updates of the metric. Our proposed approach, called Dynamic Cooperative Routing (DCR), modifies MORE and equips it with an adaptive decision-making mechanism. We use learning automata to accommodate network dynamics when building an opportunistic path for a flow; the learning automata are activated whenever the source transmits a new data batch for the flow. We have shown through simulation that DCR outperforms MORE when two or more flows are active simultaneously and in the presence of background unicast traffic. © 2014 Elsevier B.V. All rights reserved.
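The learning-automaton ingredient can be illustrated with a linear reward-inaction update over candidate forwarders (a generic sketch with hypothetical delivery probabilities, not DCR's actual action set or feedback signal):

```python
import random

class LearningAutomaton:
    """Linear reward-inaction (L_RI) automaton over candidate actions.
    On success the chosen action's probability grows; on failure nothing changes."""
    def __init__(self, actions, a=0.1):
        self.actions = actions
        self.p = {x: 1 / len(actions) for x in actions}
        self.a = a

    def choose(self):
        r, acc = random.random(), 0.0
        for x in self.actions:
            acc += self.p[x]
            if r <= acc:
                return x
        return self.actions[-1]

    def reward(self, chosen):
        for x in self.actions:
            if x == chosen:
                self.p[x] += self.a * (1 - self.p[x])
            else:
                self.p[x] *= (1 - self.a)

# Hypothetical forwarders with different batch-delivery probabilities
random.seed(42)
quality = {"n1": 0.9, "n2": 0.5, "n3": 0.2}
la = LearningAutomaton(list(quality))
for _ in range(2000):
    n = la.choose()
    if random.random() < quality[n]:   # environment feedback: batch delivered?
        la.reward(n)
print(la.p)
```

Because the probabilities are updated per batch from live feedback rather than from a periodically refreshed metric, the choice of forwarder tracks flow dynamics between metric updates, which is the gap DCR targets in MORE.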
Science China Information Sciences (1674733X) 57(6)pp. 1-11
One of the main problems in VANET (vehicular ad-hoc network) routing algorithms is how to establish stable routes. The link duration in these networks is often very short because of frequent changes in the network topology, and short link durations reduce network efficiency. The different speeds of vehicles and the different directions vehicles choose at junctions are two causes of link breakage and reduced link duration. Several routing protocols have been proposed for VANETs to improve link duration, but none of them avoids the link breakages caused by the second factor. In this paper, a new method for routing algorithms is proposed based on vehicles' trip histories. Each vehicle has a profile containing its movement patterns extracted from its trip history. The direction each vehicle may choose at the next junction is predicted using this profile and is sent to the other vehicles. Each vehicle then selects a node whose predicted future direction matches its own. Our case study indicates that applying the proposed method to the ROMSGP (receive on most stable group-path) routing protocol reduces link breakages and increases link duration. © 2014 Science China Press and Springer-Verlag Berlin Heidelberg.
Applied Intelligence (0924669X) 36(3)pp. 685-697
Predicting the next movement direction that a driver will choose at each junction of a road network can be widely used in VANET (Vehicular Ad-Hoc Network) applications. Current methods are based on GPS, but in a number of VANET applications the GPS service is obstructed by high-rise buildings, tunnels, and trees. In this paper, a GPS-free method is proposed to predict the vehicle's future movement direction. In this method, vehicle motion paths are described by the sequence of turning directions at junctions and the distances between junctions. Movement patterns of the vehicles are extracted by clustering the vehicles' motion paths using a SOM (Self-Organizing Map). These patterns are then used to predict the next movement direction the driver will choose at the next junction. The obtained results indicate that our GPS-free method is comparable with GPS-based methods, while offering advantages in various urban-traffic applications. © 2011 Springer-Verlag.
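The clustering step can be shown in miniature with a 1-D SOM over turn-sequence vectors (illustrative Python; the numeric encoding of turns and every parameter below are assumptions, not the paper's):

```python
import random

def train_som(data, n_units=4, epochs=200, lr0=0.5):
    """Minimal 1-D self-organizing map: units compete for each sample and the
    winner, plus its neighbors on the line, moves toward it. A toy sketch."""
    dim = len(data[0])
    units = [[random.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                      # decaying learning rate
        radius = max(1, int(n_units / 2 * (1 - t / epochs)))  # shrinking neighborhood
        for x in random.sample(data, len(data)):
            win = min(range(n_units),
                      key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))
            for i in range(n_units):
                if abs(i - win) <= radius:
                    units[i] = [u + lr * (v - u) for u, v in zip(units[i], x)]
    return units

def nearest(units, x):
    """Index of the unit (cluster prototype) closest to sample x."""
    return min(range(len(units)),
               key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))

# Paths encoded as fixed-length vectors of turn codes (left=0, straight=0.5, right=1)
random.seed(3)
pattern_a = [[0, 0.5, 1] for _ in range(20)]
pattern_b = [[1, 1, 0] for _ in range(20)]
units = train_som(pattern_a + pattern_b)
print(nearest(units, [0, 0.5, 1]), nearest(units, [1, 1, 0]))
```

After training, recurring trip patterns map to different units, and a partially observed path can be matched to its nearest prototype to anticipate the next turn.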
IEEJ Transactions on Electrical and Electronic Engineering (19314973) 7(3)pp. 329-333
Vehicular ad hoc networks will enable a variety of applications for safety, traffic efficiency, driver assistance, and infotainment in modern automobile designs. For many of these applications as well as for improving the stability of vehicular ad hoc network routing algorithms, it is necessary to know whether a steering wheel rotation has led to a change in the vehicle motion path. The problem is that some steering wheel rotations are temporary and do not lead to a change in the vehicle motion path. In this paper, a GPS-free fuzzy sensor is designed for detecting the change of vehicle motion paths. The implementation results show acceptable precision. © 2012 Institute of Electrical Engineers of Japan.
Wireless sensor networks (WSNs) consist of nodes with low power and limited processing capability; therefore, optimal energy consumption is essential for WSN protocols. In a number of WSN applications, sensor nodes periodically sense data from the environment and transfer it to the sink. Because of the energy limitation, route selection should favor nodes with the highest energy levels for data transmission in order to increase the network lifetime. Most of a node's energy is spent on radio transmission; thus, decreasing the number of packets transferred in the network increases node and network lifetimes. In the data transmission algorithms introduced for such networks so far, a single route is used, which drains the energy of the nodes on that route and, in turn, shortens the network lifetime. In this paper, a new method is proposed for selecting the data transmission route that solves this problem. The method is based on learning automata and selects the route with regard to energy parameters and the distance to the sink. With this method, the energy of the network nodes is depleted almost simultaneously, preventing the network from breaking into two separate parts, and the lifetime increases. Simulation results show that this method is very effective in increasing the network lifetime. © 2010 IEEE.
In this paper, some parameters that cause data redundancy, and the kinds of data redundancy in wireless sensor networks, are defined. A clustering algorithm is then introduced for data reduction in wireless sensor networks, exploiting properties of such networks: each node receives its neighbors' data, neighboring readings are correlated, and environmental data changes at a low rate. The introduced algorithm makes better use of energy and bandwidth, the two main constraints in wireless sensor networks. Simulation results show that an improvement of about 30% to 80% in energy consumption is attained with the introduced algorithm for slowly changing environmental data. © 2009 IEEE.
In query-based sensor networks, the nodes wait to receive a query from the sink and, once they receive it, provide the sink with the required data. An issue in this kind of network is how the location of the responding node is calculated. This requires either localization mechanisms, which have communication overhead, or localization hardware, which has cost overhead and power consumption; both reduce the lifetime of the network. Since in most of these networks the exact location of nodes is not required and an approximate location is sufficient, a model is presented in this paper that calculates the approximate location of nodes. In this model, nodes can obtain their approximate location without any extra hardware: each node derives its location from the query packets received from different sinks. Thus, no communication overhead is needed for calculating the location either. © 2009 IEEE.
One of the attacks on the RPL protocol is the Clone ID attack, in which the attacker clones a node's ID in the network. In this research, a Clone ID detection system is designed for the Internet of Things (IoT), implemented in the Contiki operating system, and evaluated using the Cooja emulator. Our evaluation shows that the proposed method has desirable performance in terms of energy-consumption overhead, true positive rate, and detection speed. The overhead cost of the proposed method is low enough that it can be deployed in resource-constrained nodes. The proposed method has two phases in each node: information gathering and attack detection. In the proposed scheme, each node detects this type of attack using the control packets received from its neighbors and their information, such as IP, rank, path ETX, and RSSI, as well as its routing table. The design of this system contributes to the security of the IoT network. © 2021 IEEE.
Industry 4.0 provides a framework for applying new technologies in industrial environments to boost efficiency and intelligence. A rapidly growing technology in Industry 4.0 is the Internet of Things (IoT), which allows us to create a smart environment by connecting various equipment. One of the main applications of IoT in a smart factory is the design of monitoring systems, which keep the behavior of devices under permanent and comprehensive supervision. However, the rapid growth of and change in monitoring facilities creates a big challenge for people who either want to use that equipment in Industry 4.0 or want to update their systems to benefit from this technology. To address this problem, this paper presents a new approach, based on the model-driven engineering paradigm, for simplifying the design and development of real-time monitoring systems in an industrial environment. Our approach includes a domain-specific modeling language, a graphical editor, and model-to-code transformations that generate hardware description code, a mobile application, and a web application for a monitoring system. To evaluate the applicability of our approach, a scenario in the power industry has been designed, which provides the user with VHDL code, a mobile application, and a web application for monitoring the plant's processes. © 2020 IEEE.
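To make the model-to-code idea concrete, the toy sketch below transforms a declarative monitoring model into one generated artifact. The model fields and the emitted configuration format are invented for illustration and do not reflect the paper's actual DSL or generators.

```python
# Toy model-to-code transformation: a declarative monitoring model
# (hypothetical fields) is turned into a dashboard-config skeleton.

MODEL = {
    "device": "turbine_3",                    # assumed model attribute
    "signals": ["temperature", "vibration"],  # monitored quantities
    "sample_ms": 500,                         # sampling period
}

def to_web_config(model):
    """Emit a (hypothetical) web-dashboard config from the model."""
    widgets = "\n".join(f"  - chart: {s}" for s in model["signals"])
    return (f"dashboard: {model['device']}\n"
            f"refresh_ms: {model['sample_ms']}\n"
            f"widgets:\n{widgets}")

print(to_web_config(MODEL))
```

A real model-driven toolchain would emit the VHDL, mobile, and web artifacts from the same model instance in the same spirit.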
The considerable growth in the number of networked devices worldwide has led to the development of various new applications in the field of IoT. These applications are often constrained by the current network infrastructure and, at the same time, force the network administrator to implement complex network policies manually. Given this density of equipment and the increasing complexity of traditional network configuration, Software-Defined Networks (SDNs) facilitate network management by separating the control and data planes and centralizing the creation of network rules. These facilities make SDN a good infrastructure for IoT networks, enabling network programming to develop new and more efficient services that meet real needs. However, the variety of IoT equipment can multiply complex and inconsistent network rules in SDN-based switches, making network management difficult. Accordingly, in this paper we model the behavior of anomalous rules distributed in software-defined networks that have been created by different IoT applications, identify their relationships with other rules in the network, and avoid registering them. © 2021 IEEE.
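A minimal sketch of the rule-anomaly idea: before a new flow rule is registered, check whether its match fields overlap an installed rule while prescribing a different action. The rule representation and overlap test below are simplified assumptions, not an actual SDN controller API.

```python
# Illustrative conflict check for flow rules before registration.
# A rule is a dict with a "match" (field -> value, "*" = wildcard)
# and an "action" string; both are simplified assumptions.

def overlaps(m1, m2):
    """Two matches overlap if every field agrees or is wildcarded."""
    fields = set(m1) | set(m2)
    return all(m1.get(f, "*") == "*" or m2.get(f, "*") == "*" or
               m1[f] == m2[f]
               for f in fields)

def is_anomalous(new_rule, installed):
    """Flag the new rule if it overlaps an installed rule but acts differently."""
    return any(overlaps(new_rule["match"], r["match"]) and
               new_rule["action"] != r["action"]
               for r in installed)

installed = [{"match": {"dst": "10.0.0.1"}, "action": "forward:2"}]
new = {"match": {"dst": "10.0.0.1", "proto": "tcp"}, "action": "drop"}
print(is_anomalous(new, installed))  # prints True: conflicting rule
```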
Although high-performance artificial intelligence (AI) models require substantial computational resources, embedded systems are constrained by limited hardware capabilities, such as memory and processing power. At the same time, embedded systems have a broad range of applications, making the integration of AI and embedded systems a prominent topic in both hardware and AI research. Creating powerful speech embeddings for embedded systems is challenging, as such models, like wav2vec, are typically computationally intensive. Additionally, the scarcity of data for many low-resource languages further complicates the development of high-performance models. To address these challenges, we utilized BERT to generate speech embeddings. BERT was selected because, in addition to producing meaningful embeddings, it is trained on numerous low-resource languages and facilitates the design of efficient decoders. This study introduces a compact speech encoder tailored to low-resource languages, capable of functioning as an encoder across a diverse range of speech tasks. Because the high dimensionality of BERT embeddings imposes significant computational demands on many embedded systems, we applied dimensionality-reduction techniques; the reduced-dimensional vectors were subsequently used as labels for speech data to train a model composed of convolutional neural networks (CNNs) and fully connected layers. Finally, we demonstrated the encoder's effectiveness through an application in speech command recognition. © 2024 IEEE.
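The embedding-compression step can be sketched as follows, assuming PCA as the dimensionality-reduction technique (the abstract does not name a specific method) and random data standing in for BERT outputs.

```python
# Minimal PCA sketch of the embedding-compression step: project
# high-dimensional embeddings onto their top-k principal axes so the
# reduced vectors can serve as training targets for a small model.
import numpy as np

def reduce_dim(embeddings, k):
    """Project (n, d) embeddings onto their top-k principal components."""
    centered = embeddings - embeddings.mean(axis=0)
    # SVD of the centered data yields the principal axes in vt's rows.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

# e.g. compress 768-d BERT-sized embeddings to 64-d training targets
emb = np.random.randn(100, 768)
targets = reduce_dim(emb, 64)
print(targets.shape)  # (100, 64)
```

The reduced vectors would then label the speech inputs when training the CNN-plus-fully-connected encoder the abstract describes.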
Nowadays, multi-hop wireless networks have attracted much attention due to their ease of deployment, low cost, and other advantages. Wireless channels are broadcast in nature, so a transmitted packet can be heard by all nodes within the sender's transmission range. Opportunistic routing exploits this feature to forward packets and enhance network efficiency. In most opportunistic routing algorithms, forwarder nodes are pre-selected by the source nodes; the forwarders must then coordinate on packet forwarding, and one of them is finally selected as the next hop. If the forwarder list is large, the computational overhead of this coordination will be high. This paper presents a new energy-efficient opportunistic routing algorithm, named EOpR, that selects the candidate nodes on the fly as packets travel. This selection is based on the candidates' region and residual energy: candidate nodes set a timer, and the one whose timer expires first is selected as the next hop. Simulation results show higher network performance in terms of network lifetime and throughput compared to ROMER, and the number of duplicate packets also decreases in EOpR. © 2015 IEEE.
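The timer-based coordination above can be sketched briefly. The timer formula here is an assumption for illustration (more residual energy yields a shorter back-off, hence higher forwarding priority); the paper's actual formula also accounts for the candidate's region.

```python
# Illustrative timer-based next-hop selection among candidate forwarders.
# Assumed timer rule: wait time shrinks linearly with residual energy.

def forwarding_timer(residual_energy, max_energy, max_wait=0.01):
    """Back-off (s): full-energy nodes wait ~0, depleted ones wait max_wait."""
    return max_wait * (1.0 - residual_energy / max_energy)

def select_next_hop(candidates, max_energy=100.0):
    """candidates: list of (node_id, residual_energy) that heard the packet.
    The node whose timer expires first (smallest back-off) becomes next hop."""
    return min(candidates, key=lambda c: forwarding_timer(c[1], max_energy))[0]

print(select_next_hop([("a", 40.0), ("b", 85.0), ("c", 60.0)]))  # prints b
```

Because only the winning timer triggers a transmission, no explicit coordination messages among the candidates are needed.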
Information-Centric Networking (ICN) focuses on the content itself as the key factor of communication instead of network addresses. As a strong candidate for the future Internet architecture, ICN provides a networking paradigm shift from host-oriented to content-oriented communication: a user can request desired content by its unique name, irrespective of its hosting location. ICN provides a high-performance content-distribution framework, stronger security solutions, better mobility support, and a scalable network architecture. It supports different naming schemes, encompassing flat, hierarchical, hybrid, and attribute-value names. These properties make ICN an appropriate networking infrastructure for IoT applications such as the smart city: ICN can better handle large IoT name spaces with lower processing resource usage, and it reduces energy consumption through in-network caching of content. In an NDN-based smart city, the available naming schemes can be classified into hybrid and hierarchical names. The disadvantages of the proposed naming schemes can be summarized as the long names of the hierarchical approach, the difficulty of finding unique content in the attribute-value scheme, the lack of user-friendliness of the flat method, and the complexity of hybrid structures. Considering these drawbacks, we present a hybrid naming scheme for the smart city based on the PURSUIT architecture that provides faster name lookup in IoT communications. © 2020 IEEE.
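To illustrate what a hybrid name combines, the sketch below joins a hierarchical, human-readable prefix (good for routing aggregation) with a short flat hash component (good for uniqueness). The exact format is an invented example, not the scheme proposed in the paper.

```python
# Hypothetical hybrid content name: hierarchical prefix + flat hash tail.
import hashlib

def hybrid_name(city, service, content_id):
    """Build /<city>/<service>/<16-hex-digit hash of content_id>."""
    flat = hashlib.sha256(content_id.encode()).hexdigest()[:16]
    return f"/{city}/{service}/{flat}"

print(hybrid_name("tehran", "traffic", "camera-17/frame-20200101T1200"))
```

The hierarchical part keeps lookup tables aggregatable, while the fixed-length hash keeps the final component short and collision-resistant.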
Wireless applications have become significant in numerous fields [1], such as the automotive industry. Indeed, the convergence of telecommunication, computation, wireless technology, and transportation technologies has brought communication capabilities to our roads and highways. This convergence serves as a platform for intelligent transportation systems (ITS), in which each vehicle is assumed to be equipped with devices acting as nodes in order to communicate with other nodes. Mobile ad hoc networks (MANETs) were introduced in Chapter 3. Because the features of a vehicular network differ from those of other types of MANETs, this network is called a vehicular ad hoc network (VANET) [2]. © 2017 by Taylor & Francis Group, LLC.