Computer Communications (1873703X)238
Deploying multiple controllers in the control plane of software-defined networks increases scalability, availability, and performance, but it also brings challenges, such as controller overload. To address this, load-balancing techniques are employed in software-defined networks. Controller load balancing can be categorized into two main approaches: (1) single-level thresholds and (2) multi-level thresholds. However, previous studies have predominantly relied on single-level thresholds, which result in an imprecise classification of controllers, or have assumed uniform controller capacities in multi-level threshold methods. This study explores controller load balancing with a focus on utilizing multi-level thresholds to accurately assess controller status. Switch migration operations are utilized to achieve load balancing, considering factors such as the degree of load imbalance of the target controller and migration efficiency. This includes evaluating the post-migration status of the target controller and the distance between the migrating switch and the target controller to select the appropriate target controller and migrating switch. The proposed scheme reduces controller response time, migration costs, and communication overhead while improving throughput. Results demonstrate that our scheme outperforms others in terms of response time and overall performance. © 2025 Elsevier B.V.
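A minimal sketch of how multi-level classification and migration-target selection could fit together (the threshold values, capacities, and distance weighting below are hypothetical illustrations, not the paper's parameters):

```python
# Classify a controller into a level by its load ratio, then pick a
# migration target by combining post-migration load with distance.
def level(load, capacity, thresholds=(0.3, 0.6, 0.9)):
    ratio = load / capacity
    for lvl, t in enumerate(thresholds):      # hypothetical multi-level thresholds
        if ratio < t:
            return lvl                        # 0 = light ... 3 = overloaded
    return len(thresholds)

def pick_target(switch_load, distances, controllers):
    """controllers: {name: (load, capacity)}; distances: {name: hops to switch}."""
    best, best_score = None, float("inf")
    for name, (load, cap) in controllers.items():
        if level(load + switch_load, cap) >= 3:    # would end up overloaded: reject
            continue
        post = (load + switch_load) / cap          # post-migration utilization
        score = post + 0.1 * distances[name]       # hypothetical distance weighting
        if score < best_score:
            best, best_score = name, score
    return best

controllers = {"c1": (55.0, 100.0), "c2": (20.0, 80.0)}
print(level(55.0, 100.0), pick_target(10.0, {"c1": 2, "c2": 4}, controllers))
```

Note how unequal capacities enter through the per-controller `capacity`, which is what single-level or uniform-capacity schemes miss.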
Jahani, M.,
Zojaji, Z.,
Montazerolghaem, A.,
Palhang, M.,
Ramezani, R.,
Golkarnoor, A.,
Safaei, A.A.,
Bahak, H.,
Saboori, P.,
Halaj, B.S. Journal Of Medical Signals And Sensors (22287477)15(1)
Background: The pharmaceutical industry has seen increased drug production by different manufacturers. Failure to anticipate future needs has caused improper production and distribution of drugs throughout the industry's supply chain. Forecasting demand is one of the basic requirements for overcoming these challenges; it helps ensure that drugs are well estimated and produced at the right time. Methods: Artificial intelligence (AI) technologies are suitable methods for forecasting demand: the more accurate the forecast, the better the decisions on managing drug production and distribution. The Isfahan AI Competitions 2023 organized a challenge to provide models for accurately predicting drug demand. In this article, we introduce this challenge and describe the proposed approaches that led to the most successful results. Results: A dataset of drug sales was collected from 12 pharmacies of Hamadan University of Medical Sciences. This dataset contains 8 features, including sales amount and date of purchase. Competitors competed on this dataset to accurately forecast the volume of demand. The purpose of this challenge was to provide a model with a minimum error rate while addressing some qualitative scientific metrics. Conclusions: In this competition, methods based on AI were investigated. The results showed that machine learning methods are particularly useful in drug demand forecasting. Furthermore, augmenting the data features with geographic features helps increase the accuracy of models. © 2025 Journal of Medical Signals & Sensors.
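The winning models are not reproduced in the abstract, but the general recipe it describes (a machine-learning regressor over sales features augmented with geographic ones) can be sketched as follows; the feature layout and synthetic data are purely illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# hypothetical features: month, weekday, pharmacy_id, latitude, longitude, last-period sales
X = rng.random((500, 6))
y = 100 * X[:, 5] + 10 * rng.standard_normal(500)      # synthetic demand signal

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:400], y[:400])                             # train on the first 400 records
mae = np.abs(model.predict(X[400:]) - y[400:]).mean()   # held-out mean absolute error
print(f"MAE: {mae:.2f}")
```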
Davanian, F.,
Adibi, I.,
Tajmirriahi, M.,
Monemian, M.,
Zojaji, Z.,
Montazerolghaem, A.,
Asadinia, M.A.,
Mirghaderi, S.M.,
Esfahani, S.A.N.,
Kazemi, M. Journal Of Medical Signals And Sensors (22287477)15(2)
Background: Multiple sclerosis (MS) is one of the most common causes of neurological disability in young adults. The disease occurs when the immune system attacks the central nervous system and destroys the myelin of nerve cells. This results in the appearance of several lesions in the magnetic resonance (MR) images of patients. Accurate determination of the number and location of lesions can help physicians determine the severity and progression of the disease. Method: Due to the importance of this issue, this challenge was dedicated to the segmentation and localization of lesions in MR images of patients with MS. The goal was to segment and localize the lesions in the FLAIR MR images of patients as closely as possible to the ground-truth masks. Results: Several teams sent us their results for the segmentation and localization of lesions in MR images. Most of the teams preferred deep learning methods, varying from a simple U-Net structure to more complicated networks. Conclusion: The results show that deep learning methods can be useful for the segmentation and localization of lesions in MR images. In this study, we briefly describe the dataset and the methods of the teams attending the competition. © 2025 Journal of Medical Signals & Sensors.
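As a sketch of the family of models most teams used, a minimal two-level U-Net for binary lesion masks in PyTorch (layer widths are arbitrary; competition entries were deeper and more elaborate):

```python
import torch
import torch.nn as nn

def block(cin, cout):  # two 3x3 convolutions with ReLU
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = block(1, 16)
        self.enc2 = block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = block(32, 16)
        self.head = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                    # full-resolution features
        e2 = self.enc2(self.pool(e1))                        # downsampled bottleneck
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)                                 # per-pixel lesion logit

model = TinyUNet()
logits = model(torch.randn(1, 1, 64, 64))                    # one synthetic FLAIR slice
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros(1, 1, 64, 64))
```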
Sedighin, F.,
Monemian, M.,
Zojaji, Z.,
Montazerolghaem, A.,
Asadinia, M.A.,
Mirghaderi, S.M.,
Esfahani, S.A.N.,
Kazemi, M.,
Mokhtari, R.,
Mohammadi, M. Journal Of Medical Signals And Sensors (22287477)15(1)
Background: Computer-aided diagnosis (CAD) methods have become of great interest for diagnosing macular diseases over the past few decades. Artificial intelligence (AI)-based CADs offer several benefits, including speed, objectivity, and thoroughness. They are utilized as assistance systems in various ways, such as highlighting relevant disease indicators to doctors, providing diagnosis suggestions, and presenting similar past cases for comparison. Methods: More specifically, retinal AI-CADs have been developed to assist ophthalmologists in analyzing optical coherence tomography (OCT) images and making retinal diagnostics simpler and more accurate than before. Retinal AI-CAD technology could provide new health-care insight for people who do not have access to a specialist doctor. AI-based classification methods are critical tools in developing improved retinal AI-CAD technology. The Isfahan AI-2023 challenge organized a competition to provide objective formal evaluations of alternative tools in this area. In this study, we describe the challenge and the most successful methods. Results: A dataset of OCT images, acquired from normal subjects, patients with diabetic macular edema, and patients with other macular disorders, was provided in a documented format. The dataset, including the labeled training set and unlabeled test set, was made accessible to the participants. The aim of this challenge was to maximize the performance measures for the test labels. Researchers tested their algorithms and competed for the best classification results. Conclusions: The competition was organized to evaluate current AI-based classification methods in macular pathology detection. We received several submissions to our posted datasets, which indicates the growing interest in AI-CAD technology. The results demonstrated that deep learning-based methods can learn essential features of pathologic images, but much care has to be taken in choosing and adapting appropriate models for imbalanced small datasets. © 2025 Journal of Medical Signals & Sensors.
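One standard precaution for the imbalanced-small-dataset issue the conclusion raises is class weighting in the loss; a short sketch with made-up class counts:

```python
import torch
import torch.nn as nn

counts = torch.tensor([900.0, 60.0, 40.0])        # hypothetical: normal, DME, other disorders
weights = counts.sum() / (len(counts) * counts)   # inverse-frequency ("balanced") weights
criterion = nn.CrossEntropyLoss(weight=weights)   # rare classes contribute more to the loss

logits = torch.randn(8, 3)                        # stand-in for a classifier's outputs
labels = torch.randint(0, 3, (8,))
print(criterion(logits, labels))
```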
Kenari, A.R.,
Montazerolghaem, A.,
Zojaji, Z.,
Ghatee, M.,
Yousefimehr, B.,
Rahmani, A.,
Kalani, M.,
Kiyanpour, F.,
Kiani-abari, M.,
Fakhar, M.Y. Journal Of Medical Signals And Sensors (22287477)15(2)
Background: Gastroesophageal reflux disease (GERD) is a prevalent digestive disorder that impacts millions of individuals globally. Multichannel intraluminal impedance-pH (MII-pH) monitoring is a novel technique and currently stands as the gold standard for diagnosing GERD. Accurately characterizing reflux events from MII data is crucial for GERD diagnosis. Although clinical literature pointing toward software advancements was introduced several years ago, reliably extracting reflux events from MII data continues to pose a significant challenge. Success necessitates the seamless collaboration of two key components: a reflux definition criteria protocol established by gastrointestinal experts and a comprehensive analysis of MII data for reflux detection. Method: In an endeavor to address this challenge, our team assembled a dataset comprising 201 MII episodes. We meticulously crafted precise reflux episode definition criteria, establishing the gold standard and labels for the MII data. Result: A variety of signal-analyzing methods should be explored; the first Isfahan Artificial Intelligence Competition in 2023 featured formal assessments of alternative methodologies across six distinct domains, including MII data evaluations. Discussion: This article outlines the datasets provided to participants and offers an overview of the competition results. © 2025 Journal of Medical Signals & Sensors.
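As a toy illustration of the signal-analysis side of the task only: a crude candidate-event detector using the common textbook heuristic of an impedance drop to below 50% of baseline on adjacent channels. This heuristic is an assumption for illustration; the expert-defined criteria established in the paper are more elaborate (e.g., retrograde propagation across channels):

```python
import numpy as np

def candidate_reflux(impedance, baseline, drop_ratio=0.5):
    """impedance: (channels, samples) MII trace, distal channel first.
    Flags samples where two adjacent channels fall below drop_ratio of
    their baseline, a crude proxy for a reflux episode."""
    low = impedance < drop_ratio * baseline[:, None]
    adjacent = low[:-1] & low[1:]          # drop seen on two neighboring channels
    return adjacent.any(axis=0)            # per-sample candidate flag

rng = np.random.default_rng(0)
trace = rng.uniform(1500, 3000, size=(6, 100))   # synthetic impedance values (ohms)
trace[:3, 40:55] = 500.0                          # injected distal drop
print(candidate_reflux(trace, trace.mean(axis=1)).nonzero()[0])
```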
In this paper, we present a deep learning approach for the detection of Distributed Denial of Service (DDoS) attacks within Software-Defined Networking (SDN) environments. The escalating threat of DDoS attacks poses significant challenges to SDN security, necessitating innovative detection methods. Our approach leverages a Multi-Layer Perceptron (MLP) model trained on a comprehensive SDN traffic dataset, exhibiting enhanced accuracy and efficiency compared to traditional machine learning algorithms. Integration with the Ryu controller facilitates real-time DDoS attack detection in live SDN environments, showcasing the practical applicability of deep learning in enhancing network security. We emphasize the creation of a robust SDN traffic dataset that enables rigorous evaluation and comparison of detection techniques, addressing a critical gap in current research. Through advancements in deep learning, our study underscores the importance of developing sophisticated security mechanisms to safeguard SDN architectures against evolving cyber threats. The effectiveness of our proposed method signifies a substantial contribution to the field, promoting the integrity and availability of network resources amidst increasing DDoS vulnerabilities. Our work not only highlights the technical prowess of deep learning in SDN security but also underscores the imperative for ongoing research to refine and optimize detection methods. By addressing key limitations and exploring hybrid approaches, we aim to fortify SDN networks against malicious activities, paving the way for robust and adaptive security solutions in dynamic network environments. © 2024 IEEE.
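A minimal sketch of such a detection pipeline (synthetic features stand in for the SDN traffic dataset; the layer sizes are illustrative, not the paper's):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# hypothetical per-flow features: packet rate, byte rate, flow duration, source-IP entropy
X = rng.random((1000, 4))
y = (X[:, 0] + X[:, 3] > 1.2).astype(int)    # synthetic "DDoS" label

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0))
clf.fit(X[:800], y[:800])
print("holdout accuracy:", clf.score(X[800:], y[800:]))
```

In a deployment like the one described, the trained model would score flow statistics polled by the controller (Ryu, in the paper) and trigger mitigation when attack traffic is flagged.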
The increase of traffic in Internet of Multimedia Things networks leads to additional load on servers; therefore, this paper focuses on server load balancing in multimedia Internet-of-Things networks. Software-defined networking technology has been used to achieve load balancing, as software-defined networks with new features have improved load balancing in multimedia Internet-of-Things networks. In this study, a long short-term memory (LSTM) recurrent neural network is used to predict the server load, and a fuzzy system is then used to accurately determine the server levels. The proposed approach also saves energy and reduces server overhead. © 2024 IEEE.
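A sketch of the two stages, assuming loads normalized to [0, 1] (the window size, hidden width, and triangular memberships are illustrative choices):

```python
import torch
import torch.nn as nn

class LoadLSTM(nn.Module):
    """One-step-ahead server-load forecaster."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, x):            # x: (batch, window, 1) of past loads
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])   # predicted next load

def fuzzy_level(load):
    """Triangular memberships mapping a forecast load to server levels."""
    low = max(0.0, min(1.0, (0.5 - load) / 0.5))
    high = max(0.0, min(1.0, (load - 0.5) / 0.5))
    medium = max(0.0, 1.0 - abs(load - 0.5) / 0.5)
    return {"low": low, "medium": medium, "high": high}

forecast = LoadLSTM()(torch.rand(1, 12, 1)).item()   # untrained net, illustration only
print(fuzzy_level(min(max(forecast, 0.0), 1.0)))
```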
Deploying multiple controllers in the control plane of software-defined networks increases scalability, but it also brings challenges. Several articles have tried to solve this problem, but they either used a single-level threshold, which yields no precise classification of controllers, or, when they used a multi-level threshold, assumed the capacities of the controllers to be identical. In this article, controller load balancing is discussed: multi-level thresholds are used to accurately determine the status of controllers, and switch migration operations are used to balance the load. In this operation, the degree of load imbalance of the target controller and the migration efficiency, which includes the level of the target controller after the switch migration and the distance between the migrating switch and the target controller, are used to select the suitable target controller and migrating switch. The proposed scheme reduces the controller response time, migration cost, and communication overhead while improving throughput. The results show that our scheme has better comprehensive performance than the other schemes in terms of response time. © 2024 IEEE.
The Internet of Multimedia Things (IoMT) is an evolution of the IoT aimed at delivering multimedia streams as part of its realization. The IoMT is becoming increasingly appealing. Traditional computer network architectures are not robust enough to accommodate the rapid growth of IoMT networks. As a result, software-defined networks (SDNs) are utilized, which can centrally provide an overview of all network resources. SDNs enable advanced management by separating the control layer from the data layer. They introduce new capabilities to enhance load balancing. This research focuses on simultaneous load balancing between two key elements: servers and controllers. This approach allows us to prevent simultaneous issues in different network sections. Additionally, this approach employs long short-term memory prediction for forecasting the load on servers and controllers and a fuzzy system for distributing the load among servers and domains. Simulation results indicate that the proposed approach is highly effective in appropriately distributing load among servers in each network domain. The findings enable us to manage and optimize software-defined IoT networks more accurately using the proposed approach. This improves the quality of service provided to users and contributes to cost reduction and increased productivity. © 2024 IEEE.
Journal Of Engineering Research (23071877)
This article presents a comprehensive exploration of the architecture and various approaches in the domain of cloud computing and software-defined networks. The salient points addressed in this article encompass: Foundational Concepts: An overview of the foundational concepts and technologies of cloud computing, including software-defined cloud computing. Algorithm Evaluation: An introduction and evaluation of various algorithms aimed at enhancing network performance. These algorithms include Intelligent Rule-Based Metaheuristic Task Scheduling (IRMTS), reinforcement learning algorithms, task scheduling algorithms, and Priority-aware Semi-Greedy (PSG). Each of these algorithms contributes uniquely to optimizing Quality of Service (QoS) and data center efficiency. Resource Optimization: An introduction and examination of cloud network resource optimization based on presented results and practical experiments, including a comparison of the performance of different algorithms and approaches. Future Challenges: An investigation and presentation of challenges and future scenarios in the realm of cloud computing and software-defined networks. In conclusion, by introducing and analyzing simulators like Mininet and CloudSim, the article guides the reader in choosing the most suitable simulation tool for their project. Through its comprehensive analysis of the architecture, methodologies, and prevalent algorithms in cloud computing and software-defined networking, this article aids the reader in achieving a deeper understanding of the domain. Additionally, by presenting the findings and results of conducted research, it facilitates the discovery of the most effective and practical solutions for optimizing cloud network resources. © 2024 The Authors
Cheng, Y.,
Vijayaraj, A.,
Sree Pokkuluri, K.,
Salehnia, T.,
Montazerolghaem, A.,
Rateb, R. IEEE Access (21693536)12pp. 139056-139075
Intelligent Transport Systems (ITS) are gradually progressing to practical application because of the rapid growth in network and information technology. Currently, the low-latency ITS requirements are hard to achieve in the conventional cloud-based Internet of Vehicles (IoV) infrastructure. In the context of IoV, Vehicular Fog Computing (VFC) has become recognized as an inventive and viable architecture that can effectively decrease the time required for the computation of diverse vehicular application activities. Vehicles receive rapid task execution services from VFC. The benefits of fog computing and vehicular cloud computing are combined in a novel concept called fog-based Vehicular Ad Hoc Networks (VANETs). These networks depend on a movable power source, so they have specific limitations; cost-effective routing and load distribution in VANETs pose additional difficulties. In this work, a novel method is developed for vehicular applications to solve the difficulty of allocating limited fog resources and to minimize service latency by using parked vehicles. The improved heuristic algorithm called Revised Fitness-based Binary Battle Royale Optimizer (RF-BinBRO) is proposed to solve the problems of vehicular networks effectively. Additionally, the combination of Deep Adaptive Reinforcement Learning (DARL) and the improved BinBRO algorithm effectively analyzes resource allocation, vehicle parking, and movement status. The parameters are tuned using RF-BinBRO to achieve better transportation performance. To assess the performance of the proposed algorithm, simulations are carried out. The results show that the developed VFC resource allocation model attains higher service satisfaction than traditional resource allocation methods. © 2013 IEEE.
Salehnia, T.,
Miarnaeimi, F.,
Izadi, S.,
Ahmadi, M.,
Montazerolghaem, A.,
Mirjalili, S.,
Abualigah, L. pp. 625-651
Finding the threshold vector that gives the best performance of the image segmentation system is significant in Multi-level Thresholding Image Segmentation (MTIS) methods. Meta-Heuristic (MH) algorithms are among the techniques that can find reasonably good optimal thresholds with reasonable computational resources. We use a hybrid of the Whale Optimization Algorithm (WOA) and Moth-Flame Optimization (MFO), called MFWOA, for MTIS. In MFWOA, solutions during the exploitation phase are updated using the operators of WOA, while in the exploration phase only the operators of MFO are used. The inverse Otsu (IO) function is used as the fitness function for MFWOA. Experiments in image segmentation show that the proposed MFWOA method is more accurate than the compared algorithms as indicated by two performance measures: Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM). It is also observed that MFWOA is faster than WOA and slower than MFO in terms of execution time; in some cases, the proposed algorithm is faster than the other algorithms. The results demonstrate that the hybrid MFWOA algorithm solves MTIS problems better than both WOA and MFO and can obtain better thresholds that increase the performance of the MTIS system. © 2024 Elsevier Inc. All rights reserved.
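For reference, the Otsu objective whose inverse serves as MFWOA's fitness can be sketched directly: a threshold vector partitions the gray-level histogram into classes, and the between-class variance of that partition is what the optimizer drives up:

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """Otsu objective for a multi-level threshold vector."""
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    bounds = [0, *sorted(int(t) for t in thresholds), len(hist)]
    total_mean = (p * levels).sum()
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):      # one term per class
        w = p[lo:hi].sum()                           # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - total_mean) ** 2
    return var

hist = np.bincount(np.random.default_rng(0).integers(0, 256, 10_000), minlength=256)
print(between_class_variance(hist, [85, 170]))       # fitness of one candidate vector
```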
Internet of Things (The Netherlands) (25426605)22
In the future, it is anticipated that software-defined networking (SDN) will become the preferred platform for deploying diverse networks. Compared to traditional networks, SDN separates the control and data planes for efficient domain-wide traffic routing and management. The controllers in the control plane are responsible for programming data plane forwarding devices, while the top layer, the application plane, enforces policies and programs the network. The different levels of the SDN use interfaces for communication. However, SDN faces challenges with traffic distribution, such as load imbalance, which can negatively affect the network performance. Consequently, developers have developed various SDN load-balancing solutions to enhance SDN effectiveness. In addition, researchers are considering the potential of implementing some artificial intelligence (AI) approaches into SDN to improve network resource usage and overall performance due to the fast growth of the AI field. This survey focuses on the following: Firstly, analyzing the SDN architecture and investigating the problem of load balancing in SDN. Secondly, categorizing AI-based load balancing methods and thoroughly assessing these mechanisms from various perspectives, such as the algorithm/technique employed, the tackled problem, and their strengths and weaknesses. Thirdly, summarizing the metrics utilized to measure the effectiveness of these techniques. Finally, identifying the trends and challenges of AI-based load balancing for future research. © 2023 Elsevier B.V.
IEEE Transactions on Intelligent Transportation Systems (15580016)24(12)pp. 14718-14731
Due to the rapid growth of the Internet of Vehicles (IoV) and the rise of multimedia services, IoV networks' servers and switches are facing resource crises. Multimedia vehicles connected to the Internet of Things are increasing; there are millions of vehicles and heavy multimedia traffic in the IoV network. The network's scarcity of resources results in overload, which, in turn, leads to a degradation of both Quality of Service (QoS) and Quality of Experience (QoE). Conversely, when resources are abundant, it leads to unnecessary energy wastage. Managing IoV network resources optimally while considering constraints such as Energy, Load, QoS, and QoE is a complex challenge. To address this, the study proposes a solution by decomposing the problem and designing a modular architecture named ELQ². This architecture enables simultaneous control of the mentioned constraints, effectively reducing overall complexity. To achieve this objective, network softwarization and virtualization concepts are employed. This modern architecture allows dynamic adjustment of the scale of resources on demand, effectively reducing energy usage. Additionally, the architecture provides other potentials, such as the distribution of multimedia traffic among servers, determining routes with high QoS for traffic, and selecting media with high QoE. A real test field is provided by the Floodlight controller, Open vSwitch, and Kamailio server tools to evaluate the performance of ELQ². The findings suggest that ELQ² holds promise in reducing the count of active servers and switches via effective resource management. Additionally, it demonstrates enhancements in various QoS and QoE parameters, encompassing throughput, multimedia delay, R factor, and MOS, accomplished through load balancing strategies. As an illustration, the deployment of flows has achieved a commendable success rate of 95% owing to SDN-based and comprehensive management practices encompassing all network resources. © 2000-2011 IEEE.
Concurrency and Computation: Practice and Experience (15320626)35(26)
Nowadays data center networks (DCNs) must handle the ever-growing load generated by diverse applications, particularly under concurrent flow requests and the so-called mice flows (MFs) to elephant flows (EFs) ratio. Concentrating on a binary vision of flow size classification (EF or MF) results in unpredicted load imbalance, because it neglects distinctions among EFs, which span a wide range of sizes. As a result, some EFs might utilize a path whose qualities exceed the given EF's demands, while another EF with higher requirements is left to use an over-utilized path. This article proposes FMap, a fuzzy map for scheduling EFs through our proposed variant of the traveling salesperson problem (TSP) in DCNs. FMap represents a novel EF scheduling scheme that integrates flow prioritization and routing decisions in the event of parallel incoming flows, alongside the cooperation of the controller and OpenFlow switches in the software-defined networking (SDN) paradigm. FMap adopts a fuzzy inference process to overcome the vagueness of EF resource allocation. Mainly, FMap proposes a new variant of the TSP (optimized by a genetic algorithm) that enables group forwarding of EFs with minimum cost. FMap reduces the total hop count of EFs by considering a single optimal path for delivering groups of EFs that share the same tag (priority). The outstanding results represent a major improvement compared with equal-cost multipath (ECMP), Hedera, Sonum, and Size-KP-PSO. Particularly, the results illustrate improvements of 3.76×, 0.21×, 0.15×, 0.03×, and 0.03× in terms of total hop count, EF flow completion time, packet loss, goodput, and received packets, respectively, compared with Size-KP-PSO. © 2023 John Wiley & Sons, Ltd.
Journal of Ambient Intelligence and Humanized Computing (18685145)14(9)pp. 12981-13001
Today, the multimedia over IP (MoIP) network has become a cost-effective and efficient alternative to the public switched telephone network (PSTN). Free applications for multimedia transmission over the Internet have become increasingly popular around the world. This communication consists of two phases, i.e., the signaling phase and the media exchange phase. The SIP protocol is responsible for MoIP network signaling to provide services such as VoIP, voice and video conferencing, video on demand (VoD), and instant messaging. This application layer protocol has been standardized by the IETF for initiating, managing, and tearing down multimedia sessions and has been widely used as the main signaling protocol on the Internet. The signaling and media are handled by SIP proxies and network switches, respectively. One of the most critical challenges in MoIP is the overloading of SIP proxies and network switches, because of which a wide range of network users experiences a sharp drop in service quality. Overload occurs when there are not enough processing resources and memory to process all the messages due to the lack of proper routing. This study aims to model the routing problem in MoIP by providing a framework based on software-defined networking (SDN) technology and a convex mathematical programming model to prevent overload. The proposed framework is simulated and implemented using various scenarios and network topologies. The results show that throughput, latency, message retransmission rate, and resource consumption have improved using the proposed approach. © 2022, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
By separating the network control plane from the data plane, Software-Defined Networking (SDN) is a networking architecture that offers a centralized view of the network. However, as the network grows, several controllers are needed, and their number and placement create the controller placement problem (CPP). Many studies in this field have assumed that network information is fully available; in practice, due to dynamic conditions, the obtained information carries uncertainty. Different approaches exist for facing this challenge, and robust optimization techniques have been an effective one. In robust optimization, the goal is to make a feasible decision that optimizes the objective function in the worst case. Robust methods also differ; we use the Bertsimas method. In this article, the traffic rate produced by each switch and the capacity (processing power) of each controller are the uncertain parameters of the problem. Finally, we show the advantage of the robust method in a scenario: if the deviations that occur during network operation are anticipated, the actual value of the objective function remains equal or is even reduced in some cases. © 2023 IEEE.
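For orientation, the generic Bertsimas-Sim robust counterpart of a single uncertain linear constraint, which keeps the problem linear (a textbook form, not the article's exact CPP model): with nominal coefficients a_ij, maximum deviations â_ij, and uncertainty budget Γ_i,

```latex
\sum_j a_{ij} x_j + z_i \Gamma_i + \sum_j p_{ij} \le b_i, \qquad
z_i + p_{ij} \ge \hat{a}_{ij}\, y_j, \qquad
-y_j \le x_j \le y_j, \qquad
z_i,\ p_{ij},\ y_j \ge 0 .
```

Here z_i, p_{ij}, and y_j are auxiliary variables obtained by dualizing the worst case over at most Γ_i simultaneous coefficient deviations; tuning Γ_i trades conservatism against protection.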
Fog computing (FC) is an emerging paradigm developed to increase the speed of data processing in the Internet of Things (IoT). In such an environment, optimal Task Scheduling (TSch) of IoT device requests can improve the performance and efficiency of IoT systems. This chapter introduces a task-request scheduling method for an IoT-Fog System (IoTFS) based on Software Defined Networks (SDN) for two reasons. The first reason is the fully flexible infrastructure virtualization that uses the IoT-Fog network's TSch capabilities to work on an active platform. The second reason is to reduce latency for IoT devices. An SDN-IoT-Fog computing model is proposed, which reduces network latency and traffic overhead by using a centralized IoTFS controller and coordinating network elements in the SDN controller layer. A hybrid Meta-Heuristic (MH) algorithm combining the Aquila Optimizer (AO) and the Whale Optimization Algorithm (WOA), called AWOA, is proposed to schedule IoT task requests and allocate FC resources to them, reducing the task completion time of IoT devices. The purpose of the proposed SDN-based AWOA method is to optimize task Execution Time (ET), Makespan Time (MT), and Throughput Time (TT), which are investigated in this chapter as Quality of Service (QoS) criteria. Experiments show that the proposed SDN-based AWOA outperforms the compared algorithms on different Evaluation Metrics (EMs). © 2024 Elsevier Inc. All rights reserved.
In this work, we find a way to estimate content popularity in clustered networks. Since software-defined methods make planning more automatic and flexible, we use them for content popularity estimation. For this purpose, the content popularity estimation algorithm is implemented by the base station located in the control plane. At the data plane level there are users, among whom several helper users cache content on their devices both to benefit other users and to reduce base-station traffic. First, we cluster the helpers using clustering methods. Then we present an optimal algorithm to estimate content popularity in each cluster. Finally, in a cellular network with device-to-device (D2D) communication capability, we use cache placement to maximize offloaded traffic. Evaluations show that cache placement based on the proposed content popularity estimation algorithm increases the hit rate and offload. © 2023 IEEE.
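A compact sketch of the two data-plane steps (cluster the helpers, then estimate per-cluster popularity empirically; the cluster count and request logs are invented):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
helpers = rng.random((30, 2))                        # helper positions, arbitrary units
labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(helpers)

# hypothetical request logs per cluster: lists of requested content ids
requests = {0: [3, 3, 7], 1: [3, 9, 9, 9], 2: [7, 7, 1], 3: [5]}
for cluster, log in requests.items():
    ids, counts = np.unique(log, return_counts=True)
    popularity = counts / counts.sum()               # empirical popularity estimate
    print(cluster, dict(zip(ids.tolist(), popularity.round(2).tolist())))
```

Cache placement would then fill each cluster's helper caches with its most popular items, which is what drives the reported hit-rate and offload gains.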
Today, the Internet of Things (IoT) is used to collect data through sensors and to store and process it. As the IoT has limited processing and computing power, it is integrated with the cloud. Cloud computing processes large data at high speed, but sending this large data requires a lot of bandwidth. Therefore, we use fog computing, which is close to IoT devices, so the delay is reduced. Both cloud and fog computing are used to increase the performance of the IoT. Job scheduling of IoT workflow requests based on cloud-fog computing plays a key role in responding to these requests; scheduling to reduce makespan is very important in real-time systems. Also, one way to improve system performance is to reduce energy consumption. In this article, a three-objective Harris Hawks Optimizer (HHO) scheduling algorithm is proposed to reduce makespan and energy consumption and to increase reliability. Dynamic voltage and frequency scaling (DVFS), which lowers the processor frequency, is used to reduce energy consumption. HHO is then compared with other algorithms such as the Whale Optimization Algorithm (WOA), Firefly Algorithm (FA), and Particle Swarm Optimization (PSO), and the proposed algorithm shows better performance on the experimental data. The proposed method achieved an average reliability of 83%, energy consumption of 14.95 KJ, and a makespan of 272.5 seconds. © 2022 IEEE.
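The DVFS lever deserves a one-line worked example: in the idealized CMOS model, with supply voltage scaled in proportion to frequency, dynamic energy per task falls quadratically with the frequency ratio, at the price of a longer runtime (hence the multi-objective trade-off with makespan). The constants below are made up:

```python
def task_energy(c_eff, v_nom, f_nom, cycles, r):
    """Idealized dynamic energy of one task at frequency ratio r (V scales with f)."""
    v, f = v_nom * r, f_nom * r
    power = c_eff * v ** 2 * f       # P = C_eff * V^2 * f  (watts)
    return power * (cycles / f)      # energy = P * time, scales as r^2  (joules)

# full speed vs. half speed: 1.44 J vs. 0.36 J for the same work, but twice the runtime
print(task_energy(1e-9, 1.2, 2e9, 1e9, 1.0), task_energy(1e-9, 1.2, 2e9, 1e9, 0.5))
```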
IEEE Transactions on Smart Grid (19493053)13(3)pp. 1952-1966
With an increase in the utilization of appliances, meeting the energy demand of consumers by traditional power grids is an important issue. The success of Demand Response (DR) depends decisively on real-time data communication between consumers and suppliers; hence, a scalable and programmable communication network is required to handle the data generated. We prove that the problem of DR global load balancing with energy and data constraints is NP-hard, so a dynamic and self-configurable network technology known as Software-defined Networking (SDN) can be an efficient solution. In order to handle DR communication challenges, an SDN-enabled framework for DR flow management is designed in this paper. This framework is based on two-tier cloud computing and manages energy and data traffic seamlessly. We also equip this framework with Network Functions Virtualization (NFV) technology. The proposed framework is implemented on a practical testbed, which includes Open vSwitch, the Floodlight controller, and OpenStack. Its performance is appraised by comprehensive experiments and scenarios. Based on the results, it achieves low delay and high throughput, and improves the Peak to Average Ratio (PAR) by balancing the energy and data on the entire DR network. © 2010-2012 IEEE.
IEEE Internet of Things Journal (23274662)9(3)pp. 2432-2442
Internet of Multimedia Things (IoMT) is becoming more attractive day by day and provides more services to Internet users. The ever-increasing multimedia applications and services have led to an outburst in IoMT. The multimedia things connected to IoMT are also increasing: millions of devices and a high volume of traffic. This large traffic is directed toward the servers of the service provider in the IoMT cloud through the network switches. As a result, the IoMT infrastructure is facing a crisis in the resource management of switches and servers from two aspects: 1) load imbalance and 2) energy loss. This article proves that the problem of optimal resource management of IoMT networks with energy and load constraints simultaneously is an NP-hard problem with high time complexity. The problem is therefore decomposed. We then propose a modular system of energy and load control in IoMT using the concepts of network softwarization and virtual resources. The proposed controller first dynamically adjusts the resources through accurate determination of the IoMT network size. It then distributes the load between the IoMT servers and routes the traffic between switches to the desired server. Open vSwitch, the Floodlight controller, and Kaa servers are, respectively, used for implementing the switches, controller, and servers of IoMT in the test platform. The results show that the proposed system both minimizes the number of active IoMT servers and switches and distributes the load between them. As a result, the parameters for evaluating the quality of service and quality of experience, such as throughput, multimedia delay, R factor, and mean opinion score, improved. © 2021 IEEE.
Journal of Supercomputing (15730484)78(12)pp. 14471-14503
Nowadays, voice over IP (VoIP) is a cost-effective and efficient technology in the communications industry. Free applications for transferring multimedia on the Internet are becoming more attractive and pervasive day by day. Nevertheless, the traditional, closed, and hardware-defined nature of the VoIP networks' structure makes the management of these networks complicated and costly. Besides, its elementary and straightforward mechanisms for routing call requests have lost their efficiency, causing problems such as SIP server overload. In order to tackle these problems, we introduce VoIP network softwarization and virtualization and propose two novel frameworks in this article. In this regard, we take advantage of the SDN and NFV concepts, separating the data and control planes and providing the possibility of centralized and softwarized control of this network, which leads to effective routing. NFV also makes the network's dynamic resource management possible by virtualizing VoIP network functions. The proposed frameworks are implemented in a real testbed, including Open vSwitch and Floodlight, and examined under various scenarios. The results demonstrate an improvement in signaling and media quality in the VoIP network. As an example, the average throughput and resource efficiency increased by at least 28% and the average response time decreased by 34%. The overall latency has also been reduced by almost 39%. © 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
IEEE Transactions on Network and Service Management (19324537)18(3)pp. 2902-2914
With the rapid growth of massive applications and parallel flow requests in Data Center Networks (DCNs), today's providers are confronting challenges in flow forwarding decisions. Since Software Defined Networking (SDN) provides fine granular control, it can be intelligently programmed to distinguish between flow requirements. The present article proposes a knapsack model in which the link bandwidth and incoming flows are modeled as the knapsack capacity and items, respectively. Each flow has two aspects, size and value, acquired through flow size extraction and the type-of-service value assigned by the SDN controller, respectively. Indeed, the current work maps the incoming flow size range onto Type of Service (ToS) decimal values: the lower the flow size category, the higher the value dedicated to the flow. Particle Swarm Optimization (PSO) optimizes the knapsack problem, forwarding the KP-PSO-selected flows first and the non-selected flows second. To address the shortcomings of these methods in the event of dense parallel flow detection, the present study keeps the link under a 70% load threshold for simultaneous requests. Experimental results indicate that the proposed method outperforms Sonum, Hedera, and ECMP in terms of flow completion time, packet loss rate, and goodput regarding flow size requirements. © 2004-2012 IEEE.
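The knapsack model itself is easy to state in code. The paper optimizes it with PSO; the exact dynamic program below (toy sizes, hypothetical ToS values) only illustrates what is being optimized, including the 70% link-load cap:

```python
def select_flows(flows, capacity):
    """0/1 knapsack over incoming flows: size = bandwidth demand (Mbps),
    value = ToS priority (smaller flows carry larger values).
    Returns indices of the flows to forward first."""
    n = len(flows)
    best = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i, (size, value) in enumerate(flows, 1):
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]
            if size <= c:
                best[i][c] = max(best[i][c], best[i - 1][c - size] + value)
    chosen, c = [], capacity                 # backtrack to recover the selection
    for i in range(n, 0, -1):
        if best[i][c] != best[i - 1][c]:
            chosen.append(i - 1)
            c -= flows[i - 1][0]
    return chosen[::-1]

link_mbps = 1000
flows = [(120, 7), (480, 2), (60, 9), (300, 4)]   # (size, ToS value), hypothetical
print(select_flows(flows, int(0.7 * link_mbps)))  # keep the link under the 70% threshold
```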
Concurrency and Computation: Practice and Experience (15320626)33(21)
Nowadays, the Voice Over IP (VoIP) technology is an important component of the communications industry as well as a low-cost alternative to Public Switched Telephone Networks. Communication in VoIP networks consists of two main phases, i.e., signaling and media exchange. VoIP servers are responsible for the signaling exchange, using the Session Initiation Protocol (SIP) as the signaling protocol. The saturation of SIP server resources is one of the issues in VoIP networks, causing problems such as overload or energy waste. Resource saturation occurs mainly due to a lack of integrated server resource management: in traditional VoIP networks, management and routing are distributed among all equipment, including servers, so these servers are overloaded during peak times and waste energy during idle times. Given the importance of this issue, this paper introduces a framework based on Software-Defined Networking technology for SIP server resource management. The advantage of this framework is its global view of all server resources. In this framework, a resource allocation optimization problem and resource autoscaling are presented to deal with the problems posed. The goal is to maximize total throughput and minimize energy consumption; in this regard, we seek a balance between efficiency and energy. The proposed framework is implemented on an actual testbed. The results show that the proposed framework succeeds in achieving these goals. © 2021 John Wiley & Sons Ltd.
Cluster Computing (13867857)24(2)pp. 591-610
Data centers are growing densely and provide various services to millions of users through a collection of limited servers. That is why large-scale data center servers are threatened by the overload phenomenon. In this paper, we propose a framework for data centers that is based on Software-defined networking (SDN) technology and, taking advantage of this technology, seeks to balance the load between servers and prevent overloading of a given server. In addition, this framework provides the required services quickly and with low computational complexity. The proposed framework is implemented in a real testbed, and a wide variety of experiments are carried out in comprehensive scenarios to evaluate its performance. Furthermore, the framework is evaluated with four data center architectures: Three-layer, Fat-Tree, BCube, and DCell. In the testbed, Open vSwitch v2.4.1 and Floodlight v1.2 are used to implement the switches and OpenFlow controllers. The results show that in all four SDN-based architectures the load balance between the servers is well maintained, and a significant improvement is made in parameters such as throughput, delay, and resource consumption. © 2020, Springer Science+Business Media, LLC, part of Springer Nature.
International Journal of Communication Systems (10991131)33(3)
Due to the limited energy of sensor nodes in wireless sensor networks, extending the network lifetime is a major challenge that can be formulated as an optimization problem. In this paper, we propose a distributed iterative algorithm based on alternating direction method of multipliers with the aim of maximizing sensor network lifetime. The features of this algorithm are the use of local information, low overhead of message passing, low computational complexity, fast convergence, and, consequently, reduced energy consumption. In this study, we present the convergence results and the number of iterations required to achieve the stopping criterion. Furthermore, the impact of problem size (number of sensor nodes) on the solution and constraints violation is studied, and, finally, the proposed algorithm is compared with one of the well-known subgradient-based algorithms. © 2019 John Wiley & Sons, Ltd.
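For reference, the scaled-form ADMM template that such distributed algorithms instantiate (the paper's lifetime objective and constraints are mapped onto f, g, and the coupling constraint):

```latex
\min_{x,z}\; f(x) + g(z) \quad \text{s.t.} \quad Ax + Bz = c, \\
x^{k+1} = \arg\min_x \Big( f(x) + \tfrac{\rho}{2}\,\lVert Ax + Bz^k - c + u^k \rVert_2^2 \Big), \\
z^{k+1} = \arg\min_z \Big( g(z) + \tfrac{\rho}{2}\,\lVert Ax^{k+1} + Bz - c + u^k \rVert_2^2 \Big), \\
u^{k+1} = u^k + Ax^{k+1} + Bz^{k+1} - c .
```

Each node solves only its local subproblem and exchanges small messages with neighbors, which is where the low message-passing overhead and fast-convergence claims come from.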
IEEE Transactions on Green Communications and Networking (24732400)4(3)pp. 873-889
The rapid growth of communications and multimedia network services such as Voice over Internet Protocol (VoIP) has caused these networks to face a resource crisis from two perspectives: 1. lack of resources and, as a result, overload; 2. surplus of resources and, as a result, energy loss. Cloud computing allows the scale of resources to be reduced or increased on demand; many of the gains obtained from cloud computing come from resource sharing and virtualization technology. On the other hand, the emerging concept of Software-Defined Networking (SDN) can provide a global view of the entire network for integrated resource management. Network Function Virtualization (NFV) can also be used to virtually implement a variety of network devices and functions. In this paper, we present an energy-efficient framework called GreenVoIP to manage the resources of virtualized cloud VoIP centers. By managing the number of VoIP servers and network equipment, such as switches, this framework not only prevents overload but also supports green computing by saving energy. Finally, GreenVoIP is implemented and evaluated on real platforms, including Floodlight, Open vSwitch, and Kamailio. The results suggest that the proposed framework can minimize the number of active devices, prevent overloading, and provide service quality requirements. © 2017 IEEE.
IEEE Internet of Things Journal (23274662)7(4)pp. 3323-3337
Internet of Things (IoT) offers a variety of solutions to control industrial environments. The new generation of IoT consists of millions of machines generating huge traffic volumes; this challenges the network in achieving the Quality-of-Service (QoS) and avoiding overload. Diverse classes of applications in IoT are subject to specific QoS treatments. In addition, traffic should be distributed among IoT servers based on their available capacity. In this article, we propose a novel framework based on software-defined networking (SDN) to fulfill the QoS requirements of various IoT services and to balance traffic between IoT servers simultaneously. At first, the problem is formulated as an integer linear programming (ILP) model that is NP-hard. Then, a predictive and proactive heuristic mechanism based on time-series analysis and fuzzy logic is proposed. Afterward, the proposed framework is implemented in a real testbed, which consists of the Open vSwitch, Floodlight controller, and Kaa servers. To evaluate the performance, various experiments are conducted under different scenarios. The results indicate the improved IoT QoS parameters, including throughput and delay, and illustrate the nonoccurrence of overload on IoT servers in heavy traffic. Furthermore, the results show improved performance compared to similar methods. © 2014 IEEE.
Telecommunication Systems (10184864)67(2)pp. 309-322
The extent and diversity of systems provided by IP networks have led various technologies to integrate different types of access networks and converge toward the next generation network (NGN). The session initiation protocol (SIP), owing to features such as its text-based format, end-to-end connection, independence from the type of transmitted data, and support for various forms of transmission, is an appropriate choice of signalling protocol for connecting two IP network users. These advantages have made SIP the signalling protocol of the IP multimedia subsystem (IMS), a proposed signalling platform for NGNs. Despite all these advantages, SIP lacks an appropriate mechanism for addressing overload, causing serious problems for SIP servers. SIP overload occurs when a SIP server does not have enough resources to process messages, and the performance of SIP servers is largely degraded during overload periods because of SIP's retransmission mechanism. In this paper, we propose an advanced mechanism that improves the window-based overload control of RFC 6357, in which a window limits the number of messages sent toward a SIP proxy server. We propose a distributed adaptive window-based overload control algorithm that does not use explicit feedback from the downstream server; the number of confirmation messages is used as a measure of the downstream server's load. Thus, the proposed algorithm imposes no additional complexity or processing on the overloaded downstream server, making it a robust approach. Our algorithm is developed and implemented on an open source proxy. The evaluation results show that the proposed method maintains throughput close to the theoretical throughput, practically and fairly. To the best of our knowledge, this is the only SIP overload control mechanism implemented on a real platform without using explicit feedback. © 2017, Springer Science+Business Media New York.
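A schematic of the implicit-feedback window idea (the AIMD constants and initial window are arbitrary; the paper derives its adaptation from confirmation-message counts rather than this exact additive step):

```python
class AdaptiveWindow:
    """Window-based overload control without explicit feedback: the
    upstream sender infers downstream load from confirmations and
    adapts its sending window AIMD-style."""
    def __init__(self, wmin=1, wmax=256):
        self.window, self.wmin, self.wmax = 8, wmin, wmax
        self.in_flight = 0

    def can_send(self):
        return self.in_flight < self.window

    def on_send(self):
        self.in_flight += 1

    def on_confirm(self):        # timely confirmation: grow additively
        self.in_flight -= 1
        self.window = min(self.wmax, self.window + 1)

    def on_timeout(self):        # missing confirmation: shrink multiplicatively
        self.in_flight -= 1
        self.window = max(self.wmin, self.window // 2)
```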
Wireless sensor networks (WSNs) have the potential for realizing economical automation systems in a smart grid, where different types of sensor motes can be used to monitor a wide range of the smart grid environment's parameters. The energy restriction of wireless sensor nodes, and consequently the lifetime of the network, is a real challenge in WSN applications like the smart grid. The WSN lifetime can be formulated as an optimization problem. In this paper, the alternating direction method of multipliers (ADMM) is used to implement a novel distributed iterative algorithm for the problem of extending the sensor network lifetime. The proposed algorithm has some striking features, including the use of local information, low overhead of message passing, low computational complexity, fast convergence, and reduced energy consumption. Experimental results related to the convergence and the number of iterations required to achieve the stopping criterion are presented. In addition, the results of the proposed algorithm are compared with subgradient methods; in comparison, the proposed ADMM-based algorithm outperforms the other methods. © 2018 IEEE.
IEEE Internet of Things Journal (23274662)5(1)pp. 206-218
The advanced metering infrastructure (AMI) is one of the main services of smart grid (SG), which collects data from smart meters (SMs) and sends them to utility company meter data management systems (MDMSs) via a communication network. In the next generation AMI, both the number of SMs and the meter sampling frequency will dramatically increase, thus creating a huge traffic load which should be efficiently routed and balanced across the communication network and MDMSs. This paper initially formulates the global load-balanced routing problem in the AMI communication network as an integer linear programming model, which is NP-hard. Then, to overcome this drawback, it is decomposed into two subproblems and a novel software defined network-based AMI communication network is proposed called OpenAMI. This paper also extends the OpenAMI for the cloud computing environment in which some virtual MDMSs are available. OpenAMI is implemented on a real test bed, which includes Open vSwitch, Floodlight controller, and OpenStack, and its performance is evaluated by extensive experiments and scenarios. Based on the results, OpenAMI achieves low end-to-end delay and a high delivery ratio by balancing the load on the entire AMI network. © 2014 IEEE.
IEEE Transactions on Network and Service Management (19324537)15(1)pp. 184-199
VoIP is becoming a low-priced and efficient replacement for PSTN in communication industries. With a widely growing adoption rate, session initiation protocol (SIP) is an application layer signaling protocol, standardized by the IETF, for creating, modifying, and terminating VoIP sessions. Generally speaking, SIP routes a call request to its destination by using SIP proxies. With the increasing use of SIP, traditional configurations pose certain drawbacks, such as ineffective routing, un-optimized management of proxy resources (including CPU and memory), and overload conditions. This paper presents OpenSIP to upgrade the SIP network framework with emerging technologies, such as software-defined networking (SDN) and network function virtualization (NFV). SDN provides management that decouples the data and control planes along with software-based centralized control, which results in effective routing and resource management. Moreover, NFV assists SDN by virtualizing various network devices and functions. However, current SDN elements limit the inspected fields to layer 2-4 headers, whereas SIP routing information resides in the layer-7 header. A benefit of OpenSIP is that it enforces policies on SIP networking that are agnostic to higher layers with the aid of a deep packet inspection engine. The benefits of OpenSIP include programmability, cost reduction, unified management, routing, and efficient load balancing. This paper implements OpenSIP on a real testbed which includes Open vSwitch and the Floodlight controller. The results show that the proposed architecture has a low overhead and satisfactory performance and, in addition, can take advantage of a flexible scale-out design during application deployment. © 2004-2012 IEEE.
International Journal of Communication Systems (10991131)30(3)
The widespread use of Session Initiation Protocol as a signalling protocol has created various challenges. An important one is that its throughput can be severely degraded when an overload happens in the proxy server because of several retransmissions from the user agent. One common approach to overcome this problem is 'load balancing'. A balancer needs to know the status of the proxy servers, which is continuously gathered implicitly or explicitly; implicit methods have, on average, less overhead than explicit ones. This paper attempts to prevent throughput reduction by balancing the loads among available proxy servers properly using an implicit mechanism called History Weighted Average Response time. The proposed algorithm is robust because it incurs no extra processing on the proxy servers. The novelty of the mechanism is its use of 'response time history' to estimate the load currently being processed on servers. By implementing it in a real testbed, throughput and scalability are improved compared with an important state-of-the-art algorithm. This improvement stems from no need for modification of the SIP protocol, easy implementation and application, simple decision computations, and no need for extra feedback between servers and the load balancer. Copyright © 2015 John Wiley & Sons, Ltd.
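A sketch of the response-time-history mechanism (the smoothing factor and the dispatch-to-minimum rule are assumptions for illustration):

```python
class ResponseTimeBalancer:
    """History Weighted Average Response time, sketched: keep an
    exponentially weighted average of each proxy's response time and
    send the next call to the proxy that currently looks fastest."""
    def __init__(self, servers, alpha=0.3):
        self.avg = dict.fromkeys(servers)          # None until first measurement
        self.alpha = alpha

    def record(self, server, response_ms):
        prev = self.avg[server]
        self.avg[server] = response_ms if prev is None else \
            (1 - self.alpha) * prev + self.alpha * response_ms

    def pick(self):
        # unmeasured servers are tried first (treated as fastest)
        return min(self.avg, key=lambda s: (self.avg[s] is not None, self.avg[s] or 0.0))

lb = ResponseTimeBalancer(["proxy1", "proxy2"])
lb.record("proxy1", 120.0); lb.record("proxy2", 45.0)
print(lb.pick())    # -> proxy2, the faster-looking proxy
```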
Network Functions Virtualization (NFV) is a promising network architecture in which network functions are virtualized and decoupled from proprietary hardware. In modern datacenters, user network traffic requires a set of Virtual Network Functions (VNFs) as a service chain to process traffic demands. Traffic fluctuations in Large-scale DataCenters (LDCs) can result in overload and underload phenomena in service chains. In this paper, we propose a distributed approach based on the Alternating Direction Method of Multipliers (ADMM) to jointly load balance the traffic and horizontally scale VNFs up and down in LDCs with minimum deployment and forwarding costs. First, we formulate the targeted optimization problem as a Mixed Integer Linear Programming (MILP) model, which is NP-complete. Second, we relax it into two Linear Programming (LP) models to cope with overloaded and underloaded service chains. In small or medium size datacenters, the LP models can be run in a central fashion with low time complexity. However, in LDCs, the growing number of LP variables results in additional time consumption for the central algorithm. To mitigate this, our study proposes a distributed approach based on ADMM. The effectiveness of the proposed mechanism is validated in different scenarios. © 2017 IEEE.
IEEE Transactions on Network and Service Management (19324537)13(4)pp. 806-822
Network functions virtualization provides opportunities to design, deploy, and manage networking services. It utilizes cloud computing virtualization services that run on high-volume servers, switches, and storage hardware to virtualize network functions. Virtualization techniques can be used in IP multimedia subsystem (IMS) cloud computing to develop different networking functions (e.g., load balancing and call admission control). IMS network signaling happens through session initiation protocol (SIP). An open issue is the control of overload that occurs when an SIP server lacks sufficient CPU and memory resources to process all messages. This paper proposes a virtual load balanced call admission controller (VLB-CAC) for the cloud-hosted SIP servers. VLB-CAC determines the optimal "call admission rates" and "signaling paths" for admitted calls along with the optimal allocation of CPU and memory resources of the SIP servers. This optimal solution is derived through a new linear programming model. This model requires some critical information of SIP servers as input. Further, VLB-CAC is equipped with an autoscaler to overcome resource limitations. The proposed scheme is implemented in smart applications on virtual infrastructure (SAVI) which serves as a virtual testbed. An assessment of the numerical and experimental results demonstrates the efficiency of the proposed work. © 2016 IEEE.
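A toy linear program in the spirit of the model (two servers and made-up per-call CPU/memory costs; the real VLB-CAC model also chooses signaling paths):

```python
from pulp import LpMaximize, LpProblem, LpVariable, lpSum

servers = ["p1", "p2"]
cpu = {"p1": 100.0, "p2": 80.0}            # hypothetical available CPU units
mem = {"p1": 64.0, "p2": 48.0}             # hypothetical available memory units
cpu_per_call, mem_per_call = 0.8, 0.5      # hypothetical per-call costs

prob = LpProblem("call_admission", LpMaximize)
rate = {s: LpVariable(f"rate_{s}", lowBound=0) for s in servers}
prob += lpSum(rate.values())                    # maximize total admitted call rate
for s in servers:
    prob += rate[s] * cpu_per_call <= cpu[s]    # CPU budget per server
    prob += rate[s] * mem_per_call <= mem[s]    # memory budget per server
prob.solve()
print({s: rate[s].value() for s in servers})
```

An autoscaler like the paper's would add or remove server instances whenever the solved rates hit the capacity budgets.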
Transactions on Emerging Telecommunications Technologies (21613915)27(6)pp. 857-873
Session initiation protocol (SIP) is an application layer signalling protocol for the set-up, management and termination of multimedia networks such as voice over IP, standardised by the Internet Engineering Task Force. This protocol has also been recognised by the International Telecommunication Union as a main core of next-generation networks. When this network is overloaded for reasons such as improper design, sudden traffic surges, component errors, or a sudden reduction of processing capacity, its efficiency is considerably reduced. Overload occurs when the SIP proxy lacks sufficient CPU and memory resources to process all messages. Because overload cannot be prevented completely, it is important to equip SIP proxies with an effective overload mitigation mechanism. In this paper, capabilities of the transmission control protocol are used in the transport layer to reduce SIP proxy overload through the proper allocation of proxy resources. Most major work in this field assumes SIP over the user datagram protocol, which practically does not result in optimal throughput. To evaluate this approach, an Asterisk open source proxy is used. Our implementation results on a real testbed indicate the efficiency improvement of the Asterisk proxy under overload. Copyright © 2016 John Wiley & Sons, Ltd.
Proceedings - IEEE Global Communications Conference, GLOBECOM (25766813)
The Session Initiation Protocol (SIP) is an application-layer control protocol for creating, modifying and terminating multimedia sessions. An open issue is the control of overload that occurs when a SIP server lacks sufficient CPU and memory resources to process all messages. We prove that the problem of overload control in a SIP network with a set of n servers and limited resources is NP-hard. This paper proposes a Load-Balanced Call Admission Controller (LB-CAC), based on a heuristic mathematical model, to determine an optimal resource allocation that maximizes call admission rates given the limited resources of the SIP servers. LB-CAC determines the optimal "call admission rates" and "signaling paths" for admitted calls, along with the optimal allocation of CPU and memory resources of the SIP servers, through a new linear programming model. This happens by acquiring some critical information from the SIP servers. An assessment of the numerical and experimental results demonstrates the efficiency of the proposed method. © 2015 IEEE.
The widespread use of SIP as a signalling protocol in VoIP networks brings various challenges. SIP throughput can be severely degraded when proxy servers become overloaded, owing to repeated retransmissions from user agents. In this paper we aim to prevent this throughput reduction by properly distributing load over the available proxy servers. The proposed scheme uses server response time as the main decision factor. The algorithm is implemented in a real environment, with Spirent as the call generator and an Asterisk server as the load balancer. Comparing the proposed method with some well-known algorithms shows a considerable throughput improvement, up to 15% over a Round-Robin algorithm. © 2014 IEEE.
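A minimal sketch of response-time-driven dispatch follows; the EWMA smoother, its weight, and the three proxy names are assumptions of mine, since the paper only states that response time is the decision factor. Each new call simply goes to the proxy with the lowest smoothed response time.

import random

ALPHA = 0.2                                      # assumed EWMA smoothing factor
servers = {"proxy-a": 0.05, "proxy-b": 0.05, "proxy-c": 0.05}  # seconds

def pick_server():
    # The proxy with the least smoothed response time wins.
    return min(servers, key=servers.get)

def record_response(name, rtt):
    # Exponentially weighted moving average of observed response times.
    servers[name] = (1 - ALPHA) * servers[name] + ALPHA * rtt

for _ in range(5):
    s = pick_server()
    observed = random.uniform(0.02, 0.2)         # stand-in for a measured RTT
    record_response(s, observed)
    print(f"dispatched INVITE to {s} (observed {observed:.3f}s)")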
International Conference on Electrical Engineering, Computer Science and Informatics (EECSI) (2407439X)1pp. 46-51
To start voice, video, instant messaging and, more generally, multimedia communication, a session must first be established between the two participants. SIP (session initiation protocol) is an application-layer control protocol that establishes, manages and terminates such sessions. Because SIP is independent of the transport layer, SIP messages can be carried over a variety of transport-layer protocols, including TCP and UDP. The retransmission mechanism embedded in SIP can compensate for packet loss when needed; it is applied when SIP messages are transmitted over an unreliable transport-layer protocol such as UDP. Under proxy overload, however, retransmissions fill the proxy queue excessively, delay other calls, and add to the proxy's load. In the present work, with UDP as the transport-layer protocol, the INVITE retransmission timer (T1) is regulated appropriately and SIP performance is improved: by proposing an adaptive timer for INVITE message retransmission, we reduce session initiation time and consequently improve performance. The proposal was implemented and evaluated with the SIPp tool in a real network environment, and its accuracy and performance were demonstrated. © 2014, Institute of Advanced Engineering and Science. All rights reserved.
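One plausible shape for such an adaptive timer, sketched under assumptions of my own (the paper's actual estimator is not reproduced here), borrows TCP's smoothed round-trip-time estimator and clamps the result near RFC 3261's 500 ms default T1.

srtt, rttvar = None, None            # smoothed RTT and its mean deviation

def t1_update(sample):
    """Feed one measured INVITE->response delay (s); return the new T1 (s)."""
    global srtt, rttvar
    if srtt is None:
        srtt, rttvar = sample, sample / 2
    else:
        rttvar = 0.75 * rttvar + 0.25 * abs(srtt - sample)
        srtt = 0.875 * srtt + 0.125 * sample
    # RFC 3261's static default is T1 = 500 ms; the 0.1-2.0 s clamp here is
    # an assumed safety bound, not a value from the paper.
    return min(2.0, max(0.1, srtt + 4 * rttvar))

for d in (0.12, 0.35, 0.80, 0.60):
    print(f"measured {d:.2f}s -> T1 = {t1_update(d):.3f}s")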
Requests in SIP (Session Initiation Protocol) usually pass through several servers. Since keeping transaction state consumes considerable resources, the problem of distributing state among several servers arises, the goal being to increase throughput and server availability and to reduce overload. In this paper, the optimisation problem is formulated and implemented as a distributed algorithm on the Asterisk proxy server and evaluated with a Spirent traffic-generation device. The result is a more scalable SIP server that dynamically determines the number of SIP requests for which it remains stateful and assigns state management for the remaining requests to the server downstream of it. Current SIP servers are statically configured as either stateless or stateful and do not give optimal efficiency. © 2014 IEEE.
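The core decision can be pictured with a minimal sketch, assuming a fixed local state budget (the paper computes this number dynamically from an optimisation model; the constant below is illustrative): handle transactions statefully while the budget allows, otherwise delegate state to the downstream server.

STATE_BUDGET = 1000       # assumed max transactions kept stateful locally
active_state = 0

def on_request():
    """Decide, per incoming request, where its transaction state lives."""
    global active_state
    if active_state < STATE_BUDGET:
        active_state += 1
        return "stateful-here"        # keep transaction state on this proxy
    return "delegate-downstream"      # let the next server keep the state

def on_transaction_end():
    global active_state
    active_state = max(0, active_state - 1)

print(on_request())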
The extent and diversity of services provided by IP networks have led various technologies to move toward integrating different types of access network and converging on the next-generation network. Owing to features such as its text-based format, end-to-end connectivity, independence from the type of transmitted data, and support for various forms of transmission, SIP is an appropriate choice of signaling protocol for connecting two IP network users. These advantages have made SIP the signaling protocol of IMS, the proposed signaling platform for next-generation networks. Despite all these advantages, SIP lacks an appropriate mechanism for addressing overload. In this paper we improve on the window-based overload control of RFC 6537. In the window-based method, a window limits the number of messages that may be sent to an overloaded SIP proxy at any one time. We first use fuzzy logic to regulate the window size accurately, and then develop, implement, and evaluate the scheme on an Asterisk open-source proxy. Results show that this method can practically maintain throughput under overload by dynamically changing the maximum window size. © 2013 IEEE.
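A minimal sketch of fuzzy window regulation follows: two inputs (queue occupancy and CPU load, both 0..1) drive the change applied to the sender window. The triangular membership functions, the rule weights, and the output deltas are illustrative assumptions, not the paper's actual rule base.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def window_delta(queue, cpu):
    load = max(queue, cpu)                    # fuzzy OR of the two inputs
    low    = tri(load, -0.4, 0.0, 0.5)        # "load is low"    -> grow
    medium = tri(load,  0.2, 0.5, 0.8)        # "load is medium" -> hold
    high   = tri(load,  0.5, 1.0, 1.4)        # "load is high"   -> shrink
    # Weighted-average defuzzification over assumed deltas +4 / 0 / -4.
    num = 4 * low + 0 * medium - 4 * high
    den = (low + medium + high) or 1.0
    return num / den

window = 32.0
for q, c in [(0.1, 0.2), (0.6, 0.5), (0.9, 0.95)]:
    window = max(1.0, window + window_delta(q, c))
    print(f"queue={q:.1f} cpu={c:.2f} -> window={window:.1f}")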
The extent and diversity of services provided by IP networks have led various technologies to move toward integrating different types of access network and converging on the next-generation network. Owing to features such as its text-based format, end-to-end connectivity, independence from the type of transmitted data, and support for various forms of transmission, SIP is an appropriate choice of signaling protocol for connecting two IP network users. These advantages have made SIP the signaling protocol of IMS, the proposed signaling platform for next-generation networks. Despite all these advantages, SIP lacks an appropriate mechanism for addressing overload. In this paper, a window-based overload control mechanism that requires no explicit feedback is developed, implemented on an Asterisk open-source proxy, and evaluated. The implementation results show that this method can practically maintain throughput under overload. To the best of our knowledge, this is the only overload control method implemented on a real platform without using explicit feedback. The results show that the server under load maintains its throughput at maximum capacity. © 2013 IEEE.
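Without explicit feedback, the sender must infer overload from what it can observe locally. The sketch below is one such scheme of my own choosing, an AIMD-style loop keyed on response timeouts, offered only as an illustration of the implicit-feedback idea; the paper's actual update rule and constants are not reproduced here.

window, in_flight = 16.0, 0

def can_send():
    return in_flight < int(window)

def on_sent():
    global in_flight
    in_flight += 1

def on_response():
    """A timely reply: additive increase (~ +1 per window's worth of acks)."""
    global window, in_flight
    in_flight -= 1
    window += 1.0 / window

def on_timeout():
    """No reply: treat the timeout as an implicit overload signal."""
    global window, in_flight
    in_flight -= 1
    window = max(1.0, window / 2)     # multiplicative decrease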
Scientific Research and Essays (19922248)6(11)pp. 2366-2371
With the increasing growth of population and the development of industry, huge sums are spent to increase electricity production and to install new units. In this paper, a new method is presented that reduces these costs and prevents losses of national capital and fuel resources. The method finds the optimal location of a thyristor-controlled series capacitor (TCSC) in the transmission network using a genetic algorithm. Given the use of fossil fuels in power plants, the method is particularly relevant in countries with high electricity costs. Its effectiveness is illustrated on a 10-bus network as a case study. © 2011 Academic Journals.
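A minimal sketch of GA-based TCSC placement follows: each chromosome encodes a candidate line and compensation level. The fitness function here is a stand-in for a real power-flow loss calculation, and every constant (population size, mutation rate, line count) is an illustrative assumption, not a value from the paper.

import random

N_LINES = 20                                   # assumed number of lines

def fitness(line, comp):
    # Placeholder: a real implementation would run a load flow and return,
    # e.g., negative losses or negative cost for this TCSC placement.
    return -((line - 7) ** 2 * 0.01 + (comp - 0.4) ** 2)

def evolve(pop_size=30, gens=50):
    pop = [(random.randrange(N_LINES), random.uniform(0.1, 0.7))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda g: fitness(*g), reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            (l1, c1), (l2, c2) = random.sample(parents, 2)
            line, comp = random.choice((l1, l2)), (c1 + c2) / 2   # crossover
            if random.random() < 0.1:                             # mutation
                line = random.randrange(N_LINES)
            children.append((line, comp))
        pop = parents + children
    return max(pop, key=lambda g: fitness(*g))

print("best (line, compensation level):", evolve())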