Sustainable Computing: Informatics and Systems (22105379) 46
To reduce latency and save energy, cloudlet computing enables tasks to be offloaded from user equipment to Cloudlet Servers (CSs). Determining the optimal number of CSs and the appropriate locations for their placement are two major challenges in building an efficient computing platform. Placing a CS at the location closest to the user can improve QoS. Additionally, providing additional CSs to cover each user ensures that the user's needs are met even if the designated server is unable to provide services. However, to minimize energy consumption and costs, service providers tend to use a minimum number of CSs. Since the coverage zones of different CSs may overlap, fewer additional servers need to be deployed in such areas. This paper examines the problem of CS placement in a Wireless Metropolitan Area Network (WMAN) and introduces a three-objective model that aims to optimize transmission distance, coverage with overlap control, and energy consumption. To obtain an appropriate Pareto front, the performance of the NSGA-II, binary MOPSO, and binary MOGWO algorithms is examined through four different scenarios using the Shanghai Telecom dataset. Comparing the results of the Hyper-Volume (HV) indicator reveals that the NSGA-II algorithm has higher values in all studied scenarios; a higher HV value means that the solution set is closer to an optimal Pareto set. In the best and worst cases, the HV values for NSGA-II were 0.2275 and 0.1883, respectively. © 2025 Elsevier Inc.
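For illustration only: the HV comparison above can in principle be reproduced with a simple Monte Carlo estimate of the dominated hypervolume for a three-objective minimization problem. The sketch below is not the routine used in the paper; the reference point, objective normalization, and example front are assumptions.

```python
import numpy as np

def hypervolume_mc(front, ref_point, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the hypervolume dominated by a (minimization)
    Pareto front and bounded by ref_point; larger is better."""
    front = np.asarray(front, dtype=float)
    ref = np.asarray(ref_point, dtype=float)
    rng = np.random.default_rng(seed)
    lo = front.min(axis=0)                       # lower corner of the sampling box
    samples = rng.uniform(lo, ref, size=(n_samples, front.shape[1]))
    # a sample is dominated if some front point is <= it in every objective
    dominated = (front[None, :, :] <= samples[:, None, :]).all(axis=2).any(axis=1)
    return dominated.mean() * np.prod(ref - lo)

# hypothetical normalized front with objectives (distance, 1 - coverage, energy)
print(hypervolume_mc([[0.2, 0.5, 0.3], [0.4, 0.1, 0.6], [0.7, 0.3, 0.2]],
                     ref_point=[1.0, 1.0, 1.0]))
```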
Computing (14365057) 107(6)
Consolidating the Internet of Things (IoT) and Software-Defined Networks (SDN) has been a great concern among researchers. In IoT, the Wireless Sensor Network (WSN) is an important communication component. Due to the large volume of data generated in IoT and the limitations of WSNs, load distribution in these networks is a serious challenge. The Base Station (BS) in these networks may experience long delays due to heavy processing. Also, application data in these networks must be delivered to users with the lowest possible delay. Therefore, in addition to load distribution, reducing response time is another factor that must be considered, and load distribution among BSs becomes crucial. Considering several BSs and their relationships with the network nodes, and preventing them from overloading through load distribution, may solve the problem. This can be implemented by using the common nodes that belong to several BSs. That is, to reduce the load of an overloaded BS, a common node in the cluster corresponding to that BS and to other BSs is selected to forward part of its load to the other BSs. The common node is selected through the load-balancing node-finder algorithm. Load transfer is done through the forwarding node, which is specified in the proposed routing process. Colored Petri Nets (CPNs) are applied to implement the proposed method. Here, queue length, residual energy, nodes, BSs, and delay time are simulated in three scenarios. The results show that most of the nodes are involved in the proposed algorithm to implement load balancing between the nodes and BSs. The results also show that the proposed SDN-based algorithm improves residual energy by 18%, reduces queue length by 9.5%, and reduces delay by 35%. © The Author(s), under exclusive licence to Springer-Verlag GmbH Austria, part of Springer Nature 2025.
Physical Communication (18744907) 66
With the growing demand for Internet of Things (IoT) applications, supporting massive access to the medium is a necessary requirement in 5G cellular networks. Accommodating the stringent requirements of Ultra-Reliable Low Latency Communications (URLLC) is a challenge in massive access to the medium. The random-access procedure is one of the most challenging issues in massive IoT (mIoT) networks with URLLC requirements, as a high number of channel access requests results in high channel access latency or low reliability. In previous works, some solutions have been proposed to address this challenge, including grant-free access, priority-based access, and grouping nodes to restrict random-access requests to group leaders. In particular, a previous grouping-based idea clusters devices with similar reactions to an event into a group, which is not always applicable for various IoT applications. This research proposes a novel device grouping scheme to improve the random-access procedure of mIoT devices with URLLC requirements. In the proposed method, device grouping is accomplished based on the analysis of devices' traffic. A similarity index is used to obtain the similarity of time series built from the historical traffic patterns of devices, and then an innovative algorithm is proposed to group the devices based on this index. Grouping devices with similar traffic patterns provides access to the medium with less complexity and more efficiency for a large number of devices. The performance of the proposed approach is evaluated using simulations and a real traffic dataset. The evaluation results show higher suitability of the proposed method compared to the baseline mechanism of LTE and the previous method in terms of access failures (which affect delay and reliability) and energy consumption. For a usual setting, the channel access failure decreases by about 94% compared to the previous method and by 0.88% compared to LTE. The energy consumption also improves by about 1.8% compared to LTE and by 1.2% compared to the previous method. Moreover, the results show that the proposed method is appropriate for IoT applications with regular traffic patterns. © 2024 Elsevier B.V.
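The abstract does not spell out the similarity index or the grouping algorithm; as a minimal sketch, assuming Pearson correlation over per-device traffic time series and a hypothetical greedy threshold rule, device grouping could look like this:

```python
import numpy as np

def pearson_similarity(a, b):
    """Pearson correlation between two traffic time series."""
    return float(np.corrcoef(np.asarray(a, float), np.asarray(b, float))[0, 1])

def group_devices(traffic, threshold=0.8):
    """Greedily assign each device to the first group whose leader (first member)
    has a sufficiently similar traffic pattern; otherwise open a new group.
    `traffic` maps device id -> historical traffic series."""
    groups = []
    for dev, series in traffic.items():
        for group in groups:
            if pearson_similarity(series, traffic[group[0]]) >= threshold:
                group.append(dev)
                break
        else:
            groups.append([dev])   # no similar group found: dev becomes a leader
    return groups
```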
Cluster Computing (13867857) 27(4)pp. 4537-4550
Connected devices in IoT continuously generate monitoring and measurement data to be delivered to application servers or end-users. Transmitting IoT data through networks can lead to congestion and long delays. NDN is an emerging network paradigm based on name-identified data and is known to be an appropriate architecture for supporting IoT networks. In-network caching is one of the main advantages of NDN and a major issue discussed in many studies. One of the significant challenges with some IoT data is its transient nature, which makes the caching mechanism different. IoT data such as ambient monitoring in urban areas and tracking of current traffic conditions are often transient, which means these data have a limited lifetime and then expire. In the proposed approach, data placement is decided based on the data lifetime and node position. Data lifetime is an essential property that must be involved in caching methods; consequently, the data are classified based on their lifetime, and specific nodes are selected for caching according to the defined classes and the nodes' positions in the topology. Based on the proposed scheme, the nodes with the highest outgoing interface count or the edge nodes are selected for data caching. By considering both data lifetime and node location, we determine a suitable caching location for each data class separately. In addition, we exclude data that has a short lifetime and is not suitable for caching from the caching mechanism of NDN nodes. By considering both the cache and data placements for transient data, a more comprehensive view is obtained for improving caching performance. This issue, which has not been addressed in the available studies on IoT data caching, can lead to appropriate use of the available storage and reduced redundancy. Eventually, the simulation results obtained with the ndnSIM simulator show that the proposed method can improve cache mechanism efficiency in terms of both delay and hit ratio. Comparison of the proposed method with CE2 and Btw indicates that it can reduce average delay and increase the cache hit ratio. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023.
IEEE Internet of Things Journal (23274662) 11(18)pp. 29819-29837
In a mobile edge computing (MEC) environment, latency and energy consumption can be reduced by offloading tasks from mobile devices to edge servers (ESs) instead of remote cloud servers. The placement of ESs closest to end users can improve Quality of Experience and Quality of Service. Additionally, the deployment of additional servers to cover each user will ensure that user requirements are met even if the designated ES is unable to provide service. Therefore, the use of additional ESs can improve network robustness. However, edge service providers tend to cover all areas of a city with a minimum number of servers to save costs. Since the coverage zones of ESs can overlap, fewer additional ESs need to be deployed to support overlapping areas, resulting in cost savings. This article examines the problem of ES placement and proposes a new model to simultaneously optimize network latency, coverage with overlap control, and operational expenditures (OPEXs) of the MEC. In addition, a binary version of the hybrid NSGA II-MOPSO algorithm called BHNM is proposed to obtain the approximated Pareto front. Results based on the real-world data set from Shanghai Telecom show that the BHNM algorithm outperforms the binary MOPSO with turbulence (BMOPSO-T) and NSGA-II algorithms in terms of Pareto front diversity. © 2024 IEEE.
IEEE Systems Journal (19379234) 17(1)pp. 1407-1418
In this article, the problem of radio resource management is addressed by adjusting the tradeoff between two key quality of experience (QoE) influencing factors, namely transmission rate and service price. To this end, a piecewise utility function is employed to assess user QoE in three different quality classes. It comprises the combination of two utility functions corresponding to the transmission rate and the service price. A low-complexity fuzzy-based approach is then introduced to reallocate additional resources to dissatisfied users in order to increase the number of satisfied users. The simulation results indicate the efficiency and applicability of the novel approach in terms of increasing service provider revenue while providing approximately the same overall QoE and power consumption. The results also show that the proposed approach achieves an improvement of at least 15% in total service provider revenue on average compared to other methods. © 2007-2012 IEEE.
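As a rough sketch of the idea of a piecewise utility combining rate and price (the actual utility functions, class boundaries, and the fuzzy reallocation logic are defined in the paper; every value below is hypothetical):

```python
def rate_utility(rate_mbps, r_min=1.0, r_max=20.0):
    """Normalized satisfaction with the transmission rate, in [0, 1]."""
    if rate_mbps <= r_min:
        return 0.0
    if rate_mbps >= r_max:
        return 1.0
    return (rate_mbps - r_min) / (r_max - r_min)

def price_utility(price, p_max=10.0):
    """Normalized satisfaction with the service price: cheaper is better."""
    return max(0.0, 1.0 - price / p_max)

def qoe_class(rate_mbps, price, w_rate=0.6, w_price=0.4):
    """Combine the two utilities and map the result to three quality classes."""
    u = w_rate * rate_utility(rate_mbps) + w_price * price_utility(price)
    if u >= 0.7:
        return "satisfied", u
    if u >= 0.4:
        return "neutral", u
    return "dissatisfied", u
```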
Future Generation Computer Systems (0167739X) 148pp. 79-92
With the advent of modern technologies and applications, NDN networks have become known as a reliable approach to meeting future needs. A prominent feature of these networks is the ability to cache content within network nodes. Separating the content from its original location reduces content retrieval latency, network traffic, and overhead within applications requiring considerable data volumes. This approach also improves the user experience. In this article, first, the idea of intra-network caching with an approach based on the cooperation of adjacent nodes along the path is proposed to form a local virtual cluster with distributed cache space. Second, popular cached data are shared by exchanging minimal notification messages between the nodes to refine content storage decisions. Two new modules, called RIT and NCT, are defined in each node, offering the possibility of utilizing the resources of nodes adjacent to the delivery route. In the next stage, a novel approach is proposed to determine the content caching threshold in each node by considering the parameters affecting the content as well as the network topology and the real-time status of the nodes. Finally, the decision mechanism regarding the storage of content in each node along the path is presented using an adaptive approach based on the set thresholds. In this manner, undesirable redundancy of content that wastes network resources is removed, and more caching space in each area of the network is provided. The results of simulations using ndnSIM reveal that the performance of the proposed algorithm (cache hit ratio, average data delivery latency, and path stretch) is better than other existing benchmark strategies. Evaluations with other parameter settings confirm the effectiveness of the presented approach. © 2023 Elsevier B.V.
PLoS ONE (19326203) 18(5 May)
Fog computing (FC) brings the Cloud close to users and improves quality of service and reduces service delay. In this article, the convergence of FC and Software-Defined Networking (SDN) is proposed to implement complicated resource management mechanisms. SDN has become a practical standard for FC systems. Priority and differential flow-space allocation are applied to arrange this framework for heterogeneous requests in Machine-Type Communications. Delay-sensitive flows are assigned to a configuration of priority queues on each Fog. Due to limited resources in the Fog, a promising solution is offloading flows to other Fogs through a decision-based SDN controller. The flow-based Fog nodes are modeled according to queueing theory, where polling priority algorithms are applied to service the flows and reduce the starvation problem in a multi-queueing model. It is observed that the percentage of delay-sensitive processed flows, the network consumption, and the average service time in the proposed mechanism are improved by about 80%, 65%, and 60%, respectively, compared to traditional Cloud computing. Therefore, delay reduction based on the types of flows and task offloading is proposed. Copyright: © 2023 Arefian et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
IEEE Systems Journal (19379234) 17(4)pp. 5842-5853
For video caching and low-delay video delivery to end-users, multiaccess edge computing (MEC) servers are commonly grouped into clusters to efficiently exploit the limited storage resources of the MEC servers. In this article, we first introduce a methodology for analyzing the video delivery delay using queuing theory. Our analysis shows that the video delivery delay is mainly affected by the arrival rate of video requests. Furthermore, we show a tradeoff between the ratio of videos found in the MEC servers within the same cluster and the transmission delay of video contents. To reduce the video delivery delay, we propose a dynamic MEC server clustering (DyMECC) algorithm that determines the cluster size at each time interval by solving analytically derived equations considering the actual arrival rate of video requests. It also acts as a congestion avoidance mechanism for the communication interfaces among the MEC servers. Via simulations, we show that DyMECC reduces the video delivery delay by about 15% in light load conditions and by more than five times in heavy load conditions compared to state-of-the-art works, while also reducing the load of the communication interfaces among the MEC servers by more than 45%. © 2007-2012 IEEE.
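The analytically derived equations are not reproduced in the abstract; the underlying intuition that delivery delay is driven by the request arrival rate can be sketched with a plain M/M/1 approximation, choosing the smallest cluster size that keeps the per-server sojourn time below a target (the service rate, the target, and the even split of requests across servers are assumptions):

```python
def mm1_delay(arrival_rate, service_rate):
    """Expected sojourn time W = 1 / (mu - lambda) of an M/M/1 queue."""
    if arrival_rate >= service_rate:
        return float("inf")            # unstable queue
    return 1.0 / (service_rate - arrival_rate)

def choose_cluster_size(total_request_rate, service_rate, delay_target, max_size=32):
    """Smallest number of MEC servers keeping the per-server delay below the
    target, assuming video requests split evenly across the cluster."""
    for n in range(1, max_size + 1):
        if mm1_delay(total_request_rate / n, service_rate) <= delay_target:
            return n
    return max_size
```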
Cluster Computing (13867857) 26(5)pp. 3237-3262
Multi-Access Edge Computing (MEC) is known as a promising communication paradigm that enables IoT and 5G scenarios by using edge servers located in the proximity of end users. As an integral part of MEC, edge servers provide virtualized resources and host different MEC applications. Therefore, user equipment and IoT devices can offload tasks to edge servers instead of remote cloud data centers. A proper edge server placement strategy can significantly improve the performance of mobile applications. Determining the optimum number of edge servers and their placement is a problem called edge server placement; this optimization problem is NP-hard. In recent years, the research community has made great efforts to solve it. To achieve Quality of Service (QoS), various metrics are considered in different research papers. However, a comprehensive overview of the different aspects of the Edge Server Placement Problem (ESPP) is still missing. This paper first highlights the importance of edge servers and outlines their applications. Then, it provides a comprehensive summary and taxonomies based on the objectives, applications, datasets, frameworks, and strategies of the different research on the placement of edge servers. This paper also summarizes the ESPP and other optimization problems raised as joint optimization problems. Finally, considering the capabilities and features of the MEC environment, some open issues are presented. © 2023, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
Ad Hoc Networks (15708705) 149
User grouping has a significant impact on the performance of non-orthogonal multiple access (NOMA) systems. The present work focuses on downlink NOMA user grouping leveraging the type-2 fuzzy set (T2FS). The main drawback of conventional user grouping methods is low efficiency for middle users, which degrades overall system performance. To overcome this problem, a two-step user grouping process is proposed that includes identifying group candidates and selecting the most qualified one. The first step is handled by introducing a group membership competency criterion based on T2FS modeling, owing to its capability to handle extra uncertainty in real-world phenomena. The second step involves adding the most qualified candidate to the desired group, given the additional power imposed on the group. The key contribution of the paper is twofold: (i) it affords a multi-stage structure to evaluate the competency of users to join a particular group relying on T2FS modeling, allowing groups of different sizes to be formed depending on the network channel status and quality of experience (QoE) requirements; and (ii) the additional power imposed on the group is exploited as a measure to select the final group. In this regard, the analysis of the interference that each user brings to the others in the same group is taken into account. The performance of the proposed scheme is evaluated by simulations and compared with other methods. The obtained results indicate the efficiency of the proposed approach in terms of total power consumption and symbol error rate. © 2023 Elsevier B.V.
Fog computing (FC) is an emerging paradigm developed to increase the speed of data processing in Internet of Things (IoT) systems. In such an environment, optimal Task Scheduling (TSch) of IoT device requests can improve the performance of IoT systems. This chapter introduces a task request scheduling method in an IoT-Fog System (IoTFS) based on Software Defined Networks (SDN) for two reasons. The first reason is the fully flexible infrastructure virtualization that uses the IoT-Fog network's TSch capabilities to work on an active platform. The second reason is to reduce latency for IoT devices. An SDN-IoT-Fog computing model is proposed, which reduces network latency and traffic overhead by using a centralized IoTFS controller and coordinating network elements in the SDN controller layer. Then, a hybrid Meta-Heuristic (MH) algorithm combining the Aquila Optimizer (AO) and the Whale Optimization Algorithm (WOA), called AWOA, is proposed to schedule IoT task requests and allocate FC resources to them in order to reduce the task completion time of IoT devices. The purpose of the proposed SDN-based AWOA method is to optimize task Execution Time (ET), Makespan Time (MT), and Throughput Time (TT), which are investigated in this chapter as Quality of Service (QoS) metrics. Experiments show that the proposed SDN-based AWOA outperforms the compared algorithms on different Evaluation Metrics (EMs). © 2024 Elsevier Inc. All rights reserved.
IEEE Access (21693536) 10pp. 37457-37476
Wireless sensor networks (WSNs) are important communication components of the Internet of Things (IoT). With the development of IoT and the increasing number of connected devices, network structure management and maintenance face the serious challenge of energy consumption. By balancing the network load, energy consumption can be improved effectively. In the conventional WSN architecture, the two prerequisites of a load-balancing mechanism, flexibility and adaptability, are difficult to achieve. Software-defined networking (SDN) is a novel network architecture that can promote flexibility and adaptability using a centralized controller. In this paper, a novel SDN architecture aimed at balancing load distribution and prolonging network lifetime is proposed, which consists of different components such as topology, BS and controller discovery, link, and virtual routing. Accordingly, a new mechanism is proposed for load-balancing routing through SDN and virtualization. Through direct monitoring of the link load information and the network running status, the employed OpenFlow protocol can determine load-balancing routing for every flow in different IoT applications. The flows of different resource applications can be directed to a base station (BS) via various routes. This implementation reduces the exchange of network status and other relevant information. Virtual routing aims to weigh forwarding nodes and select the best node for each IoT application. The simulation results show that the proposed algorithm distributes load over the network and balances network energy consumption, and also prolongs network lifetime in comparison to the LEACH, improved LEACH, and LEACH-C algorithms. © 2013 IEEE.
Wireless Personal Communications (1572834X) 126(2)pp. 1895-1914
Non-orthogonal multiple access (NOMA) is one of the promising radio access techniques for improving resource allocation in the fifth generation (5G) of cellular networks. Compared to orthogonal multiple access techniques, NOMA offers extra benefits, including greater spectrum efficiency, which is provided by multiplexing users in the transmission power domain while using the same spectrum resources non-orthogonally. Even though NOMA uses Successive Interference Cancellation to cancel the interference among users, user grouping has been shown to have a substantial impact on its performance. This performance improvement can appear in different parameters such as system capacity, data rate, or power consumption. In this paper, we propose a novel user grouping scheme for sum-rate maximization which increases the sum rate by approximately 12-25% in comparison with random user grouping and two other recently published works. In addition to being matrix-based and having polynomial time complexity, the proposed method is also able to cope with users experiencing different channel gains and powers in different sub-bands. Moreover, the proposed scheme is scalable and can be used for any number of users and sub-bands. © 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
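The grouping scheme itself is matrix-based and not reproducible from the abstract; the quantity it maximizes can be sketched as the downlink sum rate of one NOMA group under ideal SIC (the bandwidth, noise power, and example gains/powers below are assumptions):

```python
import math

def noma_group_sum_rate(gains, powers, bandwidth_hz=1e6, noise_w=1e-12):
    """Downlink NOMA sum rate of one group under ideal SIC. After SIC, user i
    only sees interference from users with stronger channels, whose signals
    carry less power and cannot be cancelled at user i."""
    total = 0.0
    for g_i, p_i in zip(gains, powers):
        interference = sum(p_j * g_i for g_j, p_j in zip(gains, powers) if g_j > g_i)
        sinr = p_i * g_i / (interference + noise_w)
        total += bandwidth_hz * math.log2(1 + sinr)
    return total

# hypothetical two-user group: the weak user receives 70% of the power
print(noma_group_sum_rate(gains=[1e-7, 4e-7], powers=[0.7, 0.3]))
```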
Today, the Internet of Things (IoT) is used to collect data via sensors and to store and process them. As IoT devices have limited processing and computing power, integration of the cloud and IoT is used. Cloud computing processes large data at high speed, but sending this large volume of data requires a lot of bandwidth. Therefore, fog computing, which is close to IoT devices, is used; in this case, the delay is reduced. Both cloud and fog computing are used to increase the performance of IoT. Job scheduling of IoT workflow requests based on cloud-fog computing plays a key role in responding to these requests. Job scheduling to reduce makespan is very important in real-time systems. Also, one way to improve system performance is to reduce energy consumption. In this article, a three-objective Harris Hawks Optimizer (HHO) scheduling algorithm is proposed to reduce makespan and energy consumption and to increase reliability. Also, dynamic voltage and frequency scaling (DVFS), which reduces the processor frequency, is used to reduce energy consumption. HHO is then compared with other algorithms such as the Whale Optimization Algorithm (WOA), the Firefly Algorithm (FA), and Particle Swarm Optimization (PSO), and the proposed algorithm shows better performance on the experimental data. The proposed method achieves an average reliability of 83%, energy consumption of 14.95 kJ, and makespan of 272.5 seconds. © 2022 IEEE.
Journal of Supercomputing (15730484) 78(4)pp. 5779-5805
In recent years, a new paradigm of Content-Centric Networks has emerged, which employs the prefix content name for addressing in the context of Named Data Networking (NDN). In conventional networks, subscribers send interest packets, and publishers generate related content and publish it to the requesters. In NDN, Caching Routers (CRs), as well as publishers, can also propagate data packets toward the requesters. The NDN network flow has two opposite sides: (1) the interest packet flow, known as interest forwarding, on the upstream side, and (2) the content flow, known as data publishing, on the downstream side. High traffic volume can be created by the increasing number of requested packets and associated data, leading to bottlenecks in some parts of the network. Therefore, congestion control and avoidance are significant issues in NDN. A considerable number of congestion control methods employ forwarding interest rate adjustment on the forwarding side. Nevertheless, congestion can also be prevented on the data publishing side using an efficient cache placement method. Congestion avoidance is a process for controlling congestion and balancing traffic loads to make the network effective. In this paper, a Dynamic Cache Placement (DCP) method is proposed to provide congestion avoidance by dynamically relocating the content of the CRs according to the traffic volume pattern and the link capacity. The DCP method distributes the popular data to the network regions with less traffic load and more accessible routers to balance the congested routers' traffic load. The DCP is implemented in the ndnSIM network simulator, and its performance is compared to the conventional method. Simulation results show that DCP is a fair and robust cache placement design, which successfully avoids congestion in flows with highly varying traffic demand. The techniques employed in the DCP method improve the network performance under dynamic network circumstances. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
Computer Networks (13891286) 213
In-network caching is one of the most prominent features of Information-Centric Networks (ICN). This feature, designed to reduce content delivery time, can improve data availability, especially during the sleeping mode of the original data producer. However, the fact that the caching mechanism is performed simultaneously with data forwarding, together with the limited processing power and memory capacity of routers, has posed challenges to fully profiting from this feature. The synchronization of the caching mechanism with data forwarding limits the speed of decision-making in content placement policies. These challenges, as well as the limitations of ICN routers, have not been taken into consideration by the existing strategies for content placement of IoT data. In this paper, the restrictions for content placement policies in ICN-based IoT are outlined, and a data prioritization-based approach that prevents ICN nodes from filling their Content Store (CS) with inappropriate and inferior data is proposed to improve cache efficiency. Besides, fog computing is used as a middle layer between IoT devices and ICN networks to resolve the speed limits of decision-making and improve cache performance. In the fog layer, prioritization is performed based on the popularity and freshness of each data object, and thus low-priority data are removed from the caching mechanism of ICN nodes. As a result, the total number of cached data objects is reduced. Freshness is an essential property of much IoT data that significantly affects caching performance, which is why data are prioritized based on popularity and freshness. Eventually, the simulation results obtained with the ndnSIM simulator show that decreasing the number of data objects can improve cache mechanism efficiency in terms of both delay and hit ratio. © 2022 Elsevier B.V.
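The concrete prioritization rule lives in the fog layer and is not given in the abstract; a minimal sketch, assuming a weighted score of normalized popularity and remaining freshness with a fixed cut-off below which data never enters a Content Store:

```python
def cache_priority(request_count, max_requests, remaining_lifetime, max_lifetime):
    """Score in [0, 1] combining popularity and freshness of an IoT data object
    (weights are hypothetical)."""
    popularity = request_count / max_requests if max_requests else 0.0
    freshness = remaining_lifetime / max_lifetime if max_lifetime else 0.0
    return 0.6 * popularity + 0.4 * freshness

def should_cache(request_count, max_requests, remaining_lifetime, max_lifetime,
                 threshold=0.5):
    """Only data objects above the priority threshold are offered to ICN nodes."""
    return cache_priority(request_count, max_requests,
                          remaining_lifetime, max_lifetime) >= threshold
```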
IEEE Internet of Things Journal (23274662) 9(2)pp. 1402-1413
The Internet of Things (IoT) has significantly increased the number of terminals and the volume of network traffic. It is necessary to exploit the full capacity of the network and optimize content transfer. Despite the powerful processing and storage capabilities of base stations in 5G technology, edge caching effectively reduces content access time and duplicate traffic, thus optimizing content transfer with more storage resources. The limited memory resources and the dynamic nature of the requested content have necessitated the use of smart caching methods. Sending the required data to central servers can cause additional network overload and hinder learning due to data privacy. For this reason, a hierarchical federated deep reinforcement learning (HFDRL) method is proposed in this article that uses FDRL to predict users' future requests and to determine the appropriate content replacement strategy. In addition, for learning and collaborative caching, the way partner devices are determined plays a key role in edge caching performance. HFDRL categorizes edge devices hierarchically, thus avoiding the disadvantages of very small or very large clusters while taking advantage of both. By minimizing content storage redundancy and latency, HFDRL improves the performance of each local base station network individually as well as the global network. Simulation results of the proposed method show that the hit rate and delay are improved, respectively, by an average of 55% and 67% compared to traditional methods, 40% and 56% compared to the collaborative method, and 14% and 15% compared to one-level FDRL without hierarchical edge device clustering. © 2014 IEEE.
PLoS ONE (19326203) 17(2 February)
The demand for long-term continuous care has led healthcare experts to focus on development challenges. On-chip energy consumption, as a key challenge, can be addressed by data reduction techniques. In this paper, the pseudo-periodic nature of ElectroCardioGram (ECG) signals is used to completely remove redundancy from frames. Compressing aligned QRS complexes by Compressed Sensing (CS) results in highly redundant measurement vectors. By removing this redundancy, a large cluster of near-zero samples is obtained. The efficiency of the proposed algorithm is assessed using the standard MIT-BIH database. The results indicate that by aligning ECG frames, the proposed technique can achieve superior reconstruction quality compared to state-of-the-art techniques for all compression ratios. This study shows that by aligning ECG frames with a 0.05% unaligned frame rate (R-peak detection error), more compression can be gained for PRD > 5% when a 5-bit non-uniform quantizer is used. Furthermore, analysis of the power consumption of the proposed technique indicates that very good recovery performance can be gained by consuming only 4.9 μW more energy per frame compared to traditional CS. Copyright: © 2022 Nasimi et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Computer Networks (13891286) 196
In the present article, we propose a virtual machine placement (VMP) algorithm for reducing power consumption in heterogeneous cloud data centers. We propose a novel model for estimating the power consumption of a data center's network. The proposed model is employed to estimate the power consumption of a Fat-Tree network. It calculates the traffic of each network layer and uses the results to estimate the average power consumption of each switch in the network, which is then used for network power calculation. Further, we employ the chemical reaction optimization (CRO) algorithm as a meta-heuristic to obtain a power-efficient mapping of virtual machines (VMs) to physical machines (PMs). Moreover, two kinds of solution encoding schemes, namely permutation-based and grouping-based encoding schemes, are utilized for representing individuals in CRO. For each encoding scheme, we designed the operators required by CRO for manipulating the molecules in search of better solution candidates. Additionally, we modeled VMs with east-west and north-south communications, and PMs with constrained CPU, memory, and bandwidth capacity. Our network power model is integrated into the CRO algorithms to enable the estimation of both PM and network power consumption. We compared our proposed methods with a number of similar methods. The evaluation results indicate that the proposed methods perform well and that the CRO algorithm with the grouping-based encoding outperforms the rest of the methods in terms of power consumption. The evaluation results also show the significance of network power consumption. © 2021 Elsevier B.V.
Multimedia Tools and Applications (13807501) 80(10)pp. 15745-15764
User-Generated Content (UGC) is turning into the predominant type of internet traffic. Content popularity prediction plays a pivotal role in managing this large-scale traffic. As a result, popularity prediction is increasingly becoming an important area of research in computer networking. Generally, popularity prediction methods are classified into two groups, namely feature-driven and early-stage. While feature-driven methods predict content popularity before publication, early-stage methods monitor early content popularity to forecast the future. Many papers have shown that early-stage popularity prediction performs better than feature-driven methods. In this paper, we improve the performance of early-stage popularity prediction by first classifying the data into several clusters using k-means clustering with the Pearson correlation distance, and then training a Deep-Belief Network (DBN) for each cluster. We evaluate our method using a dataset of YouTube videos and show that using a generative model such as a DBN for time-series prediction significantly improves the performance. Numerical results indicate that our proposed method outperforms other state-of-the-art methods by reducing the Mean Absolute Percentage Error (MAPE) and the mean Relative Square Error (mRSE) by up to 47.86% and 25.18%, respectively. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC part of Springer Nature.
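One common way to realize k-means under the Pearson correlation distance is to z-normalize each early-popularity series, since the squared Euclidean distance between zero-mean, unit-norm series equals 2 * (1 - correlation); the sketch below shows only this clustering step (the per-cluster DBN training is not shown, and scikit-learn is an assumed tool, not necessarily the one used in the paper):

```python
import numpy as np
from sklearn.cluster import KMeans

def pearson_kmeans(series, k, seed=0):
    """Cluster early-popularity time series (rows) with k-means under the
    Pearson correlation distance via z-normalization."""
    X = np.asarray(series, dtype=float)
    X = X - X.mean(axis=1, keepdims=True)              # zero mean per series
    X = X / np.linalg.norm(X, axis=1, keepdims=True)   # unit norm per series
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X).labels_
```

A separate predictor (a DBN in the paper) would then be trained on each resulting cluster.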
With the advent of technology, the Internet of Things (IoT) network has been confronted with large volumes of data and requests, including critical requests. Cloud usage is not always cost-effective due to the distance from the Cloud data centre; one of the best solutions to this problem is to use the Fog Computing auxiliary layer. Fog nodes, however, also face processing limitations due to the large volume of requests, and the inability of Fog nodes in this layer to cooperate compromises the scalability of Fog Computing. In this research, by predicting the number of critical requests, provisioning the required resources in Fog nodes, and making Fog nodes interoperable with each other through a Software-Defined Network (SDN), we try to use the resources in the Fog layer to serve unforeseen requests as much as possible. The proposal improves service delay, Fog-layer resource utilization, and bandwidth consumption by 2%, 6%, and 13%, respectively, in comparison with the other two methods. © 2021 IEEE.
Computer Networks (13891286) 189
As a dramatic advancement in mobile communications, 5G has brought together several state-of-the-art technologies, including Cognitive Radio (CR) and Quality of Experience (QoE). While CR is intended to overcome frequency scarcity, how to maintain QoE for all connected users is one of the paramount issues in such an integration, especially for multimedia communications. As a key technology for 5G, Non-Orthogonal Multiple Access (NOMA) is combined with Orthogonal Frequency Division Multiplexing (OFDM) to improve spectral efficiency. In this paper, the QoE requirements for typical applications in a CR platform are characterized, based on which a method for user grouping and power management in an OFDM-NOMA system is proposed. The performance of the proposed method is analyzed and evaluated by simulating a typical network. The evaluations show a noticeable improvement in transmit power reduction while the users' requested perception levels are maintained. © 2021 Elsevier B.V.
PLoS ONE (19326203) 15(8 August)
Reducing energy consumption has become a critical issue in today's data centers. Reducing the number of required physical and virtual machines results in energy efficiency. In this paper, to avoid the disadvantages of VM migration, a static VM placement algorithm is proposed which places VMs on hosts in a Worst-Fit-Decreasing (WFD) fashion. To reduce energy consumption further, the effect of the job scheduling policy on the number of VMs needed for maintaining QoS requirements is studied. Each VM is modeled by an M/M/∗ queue under space-shared, time-shared, and hybrid job scheduling policies, and the energy consumption of real-time as well as non-real-time applications is analyzed. Numerical results show that the hybrid policy outperforms the space-shared and time-shared policies in terms of energy consumption as well as Service Level Agreement (SLA) violations. Moreover, our non-migration method outperforms three different algorithms which use VM migration, in terms of reducing both energy consumption and SLA violations. © 2020 Movahedi Nia et al.
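The abstract names the placement heuristic; a minimal single-resource sketch of Worst-Fit-Decreasing placement (CPU demand only, with hypothetical host capacities): sort VMs by decreasing demand and always place the next VM on the host with the most remaining capacity.

```python
def worst_fit_decreasing(vm_demands, host_capacity, n_hosts):
    """Place VMs (CPU demand only) on hosts in Worst-Fit-Decreasing order.
    Returns, per host, the list of VM indices placed on it."""
    hosts = [{"free": host_capacity, "vms": []} for _ in range(n_hosts)]
    order = sorted(range(len(vm_demands)), key=lambda i: vm_demands[i], reverse=True)
    for i in order:
        target = max(hosts, key=lambda h: h["free"])   # worst fit: emptiest host
        if vm_demands[i] > target["free"]:
            raise ValueError(f"VM {i} does not fit on any host")
        target["free"] -= vm_demands[i]
        target["vms"].append(i)
    return [h["vms"] for h in hosts]
```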
Applied Soft Computing (15684946) 89
An energy-efficient and robust face detection and recognition scheme can be useful for many application fields, such as security and surveillance, in multimedia and visual sensor networks (VSNs). A VSN consists of wireless resource-constrained nodes that are equipped with low-energy CMOS cameras for monitoring. On the one hand, captured images are meaningful multimedia data whose processing and transmission impose high energy consumption. On the other hand, a visual sensor (VS) is a battery-powered node with a limited lifetime. This situation leads to a trade-off between detection accuracy and power consumption, which is considered the major challenge for applications using multimedia data in wireless environments such as VSNs. To optimize this trade-off, a novel face detection and recognition scheme based on VSN is proposed in this paper. In this scheme, the detection phase is performed at the VS and the recognition phase is accomplished at the base station (sink). The contributions of this paper are threefold: (1) a fast and energy-aware face-detection algorithm based on omitting non-human blobs and performing feature-based face detection on the remaining human blobs; (2) a novel energy-aware and secure algorithm for extracting a lightweight discriminative vector of the detected face sequence, to be sent to the sink with low transmission cost and a high security level; and (3) an efficient face recognition algorithm performed on the received vectors at the sink. The performance of our proposed scheme is evaluated in terms of energy consumption and detection and recognition accuracy. Experimental results, performed on standard datasets (FERET, Yale, and CDnet) and on personal datasets, demonstrate the superiority of our scheme over recent state-of-the-art methods. © 2019
Biomedical Signal Processing and Control (17468108) 60
Background and objectives: Data compression techniques have been used to reduce power consumption when transmitting electrocardiogram (ECG) signals in wireless body area networks (WBAN). Among these techniques, compressed sensing allows sparse or compressible signals to be encoded with only a small number of measurements. Although ECG signals are not sparse, they can be made sparse in another domain. Numerous sparsifying techniques are available, but when signal quality and energy consumption are important, existing techniques leave room for improvement. Methods: To leverage compressed sensing, we increased the sparsity of an ECG frame by removing the redundancy in a normal frame. In this study, by framing the signal according to the detected QRS complexes (R peaks), consecutive frames of the signal become highly similar. This helps remove redundancy and consequently makes each frame sparse. In order to increase detection performance, frames that symptomize a cardiovascular disease are sent uncompressed. Results: For evaluating and comparing our proposed technique with state-of-the-art techniques, two datasets containing normal and abnormal ECG were used: the MIT-BIH Arrhythmia Database and the MIT-BIH Long Term Database. For performance evaluation, we performed heart rate variability (HRV) analysis as well as energy-based distortion analysis. The proposed method reaches an accuracy of 99.9% for a compression ratio of 25. For the MIT-BIH Long Term Database, the average percentage root-mean-square difference (PRD) is less than 10 for all compression ratios. Conclusion: By removing the redundancy between successive similar frames and exactly transmitting dissimilar frames, the proposed method proves to be appropriate for heart rate variability analysis and abnormality detection. © 2020
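The two figures of merit quoted above can be computed as follows (a straightforward sketch; the paper may normalize PRD slightly differently, e.g. with the signal mean removed):

```python
import numpy as np

def prd(x, x_hat):
    """Percentage root-mean-square difference between an original ECG frame x
    and its reconstruction x_hat."""
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    return 100.0 * np.linalg.norm(x - x_hat) / np.linalg.norm(x)

def compression_ratio(n_samples, n_measurements):
    """CR as the ratio of original samples to transmitted CS measurements."""
    return n_samples / n_measurements
```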
International Journal of Communication Systems (10991131) 33(9)
This paper presents a method to improve the reliability and fault tolerance of distributed software-defined networks, called BIRDSDN (Byzantine-Resilient Improved Reliable Distributed Software-Defined Networks). In BIRDSDN, group communication is implemented among the controllers of all clusters. This method can detect the crash failure and Byzantine failure of any controller and undertakes a fast detection and recovery scheme to select the controllers that take over the orphan switches. BIRDSDN takes into account the reliability of the nodes, considering the failure probability of intracluster and intercluster links, topology, load, and latency. The numerical results show that this approach performs better than other approaches regarding failure detection, recovery, latency, throughput, reliability, and packet loss. © 2020 John Wiley & Sons, Ltd.
International Journal of Network Management (10557148) 30(4)
With the daily increase in the number of cloud users and the volume of submitted workloads, load balancing (LB) over clouds, followed by a reduction in users' response time, is emerging as a vital issue. To successfully address the LB problem, we have optimized workload distribution among virtual machines (VMs). This approach consists of two parts: firstly, a meta-heuristic method based on biogeography-based optimization for workload dispatching among VMs is introduced; secondly, we propose an innovative heuristic algorithm inspired by the Banker's algorithm that runs in the core scheduler to control and avoid VM overloads. The combination of these two (meta-)heuristic algorithms constitutes an LB approach through which we have been able to reduce the makespan to a reasonable time frame. Moreover, an information base repository (IBR) is introduced to maintain the online processing status of physical machines (PMs) and VMs. In our approach, data stored in the IBR are retrieved when needed. This approach is compared with well-known (non-)evolutionary approaches, such as round-robin, max-min, MGGS, and TBSLB-PSO. Experimental results reveal that our proposed approach outperforms its counterparts in a heterogeneous environment when resources are fewer than workloads, and the utilization of physical resources gradually increases. Therefore, optimal workload scheduling, as well as the absence of overload occurrences, results in a reduction in makespan. © 2020 John Wiley & Sons, Ltd.
Peer-to-Peer Networking and Applications (19366450) 12(5)pp. 1466-1475
In recent years, even though there has been a lot of progress in the automotive industry and its offered services, not all of the available computational capacity has yet been used. The vehicle's onboard computation capacity is underutilized; using this power efficiently can significantly reduce energy consumption. Considering the novelty of Vehicular Cloud Computing (VCC), problems such as its real cost and the different kinds of resource allocation in different applications remain an unexplored area. The mentioned problems, together with the global need for energy management, have motivated us to propose an efficient model that considers expenses and response times and appropriately utilizes onboard computation capacity for VCC. The proposed model uses VCC in a manner in which the onboard computational capability is fully used. Since offloading tasks to the vehicular cloud and the remote cloud incurs additional cost, the goal is to execute tasks locally and offload fewer tasks to the vehicular cloud and the remote cloud. The model prioritizes computing resources and uses the onboard computing power, which was often ignored in previous studies. Onboard computing resources provide reasonable response times and make the model economically beneficial. After presenting the model and its structure, the proposed model is simulated with the CloudAnalyst software, and the results are presented and compared with appropriate references. The results show that the proposed model provides a practical view of VCC with its advantages and disadvantages; the statistical data, compared with other scenarios, also show the superiority of the model. © 2018, Springer Science+Business Media, LLC, part of Springer Nature.
Multimedia Tools and Applications (13807501) 78(13)pp. 18921-18941
Energy efficiency in visual surveillance is the most important issue for wireless multimedia sensor networks (WMSNs) due to their energy constraints. This paper addresses the trade-off between detection accuracy and power consumption by presenting an energy-aware scheme for detecting moving targets based on a clustered WMSN. The contributions of this paper are as follows: (1) an adaptive clustering and node activation approach based on the residual energy of detecting nodes and the location of the object in the camera's field of view (FoV); (2) an effective cooperative features-pyramid construction method for collaborative target identification with low communication cost; and (3) an in-network collaboration mechanism for cooperative detection of the target. The performance of this scheme is evaluated using both standard datasets and personally recorded videos in terms of detection accuracy and power consumption. Compared with state-of-the-art methods, our proposed strategy greatly reduces energy consumption and saves more than 65% of the network energy. The detection-accuracy rate of our strategy is 11% better than other recent works. We have increased the precision of classification by up to 49% and 65% and the recall of classification by up to 53% and 71% for specific targets and object types, respectively. These results demonstrate the superiority of our scheme over recent state-of-the-art works. © 2019, Springer Science+Business Media, LLC, part of Springer Nature.
International Journal of Communication Systems (10991131) 32(9)
The fact that network users mostly look for content regardless of its location has led to the creation of information-centric networks, with NDN (the Named Data Networking project) as the most famous instance. NDN can be implemented in any type of network, including a MANET (mobile ad-hoc network), which can easily be created among a collection of smartphones. The first step for content retrieval in NDN is propagating interest packets, which has a dramatic effect on energy consumption because of wireless communications. Methods have been devised for limiting the amount of packet propagation, but they are not appropriate for smartphones, either because they require multiple WiFi interfaces, which are not available in usual smartphones, or because they require the exact position of nodes, which conflicts with privacy. In our proposed approach, a single WiFi interface is used, and mobile nodes share only an imprecise version of their current and predicted next locations, which complies much better with privacy. Using this information, the number of interest and data packets decreases by more than 15%. When location information is spread through the network, this reduction is almost doubled. © 2019 John Wiley & Sons, Ltd.
Wireless Personal Communications (1572834X) 109(1)pp. 645-656
Software-Defined Networks (SDNs) are developed to compensate for the complicated control functions of network elements and to make scaling easier. In SDNs, the controlling operations are implemented by a logically centralized controller, where the occurrence of a failure resulting in separation of the control and data planes is inevitable. Since the reliability of centralized controllers is low due to their being a single point of failure, the focus of this study is to improve the reliability of SDNs with distributed controllers. In this article, wide networks are partitioned into smaller subnetworks, each controlled by its own controller, in order to reduce the effect of failures. In each subnetwork, the reliability is calculated by considering the number and degree of nodes and the loss rate of the links, and is then transmitted among the controllers through the Leader Election and Dijkstra algorithms. Afterward, the controller with the highest reliability rate is considered the coordinator through the newly proposed Coordinator Finder Algorithm. In practice, when a controller fails, the coordinator chooses an appropriate controller for its subnetwork in a transitory manner, improving fault tolerance and accuracy and reducing latency. A newly designed Colored Petri Net is applied to verify the proposed method. © 2019, Springer Science+Business Media, LLC, part of Springer Nature.
Journal Of Information Systems And Telecommunication (23221437) 7(2)pp. 96-109
By switching the computational load from mobile devices to the cloud, Mobile Cloud Computing (MCC) allows mobile devices to offer a wider range of functionalities. There are several issues in using mobile devices as resource providers, including unstable wireless connections, limited energy capacity, and frequent location changes. Fault tolerance and reliable resource allocation are among the challenges encountered by mobile service providers in MCC. In this paper, a new reliable resource allocation and fault tolerance mechanism is proposed in order to apply a fully distributed resource allocation algorithm without exploiting any central component. The objective is to improve the reliability of mobile resources. The proposed approach involves two steps: (1) predicting device status by gathering contextual information and applying TOPSIS to prevent faults caused by the volatility of mobile devices, and (2) adapting replication and checkpointing methods for fault tolerance. A context-aware reliable offloading middleware is developed to collect contextual information and manage the offloading process. To evaluate the proposed method, several experiments are run in a real environment. The results indicate improvements in success rate, completion time, and energy consumption for tasks with high computational loads. © 2019 Iranian Academic Center for Education, Culture and Research.
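TOPSIS itself is a standard multi-criteria ranking procedure; the criteria, weights, and benefit/cost conventions in the sketch below are hypothetical (e.g. battery level and signal quality as benefit criteria, speed as a cost criterion) and not the exact context model of the paper.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives (rows) by TOPSIS closeness to the ideal solution.
    benefit[j] is True if higher values of criterion j are better."""
    X = np.asarray(decision_matrix, dtype=float)
    V = np.asarray(weights, float) * X / np.linalg.norm(X, axis=0)  # normalize + weight
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)   # higher closeness = more dependable device

# hypothetical devices described by [battery %, signal quality, speed m/s]
scores = topsis([[80, 60, 1.0], [40, 70, 5.0], [95, 40, 0.2]],
                weights=[0.5, 0.3, 0.2], benefit=[True, True, False])
```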
Physica A: Statistical Mechanics and its Applications (03784371) 501pp. 12-23
Link prediction is a fundamental problem in social network analysis. A variety of techniques exist for link prediction, which apply similarity measures to estimate the proximity of vertices in the network. Complex networks such as social networks contain structural units named network motifs. In this study, a newly developed similarity measure is proposed in which these structural units are used as the source of similarity estimation. The measure is tested within a supervised learning experiment framework, where it is compared with other similarity measures. The classification model trained with the proposed similarity measure outperforms others of its kind. © 2018 Elsevier B.V.
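The motif-based measure itself is not given in the abstract; the sketch below only shows the surrounding supervised setup, computing two classical similarity features (common neighbours and the Jaccard coefficient) with networkx for candidate pairs; the proposed motif-based score would simply be appended as another feature column before training a classifier.

```python
import networkx as nx

def pair_features(G, pairs):
    """Similarity features for candidate node pairs, to be fed to a classifier."""
    rows = []
    for u, v in pairs:
        cn = len(list(nx.common_neighbors(G, u, v)))
        jac = next(nx.jaccard_coefficient(G, [(u, v)]))[2]
        rows.append([cn, jac])   # a motif-based score would be appended here
    return rows
```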
Journal of Intelligent and Fuzzy Systems (18758967) 34(4)pp. 2667-2678
Link prediction is the problem of inferring future interactions among existing network members based on available knowledge. Computing the similarity between a node pair is a known solution for link prediction. This article proposes several new similarity measures: some use nodes' recency of activity, some use edge weights, and some use a fusion of both in their calculation. A new definition of recency is provided here. A supervised learning method that applies a range of network properties and node similarity measures as its feature set is developed for the experiments. The results of the experiments indicate that using the proposed similarity measures improves the performance of link prediction. © 2018 - IOS Press and the authors. All rights reserved.
Wireless Networks (10220038) 24(1)pp. 283-294
Human location prediction has been a matter of concern for several years due to its many applications. It has become more important nowadays because of the prevalence of mobile devices, which have adequate tools for inferring location. Different approaches to making this prediction can be divided into three categories based on the movement history they use: the history of the mobile user himself, the history of all mobile users in a place, and the history of only related mobile users. Besides the problem of limiting shared data to only the required data, preserving privacy is a matter of concern for persuading mobile users to share their data. In this paper, we propose a new method in which the amount of shared data is decreased to a minimum, and only the data that will improve the partner's prediction is shared. Our method preserves privacy by blurring the shared data to different degrees. The experimental results show that, regardless of the amount of blurring, as long as the user movement is not lost because of blurring, the accuracy of prediction is improved by about 7%. © 2016, Springer Science+Business Media New York.
Computer Networks (13891286) 133pp. 195-211
In Software-Defined Networks (SDNs), the role of the centralized controller is crucial, and thus it becomes a single point of failure. In this work, a distributed controller architecture is explored as a possible solution to improve fault tolerance. A network partitioning strategy, with small subnetworks, each with its own Master controller, is combined with the use of Slave controllers for recovery purposes. A novel formula is proposed to calculate the reliability rate of each subnetwork, based on the load and considering the number and degree of the nodes as well as the loss rate of the links. The reliability rates are shared among the controllers through a newly designed East/West bound interface, to select the coordinator for the whole network. This proposed method is called "Reliable Distributed SDN (RDSDN)." In RDSDN, the failure of controllers is detected by the coordinator, which may undertake a fast recovery scheme to replace them. The numerical results demonstrate the performance improvement achievable with the adoption of RDSDN and show that this approach performs better regarding failure recovery compared to methods used in related research. © 2018 Elsevier B.V.
International Journal of Network Management (10557148) 27(5)
Virtual networks may be used in a cloud data center to provide personalized networking services for the applications running in the cloud. As virtual networks come to the data center and leave it, the data center network load may become unbalanced, where some parts of the data center have accommodated many virtual networks while few virtual networks are mapped to other parts. This situation may lead to packet loss and service level agreement violations in an oversubscribed data center. This unbalanced load state can be resolved by migrating virtual networks from overloaded parts of the data center to places where the load is lower. This paper presents implementation details of a prototype system that provides virtual networking service in a cloud data center and focuses primarily on virtual network migration as a means of controlling the state of load in the data center. Experimental results show that the system has acceptable performance in reducing the packet loss ratio and keeping the load in a balanced state. Copyright © 2017 John Wiley & Sons, Ltd.
Journal of Intelligent and Fuzzy Systems (18758967) 32(6)pp. 3987-3998
Finding community structures in online social networks is an important methodology for understanding the internal organization of users and actions. Most previous studies have focused on structural properties to detect communities. They do not analyze the information gathered from the posting activities of members of social networks, nor do they consider overlapping communities. To tackle these two drawbacks, a new overlapping community detection method involving social activities and semantic analysis is proposed. This work applies fuzzy membership to detect communities that overlap to different extents and runs semantic analysis to include the information contained in posts. The available resource description format contributes to research in social networks. Based on this new understanding of social networks, the approach can be adopted for large online social networks and for social portals, such as forums, that are not based on network topology. The efficiency and feasibility of the method are verified by the available experimental analysis. The results obtained by the tests on real networks indicate that the proposed approach can be effective in discovering labelled and overlapping communities with high modularity. This approach is fast enough to process very large and dense social networks. © 2017-IOS Press and the authors. All rights reserved.
Peer-to-Peer Networking and Applications (19366450) 10(4)pp. 1021-1033
The communication network of a Smart Grid has a three-level hierarchical structure consisting of Home Area Network (HAN), Neighborhood Area Network (NAN) and Wide Area Network (WAN). Wireless communication, due to its advantages, is identified as a potential candidate for Smart Grid communications, especially in HAN and NAN. However, wireless transmission is inherently unreliable, whereas communication reliability is one of the fundamental requirements of Smart Grid applications. In this paper, a two-layer communication model based on IEEE reference grids is considered for NAN and a method based on transmission redundancy is proposed to improve the reliability of wireless communications in NAN, while the communication delay requirement of the Smart Grid is considered as a restriction. The proposed method finds the optimum number of transmissions at each hop with respect to the loss probability and total delay constraints. Comparing the proposed method to the case of an equal number of transmissions for all the hops, it is shown by analysis that the proposed method achieves a superior reliability while meeting the delay requirement. In addition, the simulation-based evaluation of the proposed method supports the validity of the results obtained from the analytical model. © 2016, Springer Science+Business Media New York.
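To make the kind of optimization described above concrete, the sketch below greedily spends a delay budget on extra transmissions at the hops where they help end-to-end delivery probability the most, assuming each attempt on hop i is lost with probability p_i and costs d_i time units. The paper derives the optimum analytically; this greedy allocation and its parameters are only an illustration.

```python
# Sketch of per-hop transmission allocation under a total delay budget. Assumes each
# transmission on hop i succeeds independently with probability (1 - p_i) and costs
# d_i time units. Illustrative only; the paper's formulation may differ.
from math import prod

def allocate_transmissions(loss_probs, hop_delays, delay_budget):
    n = [1] * len(loss_probs)                      # at least one attempt per hop
    spent = sum(hop_delays)
    while True:
        best_hop, best_gain = None, 0.0
        for i, (p, d) in enumerate(zip(loss_probs, hop_delays)):
            if spent + d > delay_budget:
                continue                           # another attempt would break the budget
            # multiplicative gain in this hop's delivery probability from one extra attempt
            gain = (1 - p ** (n[i] + 1)) / (1 - p ** n[i])
            if gain > best_gain:
                best_hop, best_gain = i, gain
        if best_hop is None:
            break
        n[best_hop] += 1
        spent += hop_delays[best_hop]
    success = prod(1 - p ** k for p, k in zip(loss_probs, n))
    return n, success

print(allocate_transmissions([0.2, 0.05, 0.3], [2, 2, 2], delay_budget=20))
```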
One of the most widely used applications of wireless networks is in unmanned aeronautical ad-hoc networks. In these networks, the flying nodes in the mission must send their information to the ground base. If a UAV is outside the coverage of the ground base, it loses its connection. The solution is to send the information to the neighboring nodes. These neighboring nodes redirect the information to the ground base. Due to high node dynamics and rapid changes in network topology, one of the biggest concerns in these networks is routing between nodes. Previous routing methods, although making improvement in the overall performance in these networks led to routing overhead and network delay. In the present study a new routing method is introduced. In this new method a routing algorithm is presented with a focus on improving the packet delivery ratio and throughput therefore reducing the end to end delay and network overhead. In the proposed method instead of using only one route between nodes, all discovered routes in the network are kept in the nodes routing table. Then the best route is used as the first proposed route between the source and destination nodes and after failing this route, the second route is utilized immediately. This decreases the broadcasting of route discovery packets through the network. According the simulation results, the proposed method has proved more efficient. There has been an increase in packet delivery ratio by 4% in average, in end to end delay approximately by 30% and in the throughput ratio of the network by 9% in comparison with other methods in different scenarios. © 2017 IEEE.
Journal of Supercomputing (15730484) 73(2)pp. 866-886
A possible candidate for implementing a smart grid neighborhood area network (NAN) is wireless communications because of its advantages, such as flexibility and low installation and maintenance cost. However, one of the problems with wireless communication is its unreliability, whereas communication reliability is a basic requirement of smart grid applications. In this paper, we consider a wireless mesh network for NAN and then propose a method based on hop-by-hop ARQ that achieves the required data communication reliability in wireless NAN and, at the same time, satisfies the communication latency constraint, which is another requirement of smart grid applications. The proposed approach provides the optimal number of allowed retransmissions at each hop considering the loss probabilities of all hops and the end-to-end delay constraint. Compared to the typical application of ARQ, in which the number of permitted retransmissions is the same at each hop, it is demonstrated that the proposed approach achieves a higher level of reliability while meeting the delay constraint. Moreover, the proposed method is evaluated by simulation, and the results are consistent with the results of the theoretical model. © 2016, Springer Science+Business Media New York.
Peer-to-Peer Networking and Applications (19366450) 10(4)pp. 1051-1062
The efficiency of BitTorrent in content distribution is a major concern among researchers in this field with respect to streaming Video on Demand (VoD). BitTorrent is not appropriate for real-time applications; therefore, to apply it to VoD, it must go through the necessary changes. Most of the available studies have focused on changes to BitTorrent's chunk- and peer-selection methods; the proposed methods have improved the quality of VoD to a certain degree, while the effect of chunk size on video quality has received less attention. Noting that a buffer is used in VoD, the time specified for filling the buffer allows appropriate management of the chunk length. Both the bit error rate and the time overhead of the operating algorithm affect the chunk size. Because of the bit error rate, long chunks are less likely to be received correctly, which favors shorter chunks; however, shorter chunks mean more chunks must be handled per buffer. A specific amount of time is available for obtaining the buffer's content, and it must then be divided over more chunks. Running the algorithms for each chunk generates greater overhead, which reduces the QoS; this overhead makes bigger chunks perform better. Given the opposite effects of these two parameters on chunk size, in this article the optimal chunk length is found by considering both characteristics. This optimal length establishes a balance between the rate of correctly received chunks and the amount of buffer content obtained in a specified time. © 2016, Springer Science+Business Media New York.
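The trade-off described above can be illustrated numerically: longer chunks suffer more from the bit error rate, while shorter chunks multiply the per-chunk algorithm overhead. The sketch below scans candidate chunk sizes and keeps the one maximizing expected correctly received bits within a buffer-fill window; the model and parameter values are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative trade-off only: chunk success probability under a bit error rate (BER)
# versus per-chunk algorithm overhead. Parameters (BER, overhead, window, link rate)
# are invented for the example.
def effective_goodput(chunk_bits, ber, per_chunk_overhead_s, buffer_window_s, link_bps):
    p_ok = (1 - ber) ** chunk_bits                      # chunk received without bit errors
    time_per_chunk = chunk_bits / link_bps + per_chunk_overhead_s
    chunks_in_window = buffer_window_s / time_per_chunk
    return chunks_in_window * chunk_bits * p_ok         # expected correct bits per window

candidates = [2 ** k for k in range(10, 21)]            # 1 Kbit .. 1 Mbit chunks
best = max(candidates, key=lambda c: effective_goodput(
    c, ber=1e-6, per_chunk_overhead_s=0.002, buffer_window_s=5.0, link_bps=2e6))
print("best chunk size (bits):", best)
```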
Journal of the Chinese Institute of Engineers, Transactions of the Chinese Institute of Engineers, Series A/Chung-kuo Kung Ch'eng Hsuch K'an (02533839) 39(1)pp. 49-56
Grid computing comprises many distributed nodes in order to compute and analyze a set of distributed data. To improve processing performance, an appropriate load-balancing algorithm is required to distribute loads equally among the grid's nodes. In this article, an algorithm based on ant colony optimization is proposed to deal with the load-balancing problem. In this approach, when an ant reaches a node, the ant's table and the node's table exchange their information and update each other. In order to move to the most appropriate node, the ant selects the next node from the current node's table according to the nodes' loads and their CPU rates. This process continues until the ant has passed the predefined number of steps. The experimental results show that when the proposed algorithm is applied to the grid environment, increasing the number of jobs and their length has an insignificant impact on the system response time. © 2015 The Chinese Institute of Engineers.
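As a rough illustration of the ant's next-node choice, the sketch below scores candidate nodes by low load and high CPU rate and picks probabilistically (roulette-wheel), which is a common pattern in ant colony heuristics. The scoring function, exponents, and node table layout are assumptions; the table-exchange step of the paper is not shown.

```python
import random

# Sketch: probabilistic next-node selection from the current node's table,
# assuming desirability ~ (1 - load)^alpha * cpu_rate^beta (illustrative only).
def choose_next_node(node_table, alpha=1.0, beta=1.0):
    """node_table: {node_id: {'load': float in [0, 1], 'cpu_rate': float}}"""
    scores = {nid: ((1.0 - info['load']) ** alpha) * (info['cpu_rate'] ** beta)
              for nid, info in node_table.items()}
    total = sum(scores.values())
    r, acc = random.uniform(0, total), 0.0
    for nid, s in scores.items():                # roulette-wheel selection
        acc += s
        if acc >= r:
            return nid
    return nid                                    # fallback: last node

table = {'n1': {'load': 0.8, 'cpu_rate': 2.0},
         'n2': {'load': 0.3, 'cpu_rate': 1.5},
         'n3': {'load': 0.5, 'cpu_rate': 3.0}}
print(choose_next_node(table))
```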
Due to the rapid growth in the number of cloud users and the corresponding increase in data center usage as the basis of clouds, optimal task scheduling will emerge as a vital issue in the near future. Since optimal task scheduling is NP-complete, evolutionary algorithms render better performance than simple gradient-based algorithms. In the proposed approach, an evolutionary algorithm based on Biogeography-Based Optimization is applied to achieve optimal task scheduling in data centers. Workloads are distributed over virtual machines in such a manner that the total execution time (makespan) is minimized. An Information Base Repository (IBR) is used to store the online load status of the Virtual Machines (VMs). The IBR and the information of the workloads submitted to the data center are first used to decide which VM will receive the submitted workload; the workload is then forwarded to the specified VM. The VM's available memory, bandwidth, storage and CPU rate in Million Instructions Per Second are considered to find the optimal dispatching solution. Simulation results indicate that an increase in the number of VMs does not drastically change the time needed to obtain the optimal solution, and the convergence time increases slowly compared with task scheduling approaches based on Genetic Optimization and Particle Swarm Optimization, so the total workload is distributed in an optimal manner. © 2016 IEEE.
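A greatly simplified sketch of BBO-style scheduling is shown below: each "habitat" encodes a task-to-VM assignment, fitness is the makespan, and migration copies assignment genes from fitter habitats into less fit ones. Only a single MIPS-like rate per VM is modeled here, whereas the paper also considers memory, bandwidth, and storage; all parameter values are illustrative.

```python
import random

def makespan(assign, task_len, vm_rate):
    """Total execution time of the busiest VM for a given assignment."""
    finish = [0.0] * len(vm_rate)
    for t, vm in enumerate(assign):
        finish[vm] += task_len[t] / vm_rate[vm]
    return max(finish)

def bbo_schedule(task_len, vm_rate, pop=20, gens=100, mutation=0.02):
    n_t, n_vm = len(task_len), len(vm_rate)
    habitats = [[random.randrange(n_vm) for _ in range(n_t)] for _ in range(pop)]
    for _ in range(gens):
        habitats.sort(key=lambda h: makespan(h, task_len, vm_rate))  # best first
        for rank in range(1, pop):
            immigration = rank / (pop - 1)          # worse habitats immigrate more
            for t in range(n_t):
                if random.random() < immigration:
                    donor = habitats[random.randrange(rank)]  # gene from a fitter habitat
                    habitats[rank][t] = donor[t]
                if random.random() < mutation:
                    habitats[rank][t] = random.randrange(n_vm)
    return min(habitats, key=lambda h: makespan(h, task_len, vm_rate))

best = bbo_schedule([40, 10, 25, 60, 5], [2.0, 1.0, 3.0])
print(best, makespan(best, [40, 10, 25, 60, 5], [2.0, 1.0, 3.0]))
```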
Malaysian Journal Of Computer Science (01279084) 29(3)pp. 196-206
Stagnation is one of the complicated issues in Grid computing systems, caused by the random arrival of tasks and heterogeneous resources. Stagnation occurs when a large number of submitted tasks are assigned to a specific resource and make it overflow. To prevent this scenario, a load balancing algorithm based on the Ant Colony algorithm and the Max-min technique is proposed in this paper. In the proposed algorithm, the resource manager of the system finds the best resource for a submitted task according to a matrix that records the characteristics of all resources as pheromone values. After choosing the best resource for the submitted task, a local pheromone update is applied to the selected resource to reduce its tendency to be selected by subsequent new tasks. After the assigned task is executed properly, a global pheromone update is performed to renew the status of all resources for the next submitted tasks. To avoid stagnation, the pheromone value of each resource is compared with a predefined threshold in order to keep the number of assigned tasks below this threshold. By harmonizing the resources' characteristics with the tasks, the proposed algorithm is able to reduce the response time of the submitted tasks while remaining simple to implement.
Technological developments alongside VLSI achievements enable mobile devices to be equipped with multiple radio interfaces, which is known as multihoming. On the other hand, the combination of various wireless access technologies, known as Next Generation Wireless Networks (NGWNs), has been introduced to provide continuous connection to mobile devices at any time and location. Cognitive radio networks, as a part of NGWNs, arose to overcome spectrum inefficiency and spectrum scarcity. In order to provide seamless and ubiquitous connection across heterogeneous wireless access networks in the context of cognitive radio networks, utilizing Mobile IPv6 is beneficial. In this paper, a mobile device equipped with two radio interfaces is considered in order to evaluate the performance of spectrum handover in terms of handover latency. The analytical results show that the proposed model can achieve better performance compared to other related mobility management protocols, mainly in terms of handover latency. © 2016 IEEE.
Journal of Information Science (01655515) 42(2)pp. 166-178
The issue of detecting large communities in online social networks is the subject of a wide range of studies that explore the network sub-structure. Most of the existing studies are concerned with network topology, with no emphasis on active communities in large online social networks and in social portals, such as forums, that are not based on network topology. Here, a new semantic community detection method is proposed that focuses on user attributes instead of network topology. In the proposed approach, a network of user activities is established and weighted through semantic data. Furthermore, a consistent extended label propagation algorithm is presented. In doing so, semantic representations of active communities are refined and labelled with the user-generated tags available in Web 2.0. The results show that the proposed semantic algorithm is able to significantly improve the modularity compared with three previously proposed algorithms. © Chartered Institute of Library and Information Professionals.
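For reference, a plain weighted label-propagation pass over an activity network looks like the sketch below; edge weights would come from the semantic analysis step. The "consistent extended" refinements of the paper are not reproduced here, and the toy graph is invented for illustration.

```python
import random
from collections import defaultdict

# Minimal weighted label propagation: each node repeatedly adopts the label with the
# largest total edge weight among its neighbours. graph: {node: [(neighbour, weight)]}.
def label_propagation(graph, iterations=20):
    labels = {v: v for v in graph}                 # start: every node is its own community
    nodes = list(graph)
    for _ in range(iterations):
        random.shuffle(nodes)
        for v in nodes:
            weight_per_label = defaultdict(float)
            for u, w in graph[v]:
                weight_per_label[labels[u]] += w
            if weight_per_label:
                labels[v] = max(weight_per_label, key=weight_per_label.get)
    return labels

g = {'a': [('b', 1.0), ('c', 0.8)], 'b': [('a', 1.0), ('c', 0.9)],
     'c': [('a', 0.8), ('b', 0.9), ('d', 0.1)], 'd': [('c', 0.1), ('e', 1.0)],
     'e': [('d', 1.0)]}
print(label_propagation(g))
```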
In this paper, a novel Inter-Cell Interference Coordination (ICIC) scheme is proposed to mitigate downlink inter-cell interference (ICI) in Long Term Evolution (LTE) networks. The proposed method partitions the cells into three regions and uses predefined resource allocation schemes to assign the resources to each cell of the cluster. The proposed method also introduces an interference balancing method between neighboring cells based on an inter-cell coordination mechanism. Simulation results show that the proposed scheme outperforms the compared schemes. The results also show that the scheme can greatly improve the spectral efficiency of the system and of the users, although it reduces user-level fairness. © 2015 IEEE.
Intelligent Data Analysis (1088467X) 19(1)pp. 109-126
Social tagging provides an effective way for users to organize, manage, share and search for various kinds of resources. These tagging systems have resulted in more and more users providing an increasing amount of information about themselves that could be exploited for recommendation purposes. However, since social tags are generated by users in an uncontrolled way, they can be noisy and unreliable, and thus exploiting them for recommendation is a non-trivial task. In this article, a new recommender system is proposed based on the similarities between user and item profiles. The approach here is to generate user and item profiles by discovering frequent user-generated tag patterns. We present a method for finding the underlying meanings (concepts) of the tags, mapping them to semantic entities belonging to external knowledge bases, namely WordNet and Wikipedia, through the exploitation of ontologies created within the W3C Linking Open Data initiative. In this way, the tag-based profiles are upgraded to semantic profiles by replacing tags with the corresponding ontology concepts. In addition, we further improve the semantic profiles by enriching them with a semantic spreading mechanism. To evaluate the performance of this proposed approach, a real dataset from the Del.icio.us website is used for empirical experiments. Experimental results demonstrate that the proposed approach provides a better representation of user interests and achieves better recommendation results in terms of precision and ranking accuracy compared with existing methods. We further investigate the recommendation performance of the proposed approach in the face of the cold start problem, and the results confirm that the proposed approach can indeed be a remedy for the problem of cold start users, hence improving the quality of recommendations. © 2015 - IOS Press and the authors. All rights reserved.
Intelligent Data Analysis (1088467X) 18(5)pp. 953-972
With the rapidly increasing volume of social web content, due to the growing popularity of social media services, significant attention has been drawn towards recommender systems, i.e., systems that offer recommendations to users on items appropriate to their requirements. To offer suitable recommendations, these systems need comprehensive user and item models that provide a thorough understanding of their characteristics and preferences. In this article, a new recommender system is proposed based on the similarities between user and item profiles. The approach here is to generate user and item profiles by discovering frequent user-generated tag patterns, and to enrich each individual profile by a two-phase profile enrichment procedure. The profiles are extended by association rules discovered through the association rule mining process, and are further enriched through collaboration with other similar user/item profiles. To evaluate the performance of this proposed approach, a real dataset from the Del.icio.us website is used for empirical experiments. Experimental results demonstrate that the proposed approach provides a better representation of user interests and achieves better recommendation results in terms of precision and ranking accuracy compared with existing methods. © 2014 - IOS Press and the authors. All rights reserved.
Journal of Information Science (01655515) 40(5)pp. 594-610
Social tagging has revolutionized the social and personal experience of users across numerous web platforms by enabling the organizing, managing, sharing and searching of web data. The extensive amount of information generated by tagging systems can be utilized for recommendation purposes. However, the unregulated creation of social tags by users can produce a great deal of noise and the tags can be unreliable; thus, exploiting them for recommendation is a nontrivial task. In this study, a new recommender system is proposed based on the similarities between user and item profiles. The approach applied is to generate user and item profiles by discovering tag patterns that are frequently generated by users. These tag patterns are categorized into irrelevant patterns and relevant patterns, which represent diverse user preferences in terms of likes and dislikes. Furthermore, presented here is a method for translating these tag-based profiles into semantic profiles by determining the underlying meaning(s) of the tags, and mapping them to semantic entities belonging to external knowledge bases. To alleviate the cold start and overspecialization problems, semantic profiles are enriched in two phases: (a) using a semantic spreading mechanism and then (b) inheriting the preferences of similar users. Experiments indicate that this approach not only provides a better representation of user interests, but also achieves better recommendation results when compared with existing methods. The performance of the proposed recommendation method is investigated in the face of the cold start problem, the results of which confirm that it can indeed remedy the problem for early adopters, hence improving overall recommendation quality. © The Author(s) 2014.
The appearance of social networks has been one of the most important events of the last decade. A social network is a network of interaction and communication whose nodes are its members and whose edges are the relations between them. An outstanding concept in social networks is social influence, which is of great significance. Social influence is the behavioral transformation in a person caused by his/her relationship with other people, organizations and the network. Today, the concept of social influence is used in viral marketing, whose aim is to find a small subset of influential users in a social network for marketing a product. The aim of this paper is to offer an algorithm for choosing expert and influential users in social networks. The algorithm uses a leader discovery method to choose, as the expert and influential leaders of the social network, those users who are influential among the other members and who also have enough expertise and knowledge about the marketed product. Consequently, the process of selecting expert and influential users is quick, and the selected leaders are influential and expert enough to perform marketing and advertising in the social network. © 2014 IEEE.
Using virtual networks is a good solution for providing customized networking services in the cloud and for isolating the performance of applications running in the cloud. Cloud providers usually oversubscribe their data centers to reduce the costs imposed on customers, and naturally, oversubscription can lead to network overload. If the situation is not managed carefully, it can hurt the customers' experience with the cloud and can also violate the service level agreement between the provider and the customers. This work presents the prototype of a centralized system which is implemented over a software defined data center network and provides a virtual network service that is sensitive to the load in the data center and tries to balance the load by reconfiguring the mapping of the virtual links. As a result, in an oversubscribed data center, the effects of any virtual network's traffic on other virtual networks' traffic are reduced. The results of the experiments show that the system can reduce the packet loss in the data center and can increase the aggregate bandwidth that each virtual network receives over its whole life in the data center. © 2014 IEEE.
In recent decades, due to the limitation of radio spectrum and the increasing number of users in wireless networks, spectral efficiency has been a considerable issue in the literature. In addition, energy consumption is rapidly increasing in wireless communications due to the growing demand for wireless technologies, especially cellular networks. Therefore, energy efficiency has become an important issue in such networks. This contribution proposes an approach to improve energy efficiency while providing the minimum spectrum required for an Orthogonal Frequency Division Multiple Access (OFDMA) based, multi-relay, multi-user cellular network. An objective function is first formulated as a ratio of spectral efficiency to power, and then the Interior Point Method is employed to solve the optimization problem. In this paper, power allocation and subcarrier allocation are formulated jointly to provide the minimum required spectrum and maximum energy efficiency in the system. In addition, the effect of Inter-Cell Interference on spectral efficiency has been studied. The results show that considering the minimum spectral efficiency provides the required Quality of Service for users; however, at low values of maximum available power, energy efficiency decreases. © 2014 IEEE.
Online social networks (OSNs) are websites that allow users to build connections and relationships with other Internet users. Social networks store information remotely, rather than on a user's personal computer. They can be used to keep in touch with friends, make new contacts and find people with similar interests and ideas. Nowadays the popularity of online social networks is growing rapidly, and many people besides friends and acquaintances are interested in the information people post on social networks. Identity thieves, scam artists, debt collectors, stalkers, and corporations looking for a market advantage use social networks to gather information about consumers. Companies that operate social networks also collect a variety of data about their users, both to personalize the services for the users and to sell to advertisers. Concern about leakage of privacy and security in social networks is growing sharply these days. Identity theft attacks (ICAs) create clone identities in OSNs to steal users' personal information, and they are an increasingly important concern in cyberspace. If no protection mechanism is applied, they affect users' activity and the trust and reliance relations they establish with other users. In this paper, profile cloning and the identity theft attack are first introduced, and then a framework for detecting suspicious identities is proposed. This approach is based on attribute similarity and friend network similarity. According to the similarity measures computed in each step and a predetermined threshold, it is decided which profile is the clone and which one is genuine. © 2013 IEEE.
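The two similarity measures mentioned above can be sketched directly: an attribute-match ratio plus a Jaccard similarity over friend lists, combined and compared against a threshold. The weighting, threshold value, and example profiles below are illustrative assumptions, not the paper's calibration.

```python
# Sketch of clone-suspicion scoring from attribute similarity and friend-network
# similarity. Weights and threshold are illustrative only.
def attribute_similarity(profile_a: dict, profile_b: dict) -> float:
    keys = set(profile_a) | set(profile_b)
    matches = sum(1 for k in keys if profile_a.get(k) == profile_b.get(k))
    return matches / len(keys) if keys else 0.0

def friend_similarity(friends_a: set, friends_b: set) -> float:
    union = friends_a | friends_b
    return len(friends_a & friends_b) / len(union) if union else 0.0  # Jaccard index

def is_suspected_clone(pa, fa, pb, fb, w_attr=0.6, w_friend=0.4, threshold=0.7):
    score = w_attr * attribute_similarity(pa, pb) + w_friend * friend_similarity(fa, fb)
    return score >= threshold, score

suspect = is_suspected_clone(
    {'name': 'Ali', 'city': 'Isfahan', 'job': 'engineer'}, {'u2', 'u3', 'u4'},
    {'name': 'Ali', 'city': 'Isfahan', 'job': 'teacher'},  {'u2', 'u3', 'u9'})
print(suspect)
```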
Web services are increasingly used to integrate and build business applications on the Internet. Failure of web services is not acceptable in many situations, such as online banking, so fault tolerance is a key challenge of web services. Web service architecture still lacks facilities to support fault tolerance. This paper proposes a fault tolerant architecture for web services that increases reliability and availability; the architecture is based on application-level and transport-level logging of requests and replies, and on N-Version and active replication techniques. The proposed architecture is client transparent and provides fault tolerance even for requests being processed at the time of server failure. © 2011 IEEE.
In recent years, the field of Web service security has evolved rapidly and various security technologies and standards have been proposed. Our investigation found a WSDL threat that has not yet been discussed in the Web service security literature but is equally important. WSDL documents are a guidebook for attacking and hacking Web services. Since a WSDL document contains explicit instructions on how to communicate with a private application, it can cause a serious security breach if the Web services are compromised. To the best of our knowledge, all standards presented so far have tried to address the security problems of the SOAP messages transferred between Web services. This paper focuses on enhancing the security of a Web service's WSDL file. It proposes a model for encrypting the WSDL document to handle its security problem. This solution is suitable for Web services that have critical rules according to their policies and whose WSDL files face hacking problems. © 2011 IEEE.
Journal of Artificial Intelligence (discontinued) (19945450) 4(1)pp. 1-11
In this study a four-phase process for the classification of negotiation policies is presented. In the first phase, negotiation methods are classified based on the number of issues involved in the negotiation (one or many issues). In the second phase, the relation of issues in multi-issue negotiations is categorized as dependent or independent. Considering negotiation issues as independent, in the third phase the issues are separated into common and peripheral issues. Finally, in the last phase, four negotiation policies according to the kind of issues (common or peripheral) are presented. © 2011 Asian Network for Scientific Information.
Ontology matching tries to establish semantic relations between similar elements in different ontologies to provide interoperability in the semantic web. Dealing with the problem of semantic heterogeneity is a key point in the semantic web environment. The (semi-)automatic generation of mappings with respect to uncertainty is a labor-intensive and error-prone task. While the confidence values of the mappings themselves are uncertain, how can the method of aggregating them be certain, and how can we model them in a certain manner? This paper introduces a new approach for modeling uncertainty in ontology matching on the basis of fuzzy set theory and then describes an iterative algorithm that exploits group decision making to aggregate the opinions of matchers into a group consensus. Matching systems are thus combined to overcome contradictory and incomplete alignments, so that the quality and accuracy of the final alignment are improved. © 2011 ACM.
Turning gait is a basic motion for humanoid robots. In this article a model-free approach is presented, with emphasis on making the robot's "turn-in-place" motion more stable and faster. In this regard we use a Genetic Algorithm to optimize the signals produced by a Fourier Series (FS) that controls the joint angles. We show the effectiveness of the proposed method through simulation and experimental results. © 2011 IEEE.
One of the most important concerns in Service Oriented Architecture (SOA) is identifying required services and their interfaces. Service interfaces are used to search for required organizational services or to develop them from scratch. The Web Service Choreography Description Language (WS-CDL) describes the message exchanges between collaborating participants, defining how these participants must be used together to achieve a common business goal. Choreography descriptions include key information about participant service interfaces, so they can be used to generate service interfaces. In this paper, an algorithm is presented for the semantic and automatic generation of the service interfaces participating in choreographies. In this algorithm, similar WS-CDL elements with the same semantic and functional properties are combined to achieve a more summarized and flexible choreography description. This choreography description is then used to derive the available service interfaces. The algorithm aims to help developers facilitate, automate and speed up the development process of SOA-based software systems. © 2010 IEEE.
Journal of Theoretical and Applied Information Technology (18173195) 14(2)pp. 135-140
Quick changes in requirements and opportunities in the world market require different levels of cross-organizational collaboration for integrating distributed information systems, sharing information and coordinating organizational processes. Nowadays, Web Services are the most common technology to meet these requirements. The Web Services Choreography Description Language (WS-CDL), a World Wide Web Consortium (W3C) choreography-based standard, describes how a number of services coordinate to achieve the goal of such collaboration. Only a few WS-CDL based execution models have been proposed so far. Software agents are another alternative for solving inter-organizational coordination problems. This paper presents an execution framework for WS-CDL using software agents. This framework provides a Web Services collaboration layer based on the choreography model and the automatic generation of agents from WS-CDL. It also follows the Web Services stack and the native features of agents and Web Services. © 2005 - 2010 JATIT. All rights reserved.
Information Technology Journal (18125646) 9(2)pp. 224-235
Automated negotiation is a key form of communication in systems composed of multiple autonomous agents. One of these multi-agent environments is e-commerce, in which agents participate in trade instead of their owners and start a bargaining process. Hence, the design and implementation of a scientific agent-based bargaining mechanism (based on negotiation) for such contexts is offered in this study. This bargaining mechanism, which can decrease negotiation time in specific situations, works on the basis of the concession rate principle. Using this mechanism, not only is negotiation time decreased, but the gained utility is also increased (if specific criteria are satisfied). These aims are achievable thanks to the competitive-collaborative nature of the suggested method. The collaborative nature leads agents to help each other decrease negotiation time, while the competitive nature leads to agents' hard efforts to achieve higher personal utility and avoid utility loss. © 2010 Asian Network for Scientific Information.
WSEAS Transactions on Computers (discontinued) (11092750) 9(4)pp. 361-371
Given the wide use of distributed systems in various areas, fault tolerance is an important system characteristic, and it is even more significant in the design of real-time distributed systems. Although middleware such as CORBA is used in designing such systems to increase compatibility, speed and performance and to simplify network programs, there is no supporting framework that provides a distributed real-time system and fault tolerance at the same time. Being adaptive means taking into account the properties of both structures so that the requirements of both are met during execution, and this is usually achieved by a trade-off between the specifications of real-time and fault-tolerant systems. In this study, the FT-CORBA structure, used for supporting fault-tolerant programs, is reviewed along with the relevant important parameters, including the replication style and the number of replicas, which play a major role in improving performance and making it adaptive to real-time distributed systems. Based on these specifications, a structure adaptive to real-time systems with higher performance than the FT-CORBA structure has been built; finally, the implementation of this structure, the determination of the number of replicas and the object replication style, and the significance of the related parameters have been investigated.
Proceedings - International Conference on Software Engineering (02705257) pp. 88-95
Concentrating on components and connectors in traditional approaches to documenting software architecture causes problems such as high costs of architecture change and erosion during architecture evolution. These problems have resulted in a tendency to record the architectural design decisions, and their rationale, made throughout the architecting process. This tendency encourages practitioners and researchers to develop various models and related tools to model, capture, manage, share, and (re)use architectural design decisions. But there still remains a need to visualize and explore architectural design decisions due to the huge number of decisions and relationships among them in the development of large and complex systems. In this paper, we first survey tools that support the visualization of architectural design decisions, their features and deficiencies. Second, we investigate how the Compendium tool can be employed as a general tool to visualize architectural design decisions and their rationale. Finally, we present how visualization with Compendium can improve understandability and support the communication of architectural design in the architecting process. Copyright 2010 ACM.
Journal of Theoretical and Applied Information Technology (18173195) 12(2)pp. 110-116
Service Oriented Architecture (SOA) is a paradigm for developing distributed and heterogeneous software applications within and across organizational boundaries. Choreography is a coordination model of SOA in which service collaborations to achieve a common goal are described from a global point of view. One of the most important issues in SOA is identifying required services and their interfaces. Service interfaces are necessary for searching for required organizational services or developing them from scratch. Because choreographies involve key information about service interfaces, they can be used in the generation of service interfaces. This paper presents an algorithm for the automatic generation of service interfaces from several choreographies using an ontology, which helps to conceptualize a specific domain of knowledge. This method helps developers facilitate, automate and speed up part of the development process of SOA-based software systems. © 2005 - 2009 JATIT. All rights reserved.
Information Technology Journal (18125646) 9(1)pp. 55-60
With the development of wireless communication and mobile computing, new ways for people to interact with each other and their surrounding environment are emerging. Mobile devices, such as Personal Digital Assistants (PDAs) with wireless communication interfaces, enable people to communicate directly and compete with each other via network sessions. This study develops a mobile distributed architecture for dependable competitive interactions over mobile networks, meeting the requirements of fairness, scalability and responsiveness through parallel tasks. To obtain fairness, the system considers the variable communication delay experienced by users located in different situations as well as the waiting time. As can be expected, such models tend to ignore the selfish behavior of users. © 2010 Asian Network for Scientific Information.
International Conference on Advanced Communication Technology, ICACT (17389445) 2pp. 1345-1350
Designing composite services with the desired quality is an interesting challenge in web service environments. In QoS-aware web service composition, appropriate services with acceptable quality are selected among several function-equivalent candidate services. The selection is performed in such a way as to create a composition with optimal quality that satisfies the user's quality constraints. In this paper, the problem of QoS-aware selection of a web service composition is modeled as an optimization problem. The Harmony Search algorithm is then adopted to find an optimal or near-optimal composition that satisfies the user's local and global constraints on quality attributes. The proposed method is a rapid and lightweight approach which can be applied to large service compositions with many service candidates.
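A compact Harmony Search sketch for this kind of selection is shown below: each harmony picks one candidate per abstract task, and fitness rewards aggregate quality while penalizing violation of a global cost bound. The fitness form, penalty, and parameter values (HMS, HMCR, PAR) are illustrative assumptions rather than the paper's settings.

```python
import random

# Harmony Search over service selections: candidates[i] is the list of candidate
# services for abstract task i, each with assumed 'quality' and 'cost' attributes.
def harmony_search(candidates, cost_limit, hms=10, iters=500, hmcr=0.9, par=0.3):
    n = len(candidates)

    def fitness(sel):
        quality = sum(candidates[i][sel[i]]['quality'] for i in range(n))
        cost = sum(candidates[i][sel[i]]['cost'] for i in range(n))
        return quality - (100.0 if cost > cost_limit else 0.0)  # penalty for violating the bound

    memory = [[random.randrange(len(candidates[i])) for i in range(n)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for i in range(n):
            if random.random() < hmcr:              # memory consideration
                val = random.choice(memory)[i]
                if random.random() < par:           # pitch adjustment: neighbouring candidate
                    val = (val + random.choice([-1, 1])) % len(candidates[i])
            else:                                   # random consideration
                val = random.randrange(len(candidates[i]))
            new.append(val)
        worst = min(range(hms), key=lambda k: fitness(memory[k]))
        if fitness(new) > fitness(memory[worst]):
            memory[worst] = new                     # replace the worst harmony
    return max(memory, key=fitness)

cands = [[{'quality': 0.9, 'cost': 5}, {'quality': 0.7, 'cost': 2}],
         [{'quality': 0.8, 'cost': 4}, {'quality': 0.6, 'cost': 1}]]
print(harmony_search(cands, cost_limit=6))
```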
Journal of Digital Information Management (discontinued) (09727272) 8(3)pp. 160-166
Extensive use of web services has led to the introduction of web service compositions to execute business workflows. Since several function-equivalent services, provided by different service providers, may be available on the Web, the problem of selecting web service compositions arises. QoS-aware selection means considering the quality of the service composition when choosing the constituent services, so a method is needed for computing the quality of a composition and determining the composition with the (almost) best quality. The resulting composite has optimal quality and satisfies the user's quality constraints. In this paper, the problem of QoS-aware selection of a web service composition is modeled as an optimization problem, and a variation of the Harmony Search algorithm is used to find a near-optimal composition. Harmony Search is a recently developed optimization algorithm that mimics the musical improvisation process. The proposed method based on Harmony Search has been applied to a large variety of composition schemas generated by simulation software, and the results show that it works faster compared with other approaches.
The justification for software architectural design decisions made throughout the architecting process is necessary for understanding, (re)using, communicating, and modifying an architecture design. Although many existing tools capture, store, manage, and share architectural design decisions explicitly, there still remains a need to visualize and explore architectural design decisions and their underlying rationale. This paper investigates how the Compendium tool can be employed to visualize architectural design decisions and their rationale, in order to improve understandability and promote the communication of architectural design decisions. © 2010 ACM.
In modern networks, different applications generate various traffic types with diverse service requirements, so the identification and classification of traffic play an important role in improving performance in network management. Early applications used well-known transport-layer ports, so their traffic could be classified based on the port number; however, recent applications increasingly use unpredictable port numbers. Consequently, later methods are based on "deep packet inspection". Despite their accuracy, these methods impose a heavy operational load and are vulnerable to encrypted flows. More recent methods classify traffic based on statistical packet characteristics, but having access to only a small part of the statistical flow information in real-time traffic may jeopardize their performance. Considering the advantages and disadvantages of these two kinds of methods, in this paper we propose an approach based on payload content and statistical traffic characteristics with the Naïve Bayes algorithm for real-time network traffic classification. The performance and low complexity of the proposed approach confirm its suitability for real-time traffic classification. ©2010 IEEE.
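To illustrate the statistical side of such a classifier, the sketch below trains Gaussian Naïve Bayes on a few flow-level features (packet size statistics and inter-arrival time). The feature set and tiny training data are invented for illustration; the paper additionally combines payload content with these statistics.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Illustrative features per flow: mean packet size, packet size std, mean inter-arrival time (ms).
X_train = np.array([[1400, 120,  2.0],    # bulk-transfer-like flows
                    [1300, 200,  3.5],
                    [ 160,  40, 20.0],    # interactive / VoIP-like flows
                    [ 200,  60, 25.0]])
y_train = ['bulk', 'bulk', 'interactive', 'interactive']

clf = GaussianNB().fit(X_train, y_train)          # Gaussian Naïve Bayes on flow statistics
print(clf.predict(np.array([[180, 50, 22.0]])))   # expected: ['interactive']
```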
Journal of Theoretical and Applied Information Technology (18173195) 14(1)pp. 23-29
To increase fault tolerance in a distributed database, it is better to add a backup server for each primary server in the system. Clearly, the primary server and the backup server need to be connected to each other. To connect these computers when they are far apart, it is necessary to use a leased line, which is charged as data is transferred. As more packets are transferred between the primary and backup server, more money must be paid for this line, so if the number of packets transferred between these computers is reduced, the company can economize on its expenditures. Moreover, when the amount of updating information sent from the primary server to the backup server is reduced, the number of transactions that must be performed on the backup server is also reduced. To achieve this goal, we introduce a new method that reduces the number of packets transferred between the primary and backup server. In this method, the replicated data of the primary server is used by the backup mechanism: the primary server sends only the transactions on data that are not replicated on other computers. The transactions on replicated data are therefore not transferred to the backup server, and as a result the number of transferred packets is reduced. © 2005 - 2010 JATIT. All rights reserved.
In this paper a model of negotiation in multi-agent systems is presented. The overall aim of the model is to increase the speed of agents' agreements in a way that, in some cases, also improves the obtained utility (in comparison with ordinary methods). Based on the proposed method, each agent, as one side of a negotiation, can use its peripheral issues (issues which are available in its optimal agenda but are not discussed in the other agents' optimal agendas) to reach agreement in a shorter time. In this model, which is in fact a competitive-cooperative one, each agent accepts to concede a privilege to its competitor (cooperative) and in return receives another privilege from the competitor (competitive). This is done in a way that not only brings about no utility loss, but also increases utility in some cases, because the probable loss is compensated through the received privilege. We aim to provide these privileges by using peripheral issues and argument-based methods. © 2009 IADIS.
Today, the Web is a place for offering single services and developing service compositions. The increasing number of service providers has introduced the challenge of QoS-aware selection of web service compositions. In such compositions, appropriate web services are selected among several function-equivalent available services, in such a way that the resulting composite web service has optimal quality and satisfies the user's quality requirements. In this paper, the problem of QoS-aware selection of a web service composition is modeled as an optimization problem, and a variation of the Harmony Search algorithm is used to find a near-optimal composition that satisfies the user's local and global constraints on quality attributes. Compared with other approaches, the proposed method works faster and more accurately. © 2009 IEEE.
Web services provide easy access to functions or data with acceptable cost for organizations. Web services widely cooperate to produce large distributed programs and invite users to enter and access the integrated system, regardless of who these persons are. Users may be good or bad customers or partners who want to damage the system or its data, so preventing unauthorized access is one of the important security problems in using web services. One of the existing security problems in web services is the weakness of access control systems: different organizations have different roles, so mapping user attributes or roles between different systems is difficult. By using methods of regimentation, trust, publishing roles based on jobs, local role mapping, role-based access control and attaching assertions to requests, a method is presented for overcoming this problem and improving access control in web services. © 2009 IEEE.
In the field of software architecture, there has been a paradigm shift from describing the outcome of the architecting process, mostly in terms of components and connectors (know-what), to documenting the architectural design decisions and their rationale (know-how) which lead to the production of an architecture. This paradigm shift has resulted in the emergence of various models and related tools for capturing, managing and sharing architectural design decisions and their rationale explicitly. This paper analyzes existing architectural design decision models and provides a criteria-based comparison of the tools that support these models. The major contribution of this paper is twofold: to show that all of these models have a consensus on capturing the essence of an architectural design decision, and to clarify the major differences among the tools and show which desired features are missing in them. © 2009 IEEE.
International Review on Computers and Software (discontinued) (18286003) 4(6)pp. 672-683
In recent years, much effort has been put into finding semantic associations between items. One type of these associations is the complementary association between items, in which the use of one item is interrelated with the use of an associated or paired item such that demand for one generates demand for the other. This association has many applications in the fields of economics and marketing. This paper presents a novel contribution in this area, proposing an automatic and unsupervised method for acquiring complementary associations between products in a product catalog, framed in the context of domain ontology learning, using the Web as corpus. The paper also discusses how the obtained associations can be automatically evaluated against WordNet and presents encouraging results for several categories of products. © 2009 Praise Worthy Prize S.r.l. - All rights reserved.
Journal of Applied Sciences (discontinued) (18125654) 9(6)pp. 1114-1120
In this study, the Fault Tolerance CORBA (FT-CORBA) structure, used for supporting fault-tolerant programs, is reviewed along with the relevant important parameters, including the replication style and the number of replicas, which play a major role in improving performance and making it adaptive to real-time distributed systems. Based on these specifications, a structure adaptive to real-time systems with higher performance than the FT-CORBA structure has been built; finally, the implementation of this structure, the determination of the number of replicas and the object replication style, and the significance of the related parameters have been investigated. © 2009 Asian Network for Scientific Information.
The work presented in this paper is an application of temporal data mining for discovering hidden knowledge from a medical dataset. Medical data is temporal in nature, and therefore conventional data mining techniques are not suitable. The dataset contains the medical records of pregnant mothers; the structure of these records is a chain of observations taken at different times, and in each observation a set of clinical parameters is recorded by midwives. The aim of this paper is to mine temporal relational rules from this set of temporal interval data that can be used in the early prediction of risk in patients. In the first part of this study, a pre-processing technique is used to produce temporal interval data from the primary structure of the medical records. Three different analyses are studied in the preprocessing phase due to the complexity of medical records and the differences in the sequence of observed symptoms in various diseases. In the next phase, the mining algorithm is used to extract temporal rules. The algorithm is based on Allen's theory of temporal relations, and the rules are represented as directed acyclic graphs. The generated rules can be used in the diagnosis of high-risk phenomena in antenatal care. Mining medical data for this case is very significant, as many of the current maternal deaths or births of premature newborns might be prevented by prediction and early detection of high-risk patients. © 2009 IEEE.
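As a reference for the kind of primitive such temporal rule mining builds on, the sketch below classifies the Allen-style relation between two (start, end) intervals. Only one direction of each asymmetric relation is named; the symmetric counterparts (after, met-by, etc.) are folded together, and the example intervals are invented.

```python
# Small sketch of Allen-style interval relations between two (start, end) events.
def allen_relation(a, b):
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:
        return 'before'
    if a2 == b1:
        return 'meets'
    if a1 == b1 and a2 == b2:
        return 'equals'
    if a1 < b1 and a2 > b2:
        return 'contains'
    if a1 > b1 and a2 < b2:
        return 'during'
    if a1 == b1 and a2 < b2:
        return 'starts'
    if a1 > b1 and a2 == b2:
        return 'finishes'
    if a1 < b1 and b1 < a2 < b2:
        return 'overlaps'
    return 'other'   # symmetric cases (after, met-by, overlapped-by, ...) folded together

print(allen_relation((1, 4), (4, 9)))   # meets
print(allen_relation((2, 6), (1, 9)))   # during
```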
Information Technology Journal (18125646) 8(8)pp. 1221-1227
The goal of grid computing is to achieve the sharing of all kinds of resources between organizations. Auction models are a source of solutions to the challenge of resource allocation in the grid, since they can guarantee the interests of grid participants with fairness and efficiency. In this study, we modify the bidding stage using a signcryption model, and a new definition of grid auction fairness based on communication network measurement is presented. A first-price sealed-bid auction (FPA) is used for resource management with the new methods. The SimGrid simulation framework, which supports auction protocols, is used to evaluate results from the users' perspective as well as from the resources' perspective. The results showed that the new model behaves well in a grid environment and that the security and fairness of the auction model increase with this method. © 2009 Asian Network for Scientific Information.
With the widespread use of distributed systems in various applications, having a fault-tolerant structure is a significant property, and this is even more important in the design of real-time distributed systems. Although middleware such as CORBA is used in the design of such systems, programs capable of satisfying real-time and fault-tolerance specifications at the same time are not supported by it. In this paper, the FT-CORBA structure, used for supporting fault-tolerant programs, is reviewed along with the relevant important parameters, including the replication style and the number of replicas, which play a major role in improving performance and making it adaptive to real-time distributed systems. Based on these specifications, a structure adaptive to real-time systems with higher performance than FT-CORBA has been built; finally, the implementation of this structure, the determination of the number of replicas and the replication style, and the significance of the related parameters have been investigated.
The emerging wireless and mobile networks have extended electronic commerce to another research and application subject: mobile commerce. This creates new opportunities for customers to conduct business from any location at any time. One of the important uses of mobile applications is transforming the mobile phone into a mobile wallet with digital cash that supports both anonymity (like real cash) and security [3]. Hence, in this article we propose a scheme that supports m-commerce transactions and provides full anonymity for mobile users. Our scheme is based on 160-bit ECC, whose security strength is equivalent to 1024-bit RSA. The paper describes the model, infrastructure and details of the protocol. It also discusses applying ECC to the protocol, some implementation issues and restrictions in mobile payment systems. © 2008 IEEE.
A mobile database system is a distributed system based on the client-server paradigm. Recovery in mobile database systems is more complex than conventional database recovery because of the unlimited geographical mobility of mobile hosts, which makes it tricky to store the application log and access it for recovery. On the other hand, if logging and checkpointing are combined, the cost of recovery improves and independent recovery is supported. Checkpointing is a time-consuming process because, in order to ensure global checkpoint consistency, all processes must take checkpoints; this sometimes leads to unnecessary checkpointing and the exchange of many messages. Obviously, reducing the time spent on log unification or checkpointing reduces the recovery time. This is possible by collecting the list of dependent processes that need to take checkpoints and embedding this list in a mobile agent that moves with the mobile host. This paper presents a log management and low-latency non-blocking checkpointing scheme which uses a mobile-agent-based framework to reduce recovery time; mobile agents are very suitable for this scheme because of their particular properties. In our scheme, the checkpoint algorithm forces a minimum number of processes, namely the dependent processes, to take checkpoints. We compare the performance of our scheme with previous schemes and show that our scheme reduces overall recovery time. © 2009 IEEE.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (03029743) 5357pp. 6-13
Applying coordination mechanisms to handle the interdependencies that exist between agents in multi-agent systems (MASs) is an important issue. In this paper, a two-level MAS modeling scheme and a language to describe a MAS plan based on the interdependencies between agents' plans are proposed. Initially, a generic study of the possible interdependencies between agents in MASs is presented, followed by the formal modeling (using Colored Petri Nets) of coordination mechanisms for those dependencies. These mechanisms control the dependencies between agents to avoid unsafe interactions when individual agents' plans are merged into a global multi-agent plan. This separation, managed by the coordination mechanisms, offers more powerful modularity in MAS modeling. © 2008 Springer Berlin Heidelberg.
With large numbers of geographically dispersed clients, a centralized approach to Internet-based application development is neither scalable nor dependable. This paper presents a decentralized approach to dependable Internet-based application development, consisting of a logical structuring of collaborating sub-systems of geographically separated replicated servers. Two implementations of an Internet auction, one using a centralized approach and the other using our decentralized approach, are described. To evaluate the scalability of the two approaches, a number of experiments are performed on these implementations and the results are presented here.