Journal of Supercomputing (15730484) 81(7)
Mobile Crowd Sensing (MCS)-based spectrum monitoring has emerged as a way to check the status of the spectrum for dynamic spectrum access. For privacy-preserving purposes, spectrum sensing reports may be sent anonymously. However, anonymous submission of reports increases the probability of fake reports by malicious participants. Also, a fair reward is necessary to encourage honest participants, which requires taking each participant's reputation into account. In this research, a method is presented for MCS-based spectrum monitoring that uses Hyperledger Fabric and Identity Mixer (Idemix). This framework overcomes security challenges such as providing anonymity of the participants, identifying malicious participants, detecting intentional and unintentional incorrect reports, and providing a secure protocol to reward participants. An intuitive evaluation of the security features of the proposed method confirms that it withstands key threats, such as de-anonymization, participant misbehavior, privacy-compromising collusion among system entities, and reputation manipulation attacks. Also, numerical evaluations show that the proposed method is superior to the similar centralized method in terms of delay when the number of participants is sufficiently large. Specifically, it achieves an average improvement of approximately 39% in scenarios involving 1000 to 2000 participants, and more than a twofold reduction in delay for the case with 2000 participants. Notably, this enhancement comes without a substantial increase in signaling overhead, which remains only slightly more than double that of the centralized method. Moreover, simulations show that the proposed method can successfully distinguish malicious participants from the honest ones in most scenarios. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025.
Expert Systems with Applications (09574174) 262
Generative Adversarial Networks (GANs) do not inherently ensure the privacy of their training datasets and may memorize sensitive details. To protect training data against such leakage, various privacy-preserving GAN mechanisms have been proposed. Despite the different approaches and their characteristics, advantages, and disadvantages, a systematic review of them has been lacking. This paper first presents a comprehensive survey of privacy-preserving mechanisms and offers a taxonomy based on their characteristics. The survey reveals that many of these mechanisms modify the GAN learning algorithm to enhance privacy, highlighting the need for theoretical and empirical analysis of the impact of these modifications on GAN convergence. Among the surveyed methods, ADAM-DPGAN is a promising approach that ensures differential privacy in GANs for both the discriminator and the generator networks when using the ADAM optimizer, by introducing appropriate noise based on the global sensitivity of the discriminator parameters. Therefore, this paper conducts a theoretical and empirical analysis of the convergence of ADAM-DPGAN. In the presented theoretical analysis, assuming that the simultaneous/alternating gradient descent method with the ADAM optimizer converges locally to a fixed point and its operator is L-Lipschitz with L < 1, the effect of ADAM-DPGAN-based noise disturbance on local convergence is investigated and an upper bound for the convergence rate is provided. The analysis highlights the significant impact of the differential privacy parameters, the number of training iterations, the discriminator's learning rate, and the ADAM hyper-parameters on the convergence rate. The theoretical analysis is further validated through empirical analysis. Both theoretical and empirical analyses reveal that a stronger privacy guarantee leads to slower convergence, highlighting the trade-off between privacy and performance.
The findings also indicate that there exists an optimal value for the number of training iterations with respect to the privacy needs. The optimal settings for each parameter are calculated and outlined in the paper. © 2024 Elsevier Ltd
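The role of the Lipschitz constant in such a last-iterate analysis can be illustrated with the standard perturbed-contraction bound; this is a generic sketch under the stated L-Lipschitz assumption, not the paper's exact result:

```latex
% T: training operator with Lipschitz constant L < 1; \theta^*: its fixed point;
% \xi_k: per-iteration privacy noise with \|\xi_k\| \le \sigma.
\theta_{k+1} = T(\theta_k) + \xi_k
\quad\Longrightarrow\quad
\|\theta_k - \theta^*\| \;\le\; L^{k}\,\|\theta_0 - \theta^*\| \;+\; \frac{1 - L^{k}}{1 - L}\,\sigma .
```

The first term decays geometrically at rate L, while the noise contributes a residual floor of about sigma/(1-L); a stronger privacy guarantee means a larger sigma and hence slower effective convergence, consistent with the trade-off reported above.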
Information and Software Technology (09505849) 187
Context: Nowadays, developing data analysis software for the IoT domain faces challenges such as complexity, repetitive tasks, and developers’ lack of domain knowledge. To address these issues, methodologies like CRISP-DM have been introduced, providing structured guidance for data analysis. Objectives: Despite the availability of structured methodologies, building data analysis pipelines still involves managing complexity and redundancy. Model-driven approaches have been proposed to tackle these challenges but often fail to address all stages of the data analysis workflow and the interdependencies between stages and datasets comprehensively. This research introduces RAIDAD, a model-driven framework that addresses these gaps by covering all phases of the CRISP-DM methodology. Methods: RAIDAD includes a domain-specific modeling language for IoT data analysis, a graphical modeling editor, a code generation transformation engine, and a data model assistant for seamless model-data integration. These components are delivered as an Eclipse plugin. Results: The evaluation of RAIDAD is two-fold. First, a comparative operational evaluation with RapidMiner and ML-Quadrat shows RAIDAD achieves a 9.6% improvement in usability and productivity over RapidMiner and a 23% improvement over ML-Quadrat. Second, RAIDAD is compared to a general-purpose programming language, demonstrating its superiority in reducing effort and production time for IoT data analysis software. Conclusion: This comprehensive framework ensures an efficient and organized approach to data analysis, addressing key challenges in the IoT domain. Future research will focus on expanding RAIDAD's support for a wider range of data analysis and machine learning algorithms, enhancing automation capabilities, and incorporating continuous user feedback to ensure the framework evolves in line with emerging needs. © 2025 Elsevier B.V.
IEEE Access (21693536) 13, pp. 6584-6593
Using a multi-hop routing protocol for unicast vehicle-to-vehicle (V2V) communications is crucial to facilitate data relaying from one vehicle to a distant one. The dynamic behavior of vehicular ad-hoc networks (VANETs) and unstable relays often cause frequent disconnections or switchovers, leading to higher latency, diminished reliability, and increased resource use. The stability of these routes depends not just on link connectivity, but also on the availability of adequate resources. Although numerous studies have explored traditional VANET routing protocols to tackle these issues, they often neglect the critical aspect of resource availability. In this study, we focus on a V2V routing method that considers resource availability, assuming the use of a geo-based resource allocation framework. In the proposed resource-aware approach, we utilize deep learning to predict vehicle trajectories and traffic load in various areas. The proposed resource-aware routing protocol aims at improving both spectrum efficiency and route stability by employing distance between vehicles, traffic load, and resource availability of areas to manage emergency messages (EMs) more efficiently and reduce unnecessary rebroadcasting and congestion. Our simulation results reveal that our proposed method surpasses traditional protocols in terms of packet reception ratio (PRR), latency, and average hop count. © 2013 IEEE.
Computer Communications (1873703X) 241
The emergence of 6th Generation (6G) cellular networks presents an opportunity to redefine the Key Performance Indicators (KPIs) necessary for high-quality communications in the 2030s. 6G aims to innovate through novel architectural designs and the utilization of higher frequency bands, alongside incorporating aerial coverage to establish a three-dimensional network framework, in contrast to its predecessor, 5G. Central to this innovation are Unmanned Aerial Vehicles (UAVs), which can be used as Drone Base Stations (DBSs). Despite the energy required for UAVs to hover, they can significantly decrease energy consumption and environmental impact by replacing terrestrial cellular infrastructure and switching off underutilized or inefficient Small Base Stations (SBSs) in Ultra-Dense Networks (UDNs). This work presents an energy-efficient UAV-assisted On-Off switching methodology that, in contrast to previous studies, considers the energy usage of DBSs' backhaul links. By optimizing DBS placement, user association, and power control, the approach aims to improve energy efficiency. The problem is formulated as a Mixed-Integer Nonlinear Programming (MINLP) optimization, which is then decomposed into three manageable sub-problems that are solved using proposed algorithms. This methodological framework not only alleviates the complexity associated with the original problem but also enables practical implementations in energy-constrained UAV systems, ultimately leading to improved energy efficiency compared to existing approaches. Simulation results demonstrate about 90% improvement in energy efficiency compared to prior studies, even when fewer SBSs are switched off. Furthermore, the proposed approach exhibits 95% better energy efficiency than previous methods when the serving time of UAVs increases. © 2025 Elsevier B.V.
Journal of Supercomputing (15730484) 81(9)
With the increasing number of internet of vehicles devices and the exceptional growth in data traffic, the licensed spectrum faces limitations in meeting the growing demand for cellular vehicle-to-everything (C-V2X) applications. Opportunistic utilization of unlicensed bands is regarded as a solution to this issue. However, the use of unlicensed bands by cellular technologies poses challenges for coexisting with other unlicensed systems. This research examines how 5G new radio operating in the unlicensed band (5G NR-U) can coexist with Wi-Fi systems, assuming that 5G NR-U exploits the duty cycle method for managing coexistence. An optimization problem is established that exploits the estimated load of Wi-Fi systems to enhance the total throughput of the cellular network while considering the rate constraint of Wi-Fi users. In most coexistence schemes, the cellular system obtains knowledge of the Wi-Fi traffic through a given signaling channel. However, this signaling channel is not always available in practice. As a solution, this paper proposes an approach that exploits a federated convolutional neural network (CNN) to gauge the intensity of Wi-Fi traffic by analyzing unlicensed channel activity. Based on the CNN's prediction, a Q-learning based algorithm is then developed to solve the resource allocation problem and adjust the parameters of the duty cycle according to the estimated Wi-Fi load and the C-V2X network status. Simulation results demonstrate that, even without signaling exchanges, the suggested approach enhances the throughput of the cellular network by about 35% on average in scenarios with medium traffic load compared to the previous method, while the required rates of Wi-Fi users are not considerably violated. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025.
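A tabular sketch of the kind of Q-learning loop described above can make the idea concrete. Everything below is an illustrative assumption: the states are discretized Wi-Fi load levels (as a CNN might estimate), the actions are candidate NR-U duty-cycle fractions, and the reward model is a toy stand-in for the paper's throughput and rate-constraint objective:

```python
import random

# Hypothetical sketch of tabular Q-learning for duty-cycle selection.
# States: discretized Wi-Fi load; actions: NR-U duty-cycle fractions.
STATES = ["low", "medium", "high"]   # estimated Wi-Fi load level
ACTIONS = [0.2, 0.4, 0.6, 0.8]       # fraction of time NR-U occupies the channel

def reward(load, duty):
    """Toy reward: cellular throughput grows with the duty cycle,
    minus a penalty when heavily loaded Wi-Fi is starved of airtime."""
    wifi_need = {"low": 0.2, "medium": 0.5, "high": 0.8}[load]
    penalty = max(0.0, duty - (1.0 - wifi_need))  # Wi-Fi airtime shortfall
    return duty - 2.0 * penalty

def train(episodes=20000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)                        # load varies over time
        if rng.random() < eps:                        # epsilon-greedy exploration
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        r = reward(s, a)
        s_next = rng.choice(STATES)
        best_next = max(q[(s_next, x)] for x in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
    # learned policy: best duty cycle per load level
    return {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}

policy = train()
```

Under this toy reward, the learned policy backs off the duty cycle as the estimated Wi-Fi load grows, which is the qualitative behavior the coexistence scheme aims for.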
Journal of Supercomputing (15730484) 80(2), pp. 2067-2127
In Internet-of-Things (IoT)-based healthcare systems, real-time healthcare data are gathered from patients' resource-limited sensors and transferred to end-users through gateways and healthcare service providers. Patient privacy is a main challenge in these systems. Although privacy has already been considered in IoT-based healthcare systems, even the best centralized approaches still suffer from collusion attacks. Therefore, some researchers have proposed blockchain-based solutions to protect patients' privacy in IoT-based healthcare systems. However, those methods assume that some of the entities along the end-to-end communication path from patients' sensors to the end-users are trusted, or even assume no privacy threats from internal attackers. Therefore, there is a lack of a blockchain-based approach in IoT-based healthcare systems that provides privacy for patients under the assumption that all system entities are untrusted. To overcome these challenges, in this paper, we leverage a three-layered hierarchical blockchain, the zero-knowledge proof (ZKP), and the ring signature method to achieve data and location privacy of patients against both internal and external attackers. In addition, the proposed method provides anonymous authentication, authorization, and scalability, which are essential features in healthcare systems. Intuitive and formal security analyses demonstrate the resilience of our scheme against various attacks such as denial of service (DoS), modification, mining, storage, and replay attacks. The proposed method is compared to a recent blockchain-based method and also to a centralized privacy-preserving scheme. Compared to the similar blockchain-based method, the computational overhead and delay of the authentication and data transfer phase are about 35% and 37% higher, respectively.
In exchange, the proposed method reduces the memory usage of gateways by about 55% and diminishes the computational overhead and delay of the information access phase by about 30% and 33%, respectively, compared to the previous blockchain-based method. Therefore, the proposed method does not increase overhead and end-to-end delay considerably compared to the previous blockchain-based scheme, while some other performance metrics and security features are improved. Moreover, compared to a previous centralized method, the proposed approach shows more than a 25% decrease in communication overhead and a 22% improvement in memory usage of gateways, on average. Although the use of the blockchain imposes more computational overhead on service providers and may increase latency compared to the centralized approach (depending on the type of blockchain technology used), these drawbacks are a negligible price for the increased security. © 2023, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
Wireless Personal Communications (1572834X) 135(1), pp. 593-617
The ever-increasing demand for wireless communications, especially in sub-6 GHz frequency ranges, has led to radio resource scarcity, for which opportunistic spectrum access is the main solution. An online spectrum decision and prediction system can assist cognitive radio users in seeking idle frequency bands for opportunistic use. However, previous studies have not considered using crowd-sensing to collect spectrum and contextual information to present a hybrid spectrum decision/prediction service. In this paper, we propose a novel cloud-based service for spectrum availability decision and prediction, which brings more contextual parameters into the decision with the aim of improving decision quality. The location, time, and velocity of sensing nodes, the density of buildings around sensing nodes, and the weather status are considered as context information. In the proposed method, spectrum availability data and some of the mentioned context parameters are collected through crowd-sensing. Artificial neural network (ANN) classifiers are suggested to decide on the status of spectrum bands in the proposed architecture. We also propose a spectrum prediction service in our architecture to predict the future state of spectrum bands, and recommend ANN and k-nearest neighbor algorithms for prediction. The proposed architecture has been implemented and evaluated. Experimental results show that using the addressed contextual information improves the quality of the spectrum availability decision. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
In future wireless networks, terrestrial, aerial, space, and maritime wireless networks are integrated into a unified network to meet the needs of a fully connected global network. Vehicular communication has become one of the challenging applications of wireless networks. In this article, we address radio resource management in Cellular Vehicle-to-Everything (C-V2X) networks using Unmanned Aerial Vehicles (UAVs) and Non-Orthogonal Multiple Access (NOMA). The goal is to maximize the spectral efficiency of vehicular users in C-V2X networks under a fronthaul constraint. To solve this problem, a two-stage approach is utilized. In the first stage, vehicles in dense areas are clustered based on their geographical locations, predicted locations, and speeds; UAVs are then deployed to serve the clusters. In the second stage, NOMA groups are formed within each cluster and radio resources are allocated to vehicles based on the NOMA groups. An optimization problem is formulated and a suboptimal method is used to solve it. The performance of the proposed method is evaluated through simulations, where the results demonstrate the superiority of the proposed method in spectral efficiency, min point, and distance. © 2024 IEEE.
Ad Hoc Networks (15708705) 154
Restricted Access Window (RAW) and group-based medium access are new features exploited by the IEEE 802.11ah standard to resolve the massive access problem in Internet of Things (IoT) applications. Nevertheless, inefficient device grouping and inappropriate contention resolution in a RAW may still result in collisions and consequently degrade QoS, energy efficiency, and channel utilization. In existing works, contention resolution schemes schedule all failed devices to retransmit in the next slot of the RAW, which increases the probability of collisions. In this paper, we propose a new retransmission scheme that allows the collided devices of each RAW slot to have another transmission chance in one of the next µ upcoming slots, chosen randomly. We represent the proposed retransmission method via a probabilistic model and formulate a problem based on it to adjust the number of slots of each RAW, with the aim of increasing overall energy efficiency and channel utilization subject to the delay constraints of devices. To solve the formulated problem, a meta-heuristic algorithm is used to adjust the number of RAW slots, and hence the size of each RAW, according to the results of the device grouping algorithm. To better exploit the capabilities of the proposed idea, we suggest a load-aware and distance-based device grouping algorithm that not only considers the hidden node problem but also attends to the load balance of the groups. Simulation results show that the proposed retransmission and RAW adjustment scheme, alongside the proposed device grouping algorithm, improves energy efficiency and channel utilization by 25% and 17%, respectively, and reduces the access delay by 31% on average, compared to the previous retransmission method. © 2023 Elsevier B.V.
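The retransmission idea, letting each collided device retry in one of the next µ slots chosen uniformly instead of all piling into the very next slot, can be illustrated with a toy Monte-Carlo sketch. The device counts, slot counts, and collision model below are illustrative assumptions, not the paper's simulation setup:

```python
import random

# Toy Monte-Carlo comparison: "all collided devices retry in the next slot"
# versus "each collided device retries in one of the next mu slots".
def simulate(n_devices=30, n_slots=20, mu=4, spread=True, trials=200, seed=1):
    rng = random.Random(seed)
    succeeded = 0
    for _ in range(trials):
        # schedule[i] = devices transmitting in RAW slot i
        schedule = [[] for _ in range(n_slots)]
        for d in range(n_devices):                 # initial uniform assignment
            schedule[rng.randrange(n_slots)].append(d)
        done = 0
        for i in range(n_slots):
            txs = schedule[i]
            if len(txs) == 1:
                done += 1                          # lone transmitter succeeds
            elif len(txs) > 1:                     # collision: reschedule all
                for d in txs:
                    offset = rng.randint(1, mu) if spread else 1
                    if i + offset < n_slots:       # past the window => failed
                        schedule[i + offset].append(d)
        succeeded += done
    return succeeded / (trials * n_devices)        # mean success ratio

baseline = simulate(spread=False)   # everyone retries in the very next slot
proposed = simulate(spread=True)    # retries spread over the next mu slots
```

Spreading retries thins out contention in each slot, so the success ratio improves, which is the qualitative effect the probabilistic model above captures.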
Physical Communication (18744907) 66
With the growing demand for Internet of Things (IoT) applications, supporting massive access to the medium is a necessary requirement in 5G cellular networks. Accommodating the stringent requirements of Ultra-Reliable Low Latency Communications (URLLC) is a challenge in massive access to the medium. The random-access procedure is one of the most challenging issues in massive IoT (mIoT) networks with URLLC requirements, as a high number of channel access requests results in high channel access latency or low reliability. In previous works, some solutions have been proposed to address this challenge, including grant-free access, priority-based access, and grouping nodes to restrict random access requests to group leaders. In particular, the previous grouping-based idea clusters devices with similar reactions to an event into a group, which is not always applicable to various IoT applications. This research proposes a novel device grouping scheme to improve the random-access procedure of mIoT devices with URLLC requirements. In the proposed method, device grouping is accomplished based on the analysis of devices' traffic. A similarity index is used to obtain the similarity of time series made from historical traffic patterns of devices, and then an innovative algorithm is proposed to group the devices based on this index. Grouping devices by similar traffic patterns provides access to the medium with less complexity and more efficiency for a large number of devices. The performance of the proposed approach is evaluated using simulations and a real traffic dataset. The evaluation results show higher suitability of the proposed method compared to the baseline mechanism of LTE and the previous method in terms of access failures (which affect delay and reliability) and energy consumption. For a usual setting, the channel access failure decreases by about 94% compared to the previous method and by 0.88% compared to LTE.
The energy consumption also improves by about 1.8% compared to LTE and by 1.2% compared to the previous method. Moreover, the results show that the proposed method is appropriate for IoT applications with regular traffic patterns. © 2024 Elsevier B.V.
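As a rough illustration of grouping by traffic similarity, the sketch below uses cosine similarity between historical traffic time series and a greedy threshold rule; both the index and the algorithm are simplified stand-ins for the paper's similarity index and grouping algorithm:

```python
import math

def cosine(a, b):
    """Cosine similarity between two traffic time series."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def group_devices(traffic, threshold=0.9):
    """Greedily assign each device to the first group whose representative
    (first member) has traffic similarity above the threshold."""
    groups = []
    for dev, series in traffic.items():
        for g in groups:
            if cosine(series, traffic[g[0]]) >= threshold:
                g.append(dev)
                break
        else:
            groups.append([dev])          # no similar group: start a new one
    return groups

# Toy hourly traffic: meters report at night, cameras peak at midday.
traffic = {
    "meter-1":  [5, 5, 5, 0, 0, 0],
    "meter-2":  [4, 6, 5, 0, 1, 0],
    "camera-1": [0, 0, 1, 9, 8, 9],
    "camera-2": [0, 1, 0, 8, 9, 8],
}
groups = group_devices(traffic)
```

Devices whose historical traffic looks alike end up in the same group, so their channel access can be coordinated jointly, as the abstract describes.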
Computer Communications (1873703X) 214, pp. 270-283
Full-duplex ultra-dense network (FD-UDN) is a promising technology in 5G mobile networks for handling the increase in network capacity. However, interference management and energy efficiency (EE) are two of the most important challenges that must be addressed. With small base station (SBS) densification, employing a sleeping strategy coupled with resource management becomes an effective approach to manage interference and power consumption in FD-UDN. To the best of our knowledge, this aspect has not been addressed in previous work. To this end, we develop a framework to optimize BS sleeping and resource management with the aim of maximizing energy efficiency while maintaining the quality of service (QoS) requirements of users. The problem is formulated as a non-convex mixed-integer non-linear programming problem, which is difficult to handle. Employing the Dinkelbach method, the objective function of the problem is converted to an equivalent parametric subtractive form. Then, the problem is decoupled into two sub-problems: user association and resource allocation, as well as BS on/off switching. The former is solved using the iterative reweighted lq-norm minimization (IRM) method, and the latter is solved using the Lagrangian dual method and the constrained concave-convex procedure (CCCP). The simulation results demonstrate that the proposed method is more effective than traditional ones in simultaneously improving the EE, reducing power consumption, and keeping fewer SBSs active, especially at high network loads. © 2023 Elsevier B.V.
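The Dinkelbach step used here, replacing the fractional objective f(x)/g(x) with the parametric subtractive form f(x) - q g(x), can be sketched on a toy one-dimensional energy-efficiency problem. The rate and power models below are illustrative assumptions, not the paper's:

```python
import math

def dinkelbach(f, g, candidates, tol=1e-9, max_iter=100):
    """Maximize f(x)/g(x) over a finite candidate set via Dinkelbach iterations."""
    q = 0.0
    for _ in range(max_iter):
        # inner problem: maximize the subtractive form for the current q
        x = max(candidates, key=lambda x: f(x) - q * g(x))
        if abs(f(x) - q * g(x)) < tol:   # F(q) = 0  =>  q is the optimal ratio
            return x, q
        q = f(x) / g(x)                  # Dinkelbach update: q rises monotonically
    return x, q

# Toy example: maximize rate/power (bits per joule) over a grid of power levels.
f = lambda p: math.log2(1.0 + 4.0 * p)   # achievable rate
g = lambda p: p + 0.5                    # transmit plus circuit power
powers = [0.1 * k for k in range(1, 51)]
p_star, ee_star = dinkelbach(f, g, powers)
```

Each iteration solves a (much easier) subtractive problem; the sequence of q values climbs to the optimal ratio, which is exactly why the fractional EE objective becomes tractable.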
2025 29th International Computer Conference, Computer Society of Iran, CSICC 2025 pp. 241-247
Predictive maintenance is a critical approach in modern industries, aiming to forecast equipment failures and reduce downtime by leveraging operational data. Traditional methods, such as time series analysis, struggle to capture complex temporal dependencies in large-scale datasets. In this study, we propose an innovative solution that integrates Long Short-Term Memory (LSTM) networks with an adaptive windowing strategy for predictive maintenance. Unlike conventional methods that rely on fixed window sizes, our approach dynamically adjusts the window size based on the data's characteristics, optimizing the temporal context provided to the model. We apply this method to the Microsoft Azure predictive maintenance dataset from Kaggle and demonstrate that the adaptive window size significantly enhances the precision of failure predictions. This research highlights the potential of combining LSTM with window size optimization to improve the accuracy and efficiency of predictive maintenance models in real-world industrial applications. © 2024 IEEE.
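One simple way to realize data-driven window sizing, offered only as an illustrative sketch since the paper's exact criterion is not reproduced here, is to pick the LSTM input window from the lag at which the series' autocorrelation peaks:

```python
import math

def autocorr(x, lag):
    """Sample autocorrelation of series x at the given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    return cov / var if var else 0.0

def adaptive_window(series, min_w=2, max_w=48):
    """Choose the window size as the most self-similar lag in the series."""
    return max(range(min_w, max_w + 1), key=lambda lag: autocorr(series, lag))

# Toy sensor signal with a period of 24 samples (e.g., hourly telemetry).
series = [math.sin(2 * math.pi * t / 24) for t in range(240)]
w = adaptive_window(series)
```

For this periodic toy signal the selected window matches the signal's period, so the model would see one full cycle of context per input, which is the kind of data-dependent sizing the approach argues for.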
The high ability of generative models to generate synthetic samples with a distribution similar to that of real data samples brings many benefits in various applications. However, one of the most important elements in the success of generative models is the data used to train them, and preserving the privacy of this data is necessary. Various studies have shown that the high capacity of generative models leads them to memorize details of the training data, and different attacks have been conducted against generative models that infer information about the training data from the trained model. Also, many privacy-preserving mechanisms have been proposed to defend against these attacks. In this chapter, after introducing the topic, the privacy attacks against generative models and relevant defense mechanisms are discussed. In particular, the privacy attacks and related privacy-preserving methods are categorized and discussed. Then, some challenges and future research directions are examined. © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
Applied Intelligence (0924669X) 53(9), pp. 11142-11161
Privacy-preserving data release is a major concern of many data mining applications. Using Generative Adversarial Networks (GANs) to generate an unlimited number of synthetic samples is a popular replacement for data sharing. However, GAN models are known to implicitly memorize details of the sensitive data used for training. To this end, this paper proposes ADAM-DPGAN, which guarantees differential privacy of the training data for GAN models. ADAM-DPGAN specifies the maximum effect of each sensitive training record on the model parameters at each step of the learning procedure when the Adam optimizer is used, and adds appropriate noise to the parameters during training. ADAM-DPGAN leverages the Rényi differential privacy accountant to track the spent privacy budget. In contrast to prior work, by accurately determining the effect of each training record, this method can distort parameters more precisely and generate higher-quality outputs, while provably preserving the convergence properties of its non-private GAN counterparts. Through experimental evaluations on different image datasets, ADAM-DPGAN is compared to previous methods, and its superiority is demonstrated in terms of visual quality, realism and diversity of generated samples, convergence of training, and resistance to membership inference attacks. © 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
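A heavily simplified sketch of the mechanism's shape: one Adam step has a bounded per-coordinate magnitude (roughly the learning rate), which bounds each record's influence on the parameters, and Gaussian noise scaled to that sensitivity is then added to the updated parameters. The calibration below uses the classic Gaussian-mechanism formula rather than the paper's Rényi accountant, and the scalar toy loss is ours:

```python
import math
import random

def adam_step(theta, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One standard (scalar) Adam update."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return theta - lr * m_hat / (math.sqrt(v_hat) + eps), m, v

def private_adam_step(theta, grad, m, v, t, lr=0.01, epsilon=1.0, delta=1e-5,
                      rng=random.Random(0)):
    theta, m, v = adam_step(theta, grad, m, v, t, lr=lr)
    sensitivity = lr  # assumed per-coordinate bound on one record's influence
    sigma = sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon
    return theta + rng.gauss(0.0, sigma), m, v   # perturb the parameter itself

theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 6):
    grad = 2 * theta      # toy loss theta^2, standing in for discriminator grads
    theta, m, v = private_adam_step(theta, grad, m, v, t)
```

The key point mirrored here is that noise is added to the parameters based on a sensitivity bound derived from the optimizer itself, rather than from clipped gradients.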
ISeCure (20083076) 15(2), pp. 139-153
Using generative models to produce unlimited synthetic samples is a popular replacement for database sharing. The Generative Adversarial Network (GAN) is a popular class of generative models that generates synthetic data samples very similar to the real training data. However, GAN models do not necessarily guarantee training privacy, as these models may memorize details of training data samples. When these models are built using sensitive data, the developers should ensure that the training dataset is appropriately protected against privacy leakage. Hence, quantifying the privacy risk of these models is essential. To this end, this paper focuses on evaluating the privacy risk of publishing the generator network of GAN models. Specifically, we conduct a novel white-box membership inference attack against GAN generators that exploits accessible information about the victim model, i.e., the generator's weights and synthetic samples. In the proposed attack, an auto-encoder is trained to distinguish member from non-member training records. This attack is applied to various kinds of GANs. We evaluate the attack accuracy with respect to various model types and training configurations. The results demonstrate the superior performance of the proposed attack on non-private GANs compared to previous attacks with white-box generator access. The accuracy of the proposed attack is 19% higher on average than similar work. The proposed attack, like previous attacks, performs better on victim models trained with small training sets. © 2020 ISC. All rights reserved.
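The decision rule behind such a reconstruction-based attack can be sketched as follows; the error values are toy stand-ins for the reconstruction errors an attack auto-encoder would produce, and the median threshold is an illustrative choice:

```python
import statistics

def infer_members(errors, threshold):
    """Flag records the auto-encoder reconstructs with low error as members."""
    return {rec: err <= threshold for rec, err in errors.items()}

def attack_accuracy(pred, truth):
    hits = sum(pred[r] == truth[r] for r in truth)
    return hits / len(truth)

# Toy reconstruction errors: training members tend to be reconstructed better.
errors = {"a": 0.02, "b": 0.05, "c": 0.31, "d": 0.40, "e": 0.04, "f": 0.28}
truth  = {"a": True, "b": True, "c": False, "d": False, "e": True, "f": False}

threshold = statistics.median(errors.values())   # simple unsupervised cutoff
pred = infer_members(errors, threshold)
acc = attack_accuracy(pred, truth)
```

The intuition is the same one the attack exploits: a model that memorized its training set reproduces members more faithfully, so low reconstruction error is evidence of membership.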
The widespread use of Internet of Things (IoT) systems results in a large volume of data that needs to be analyzed in real time for efficient usage. Complex Event Processing (CEP) is a technique that enables organizations to analyze data and identify complex and meaningful events. CEP uses patterns written in an Event Pattern Language (EPL) to describe an event. However, producing patterns and rules using EPL remains a significant challenge. This research benefits from data analysis and model-driven engineering to automatically produce CEP rules, allowing domain experts to extract and define patterns automatically even if they are unfamiliar with data analysis or CEP rule design. At the end of the paper, a case problem is solved, resulting in a coverage coefficient of 60%, an average accuracy of 82.5%, and an average precision of 82%. The results show that this combined approach is beneficial in providing a more appropriate response in less time. © 2023 IEEE.
Computer Networks (13891286) 234
The high density of small cells in the Ultra-Dense Network (UDN) has increased the capacity and coverage of Fifth Generation (5G) cellular networks. However, as the number of Small Base Stations (SBSs) increases, energy consumption rises sharply. One suggested method to reduce energy consumption is to manage SBS On/Off switching. Moreover, due to spectrum constraints, Power Control and Resource Allocation (PCRA) are other significant issues in UDN, affecting Energy Efficiency (EE) and Spectrum Efficiency (SE). Recent works in UDN have not presented an optimal SBS On/Off switching and PCRA technique that simultaneously maximizes the EE and the SE while ensuring the Quality of Service (QoS) requirements of User Equipment (UEs). In this paper, a distributed method based on a multi-agent Deep Q-Network (DQN) is proposed to deal with the mentioned challenges simultaneously. Each SBS can thus learn a policy for managing On/Off switching and downlink PCRA using two DQNs. The proposed method seeks to optimize the EE and the SE as well as guarantee the minimum required data rate of UEs. Simulation results show that the proposed method improves the EE and the SE compared to previous solutions. Furthermore, unlike previous distributed approaches that use the UEs as learning agents, the proposed method uses the SBSs as agents. Thus, the signaling overhead and computational complexity of the UEs decrease. © 2023 Elsevier B.V.
IEEE Access (21693536) 11, pp. 82601-82612
Cellular vehicle-to-everything (C-V2X) communications have gained traction as they can improve safe driving, efficiency, and convenience. However, mobility and the rising number of vehicles make efficient resource allocation (RA) more difficult. Different RA schemes have been proposed to share C-V2X resources effectively. Nevertheless, routing-aware RA has not attracted enough attention in the literature. This paper discusses various scenarios where routing-awareness can be employed in RA to improve performance, and proposes a routing-aware RA method that assumes cluster-based routing for vehicle-to-infrastructure (V2I) communications and geo-based RA for vehicle-to-vehicle (V2V) communications. V2I vehicles are grouped using the density-based spatial clustering of applications with noise (DBSCAN) algorithm and are connected to the cluster heads (CHs) using dedicated short-range communications (DSRC). In the supposed scenario, CHs use cellular resource blocks (RBs) to forward vehicles' traffic toward the base station (BS). This paper proposes two heuristic algorithms that enable CHs to use some RBs already assigned to V2V communications for their V2I links without significantly affecting the QoS requirements of V2V connections. The proposed algorithms take into account the formed V2I clusters and their loads, and are consequently routing-aware. Based on simulation results, the proposed algorithms improve spectrum efficiency by about 75% on average while the quality of V2V communications is maintained. © 2013 IEEE.
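A minimal DBSCAN pass over vehicle positions, with arbitrary eps/min_pts values rather than the paper's parameters, shows how V2I vehicles would be grouped before cluster heads are elected:

```python
import math

def dbscan(points, eps=50.0, min_pts=3):
    """Minimal DBSCAN: labels[i] is a cluster id, or -1 for noise."""
    n = len(points)
    neighbors = [[j for j in range(n)
                  if j != i and math.dist(points[i], points[j]) <= eps]
                 for i in range(n)]
    labels = [None] * n           # None = unvisited
    cid = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        if len(neighbors[i]) + 1 < min_pts:
            labels[i] = -1                        # not a core point: noise
            continue
        labels[i] = cid                           # start a new cluster
        frontier = list(neighbors[i])
        while frontier:
            j = frontier.pop()
            if labels[j] == -1:                   # former noise becomes border
                labels[j] = cid
            if labels[j] is not None:
                continue
            labels[j] = cid
            if len(neighbors[j]) + 1 >= min_pts:  # core point: keep expanding
                frontier.extend(neighbors[j])
        cid += 1
    return labels

# Two platoons of vehicles far apart, plus one isolated vehicle (x, y in meters).
vehicles = [(0, 0), (10, 5), (20, 0), (15, 10),
            (500, 500), (510, 505), (520, 500),
            (1000, 0)]
labels = dbscan(vehicles)
```

Each non-noise cluster would then elect a CH to aggregate its members' DSRC traffic toward the BS, which is what makes the subsequent RB reuse routing-aware.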
Personalized QoE has significant implications for businesses in terms of customer satisfaction, loyalty, and revenue generation. By delivering experiences tailored to individual users, businesses can build stronger relationships, improve customer retention, and gain a competitive edge in the marketplace. In this paper, we use a clustering-based approach to enhance personalized QoE assessment via a personalized federated learning technique. To achieve this, we first classify users into different clusters based on some user-related QoE influencing factors. Second, we employ independent personalized federated learning QoE predictors in the clusters to assess the QoE level of the service. We conducted experiments to compare the performance of our method to the traditional personalized federated learning-based QoE assessment approach. The results demonstrate that the proposed approach increases the accuracy of QoE evaluations by about 16% on average. © 2023 IEEE.
Expert Systems with Applications (09574174) 224
Generative Adversarial Networks (GANs) are known to implicitly memorize details of the sensitive data used to train them. To prevent privacy leakage, many approaches have been proposed. One of the most popular is Differentially Private Gradient Descent GANs (DPGD GANs), where the discriminator's gradients are clipped, and an appropriate random noise is added to the clipped gradients. In this article, a theoretical analysis of DPGD GAN convergence behavior is presented, and the effect of the clipping and noise perturbation operators on convergence properties is examined. It is proved that if the clipping bound is too small, it leads to instability in the training procedure. Then, assuming that the simultaneous/alternating gradient descent method is locally convergent to a fixed point and its operator is L-Lipschitz with L<1, the effect of noise perturbation on the last-iterate convergence rate is analyzed. Also, we show that parameters such as the privacy budget, the confidence parameter, the total number of training records, the clipping bound, the number of training iterations, and the learning rate affect the convergence behavior of DPGD GANs. Furthermore, we confirm the effect of these parameters on the convergence behavior of DPGD GANs through experimental evaluations. © 2023 Elsevier Ltd
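The clip-then-perturb operator at the heart of DPGD GANs can be illustrated with a small pure-Python sketch; the noise calibration used here (noise multiplier × clipping bound / batch size) is the common DP-SGD convention and an assumption, not the paper's precise formula:

```python
import math

def dp_sanitize(per_sample_grads, clip_bound, noise_multiplier, rng):
    """DP-SGD-style sanitization sketch: clip each per-sample gradient
    vector to L2 norm <= clip_bound, average the clipped gradients,
    then add Gaussian noise whose scale is tied to the clipping bound."""
    n, dim = len(per_sample_grads), len(per_sample_grads[0])
    clipped = []
    for g in per_sample_grads:
        norm = math.sqrt(sum(v * v for v in g))
        scale = min(1.0, clip_bound / norm) if norm > 0 else 1.0
        clipped.append([v * scale for v in g])
    mean = [sum(g[i] for g in clipped) / n for i in range(dim)]
    sigma = noise_multiplier * clip_bound / n
    return [m + rng.gauss(0.0, sigma) for m in mean]
```

A clipping bound that is too small shrinks every gradient toward zero, which is one intuition for the instability result the article proves.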
The quality of delivering Internet of Things (IoT) traffic in IP networks is of great importance in the IoT era. In this article, a Genetic Algorithm (GA)-based method is proposed to select the routing and scheduling strategy of each IP router to improve the quality of service of IoT traffic. To this aim, we first propose a deep learning-based method to distinguish IoT traffic from non-IoT traffic. The trained model achieves 99% accuracy on test data. Then, distinct scheduling and routing methods are suggested for these two traffic types in network routers. The aim of GA-based strategy selection is to improve the latency and reliability of IoT traffic without compromising the performance of non-IoT traffic. Here, we utilize a set of scheduling algorithms including FIFO, Fair, Weighted Fair, and Priority algorithms to construct GA chromosomes. Also, a set of routing algorithms, i.e., Dijkstra, A∗, BFS, and DFS, are used in the definition of chromosomes. Simulation results demonstrate that the proposed method leads to a significant improvement in the latency and reliability of IoT traffic. © 2023 IEEE.
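The chromosome encoding hinted at above, one (scheduling, routing) pair per IP router, might look like the following sketch; the function names and GA operators are illustrative, and a real fitness function would come from network simulation rather than being shown here:

```python
import random

# candidate strategies per router, matching the sets named in the abstract
SCHEDULERS = ["FIFO", "Fair", "WeightedFair", "Priority"]
ROUTINGS = ["Dijkstra", "AStar", "BFS", "DFS"]

def random_chromosome(n_routers, rng):
    """One gene per IP router: a (scheduling, routing) strategy pair."""
    return [(rng.choice(SCHEDULERS), rng.choice(ROUTINGS))
            for _ in range(n_routers)]

def crossover(parent_a, parent_b, rng):
    """Single-point crossover over the router genes."""
    point = rng.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(chromosome, rate, rng):
    """Resample a gene with probability `rate`."""
    return [(rng.choice(SCHEDULERS), rng.choice(ROUTINGS))
            if rng.random() < rate else gene
            for gene in chromosome]
```

The GA would score each chromosome by simulating IoT latency/reliability under the encoded per-router strategies and evolve the population with these operators.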
Journal of Ambient Intelligence and Humanized Computing (18685145) 14(1)pp. 655-675
Mobile crowd-sensing (MCS) is a solution to provide spectrum availability information for dynamic spectrum access in cognitive radio systems. In MCS-based spectrum monitoring, participants should report the location and time of spectrum sensing in addition to the status of the spectrum bands, which raises the need for privacy preservation. On the other hand, it is required to mitigate the possibility of fake reports sent by malicious participants, which is typically handled using trust mechanisms. The trust mechanisms should also be resistant to wrong reports caused by channel fading and/or noise. Moreover, incentive mechanisms are required to encourage mobile users to participate in the crowd-sensing process. However, preserving privacy, managing trust, and providing proper incentive mechanisms altogether is a challenge in MCS-based spectrum monitoring systems that has not been appropriately considered in previous work. In this paper, we propose a method that includes a privacy-preserving protocol with secure rewarding capability as well as a trust mechanism against malicious participants for MCS-based spectrum monitoring. We exploit Dempster–Shafer theory alongside the reputation of participants, in an anonymous manner, to decide about spectrum availability. Also, we take advantage of the Gompertz function when updating the reputation of participants to better handle spectrum sensing errors. To evaluate the proposed method, we conduct simulations to analyze and compare the proposed trust and spectrum decision mechanisms. The results show that in the proposed method, although 40% of participants were malicious, we were able to make the right decision about a participant's behavior in more than 95% of cases, compared to the majority method, where the decision was correct in only about 85% of cases. Also, we use the ProVerif automatic protocol verifier to formally evaluate some security features of the proposed anonymity protocol.
Moreover, we conduct some experimental analysis to validate the proposed protocol. The evaluation results demonstrate the superiority of the proposed method regarding both performance criteria and security features compared to the baseline methods. © 2021, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
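As one illustration of how a Gompertz curve can soften reputation updates against occasional sensing errors, consider the sketch below; the curve parameters and the ±1 behavior score are assumptions for demonstration, not the paper's exact update rule:

```python
import math

def gompertz(x, a=1.0, b=-2.5, c=-0.8):
    """Gompertz curve: with b < 0 and c < 0 it rises slowly at first,
    then steeply, and saturates toward the asymptote `a`."""
    return a * math.exp(b * math.exp(c * x))

def update_reputation(behavior_score, consistent_report):
    """Map an accumulated behavior score to a reputation in (0, a).
    The slow start and saturation of the curve tolerate occasional
    wrong reports caused by fading or noise."""
    behavior_score += 1 if consistent_report else -1
    return behavior_score, gompertz(max(behavior_score, 0.0))
```

Because the curve starts slowly, a few faded or noisy reports barely move a newcomer's reputation, while sustained honest behavior eventually saturates it near the maximum.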
PLoS ONE (19326203) 18(10 October)
5G wireless networks are paying increasing attention to Vehicle to Everything (V2X) communications as the number of autonomous vehicles rises. In V2X applications, a number of demanding requirements such as latency, stability, and resource availability have emerged. Due to limited licensed radio resources in 5G cellular networks, Cellular V2X (C-V2X) faces challenges in serving a large number of cars and managing their network access. One reason is the unbalanced load of serving Base Stations (BSs), which makes it difficult to manage the resources of the BSs optimally regarding frequency reuse in cells and the subsequent co-channel interference. Meanwhile, routing protocols could help redirect the load of congested BSs to neighboring ones. In this article, we propose a resource-aware routing protocol to mitigate this challenge. In this regard, a hybrid C-V2X/Dedicated Short Range Communication (DSRC) vehicular network is considered. We employ cluster-based routing that enables many cars to interface with the network via Cluster Heads (CHs) using DSRC resources, while the CHs send their traffic across C-V2X links to the BSs. Traditional cluster-based routing does not account for the resource availability of the BSs that support the clusters. Thus, our study describes an enhanced clustering method based on Density-Based Spatial Clustering of Applications with Noise (DBSCAN) that re-clusters the vehicles based on the resource availability of BSs. Simulation results show that the proposed re-clustering method improves spectrum efficiency by at least 79%, packet delivery ratio by at least 5%, and load balance of BSs by at least 90% compared to the baseline. © 2023 Alrubaye. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Digital Communications and Networks (23528648) 9(2)pp. 534-544
The capability of a system to fulfill its mission promptly in the presence of attacks, failures, or accidents is one of the qualitative definitions of survivability. In this paper, we propose a model for survivability quantification, which is acceptable for networks carrying complex traffic flows. Complex network traffic is considered as general multi-rate, heterogeneous traffic, where the individual bandwidth demands may aggregate in complex, nonlinear ways. Blocking probability is the chosen measure for survivability analysis. We study an arbitrary topology and some other known topologies for the network. Independent and dependent failure scenarios as well as deterministic and random traffic models are investigated. Finally, we provide survivability evaluation results for different network configurations. The results show that by using about 50% of the link capacity in networks with a relatively high number of links, the blocking probability remains near zero in the case of a limited number of failures. © 2022 Chongqing University of Posts and Telecommunications
Interference is one of the most critical issues in the cellular Device-to-Device (D2D) networks. The sharing of radio resources in cellular and D2D communications offers spectrum efficiency advantages for the cellular network. However, resource sharing also introduces interference, leading to a reduction in the quality of service experienced by users. In this paper, we propose an adversarial Multi-Armed Bandit learning-based transmission Power Control method called MAB-PC for both cellular and D2D transmitters. The objective of this method is to ensure the minimum service quality for users, considering the minimum spectral efficiency and maximum block error rate. Additionally, to address the conflicting objectives of D2D and cellular communications, MAB-PC is modeled as a Pareto optimization problem for D2D transmitters. MAB-PC is a distributed method that minimizes signaling overhead and relies solely on local Channel State Information (CSI). The effectiveness of the proposed method is evaluated based on reliability, total data rate, spectral efficiency, block error rate, and outage probability, demonstrating superior performance compared to its counterpart. © 2023 IEEE.
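The abstract does not name the specific adversarial-bandit algorithm behind MAB-PC, but EXP3 is the classic choice for adversarial Multi-Armed Bandit settings; the sketch below shows its arm selection and importance-weighted update, with arms standing for discrete transmit-power levels (all names are illustrative):

```python
import math
import random

def exp3_select(weights, gamma, rng):
    """EXP3 arm selection: mix the normalized weight distribution with
    uniform exploration (gamma). Each arm is a transmit-power level."""
    k, total = len(weights), sum(weights)
    probs = [(1 - gamma) * w / total + gamma / k for w in weights]
    r, acc = rng.random(), 0.0
    for arm, p in enumerate(probs):
        acc += p
        if r <= acc:
            return arm, probs
    return k - 1, probs

def exp3_update(weights, arm, reward, probs, gamma):
    """Importance-weighted update: only the played arm's weight moves."""
    estimate = reward / probs[arm]   # unbiased estimate of the arm's reward
    weights[arm] *= math.exp(gamma * estimate / len(weights))
```

A transmitter would map its local CSI and QoS feedback (spectral efficiency met, block error rate bound) to the per-round reward, keeping the scheme fully distributed.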
Peer-to-Peer Networking and Applications (19366450) 15(1)pp. 246-266
In some Vehicular Ad Hoc Network (VANET) applications, the geocast routing protocol is used for data transmission from a source vehicle to a group of vehicles located in a common region. Efficient data transmission to the destination region is one of the critical challenges of geocast routing protocols. This research considers geocast routing protocols that exploit rateless coding to improve reliability and thus the packet delivery ratio. Some of these geocast routing methods use flooding schemes to deliver the messages to the destination region. However, to cut the high overhead caused by flooding, routing protocols that use unicast routes for data delivery have been taken into account. In this way, recent geocast routing protocols exploit on-demand unicast routing methods such as Ad-hoc On-Demand Distance Vector (AODV) to deliver the packets to the destination region and then broadcast them in that area. However, the packet delivery ratio and the delay of those methods are respectively lower and higher than flooding-based methods. This paper proposes to exploit the table-driven Optimized Link State Routing (OLSR) protocol to deliver the messages to the destination region. To customize the OLSR protocol for geocasting, we propose a number of modifications to message flows and data exchanges. Compared to on-demand geocast protocols, OLSR imposes lower message delay and delivers more messages to the destination region at a higher overhead expense. To overcome this overhead, we also propose algorithms to adjust the control message intervals of the OLSR protocol in each node. Simulation results show that our OLSR-based protocol demonstrates better performance in terms of delay and packet delivery ratio than the traditional AODV-based method and the CALAR-DD protocol across various vehicle densities, vehicle velocities, message sizes, and destination region sizes.
Compared to the traditional OLSR, using the tuned OLSR-based method has also significantly reduced the signaling overhead costs. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
IEEE Internet of Things Journal (23274662) 9(11)pp. 8572-8583
With the rise of massive Internet-of-Things (mIoT) applications, the IEEE 802.11ah standard is gaining more attention for implementation in wireless networks. Unbalanced distribution of the traffic load among the access points (APs) is a problem that results in performance degradation of such networks, including packet loss and delay, which are essential for some mission-critical applications. In this article, we present a novel solution for user-AP association in IEEE 802.11ah networks. Assuming that the network is carrying complex traffic, we limit the number of devices associated with each AP to keep the blocking probability under the desired value. Also, two mechanisms are proposed to limit the delay of packets by restricting the number of stations associated with an AP and to determine the transmission range of APs for covering a specific number of stations. Our simulations demonstrate that the proposed method can enhance the throughput and energy efficiency in the network and also limit the blocking probability and delay of packets in the network by restricting the number of IoT devices associated with an AP. As a result, the proposed method facilitates ultrareliable low-latency communications (URLLCs) by limiting the number of associated stations in IEEE 802.11ah networks. © 2014 IEEE.
Computer Communications (1873703X) 194pp. 361-377
Device-to-Device (D2D) communications emerged as a promising technology to improve the efficiency of 5G cellular networks. However, users should be encouraged to participate in content sharing and relaying, which are necessary for D2D communications. Thus, an incentive mechanism is essential to encourage content owners and relay nodes to participate in D2D communications. In this study, a contract-based incentive mechanism is proposed for relaying D2D communications. In contrast to previous work, this mechanism simultaneously motivates both content owners and relays to participate in D2D communications. Furthermore, user mobility is considered in the proposed approach. Assuming that the devices in the network are mobile, mobility awareness can improve the performance of the proposed incentive mechanism, since contracts are needed that are less likely to be violated by mobility-induced link failures. Therefore, in the proposed mobility-aware incentive mechanism, the contract is selected according to the predicted location of devices in the next time step, as obtained from a Markov method. The simulation results show that the proposed incentive mechanism increases the participation of devices in D2D content sharing compared to the baseline. Also, a content owner is more likely to earn more utility due to the cooperation of a relay, which leads to an increase in the utility of the base station. Moreover, the increased data transmission rate, obtained via encouraging relays to participate in D2D communications, reduces the latency and increases the residual energy of the devices. Also, using the proposed mobility-aware incentive mechanism, the utility of the BS is improved compared to a similar scenario without mobility awareness. © 2022 Elsevier B.V.
Peer-to-Peer Networking and Applications (19366450) 14(2)pp. 781-793
Internet of Things (IoT) is expected to empower all aspects of the Intelligent Transportation System (ITS), the main goal of which is to improve transportation safety. However, due to high demand from the increasing number of associated vehicles, the allocated bandwidth of ITS is inadequate. Cognitive Radio (CR) technology can be used as a solution to this high demand. In CR, the pre-allocated spectrum bands are sensed to find the existing holes caused by the absence of primary users. Cooperative spectrum sensing is an efficient tool for detecting free spectrum bands, as it increases the probability of correct detection. In this paper, a distributed cooperative spectrum sensing technique is proposed using the consensus algorithm, a distributed data aggregation mechanism whereby each vehicle combines the results received from its neighbors' spectrum sensing. The combined results are repeatedly shared and combined such that all vehicles reach the same results. In vehicular networks, due to vehicle movement, the number of a vehicle's neighbors changes dynamically. Therefore, considering the vehicle's mobility is essential in the spectrum sensing process. The consensus algorithm is used to increase the probability of correct detection, and thus to reduce the number of collisions in the spectrum acquisition process. In our method, each vehicle dynamically selects a number of its neighbors and involves them in the decision-making process. Moreover, separate weights, determined based on the entropy of their information, are assigned to the sensing results of the selected neighbors. In this way, even if the vehicles are affected by fading or shadowing, they can make more accurate decisions using the sensing results received from other vehicles. The simulation results of the proposed method show that it increases the probability of correctly detecting free spectrum bands as well as the convergence speed.
© 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC part of Springer Nature.
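One way to realize the entropy-based weighting described above is to weight each selected neighbor by one minus the binary entropy of its local detection probability, then take a step toward the weighted mean; the sketch below is an illustrative reading of that idea, not the paper's exact update rule:

```python
import math

def certainty_weight(p):
    """Weight a neighbor by the entropy of its local detection
    probability p: a confident neighbor (p near 0 or 1) gets a weight
    near 1, while a coin-flip neighbor (p = 0.5) gets weight 0."""
    if p <= 0.0 or p >= 1.0:
        return 1.0
    entropy = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return 1.0 - entropy

def consensus_step(own_value, neighbor_values, neighbor_probs, step=0.5):
    """One consensus iteration: move toward the certainty-weighted
    mean of the selected neighbors' sensing results."""
    weights = [certainty_weight(p) for p in neighbor_probs]
    total = sum(weights)
    if total == 0.0:
        return own_value  # no informative neighbor: keep own estimate
    weighted_mean = sum(w * v for w, v in zip(weights, neighbor_values)) / total
    return own_value + step * (weighted_mean - own_value)
```

Repeating this step across all vehicles drives their estimates toward a common value while discounting neighbors whose sensing is uninformative due to fading or shadowing.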
Using generative models to generate an unlimited number of synthetic samples is a popular replacement for database sharing. When these models are built using sensitive data, the developers should ensure that the training dataset is appropriately protected. Hence, quantifying the privacy risk of these models is important. In this paper, we focus on evaluating the privacy risk of publishing the generator in generative adversarial network (GAN) models. Specifically, we conduct a white-box membership inference attack against GAN models. The proposed attack is applicable to various kinds of GANs. We evaluate our attack accuracy with respect to various model types and training configurations. The results demonstrate superior performance of the proposed attack compared to previous attacks under white-box generator access. © 2021 IEEE.
Computer Communications (1873703X) 177pp. 239-254
Vehicle to Vehicle (V2V) communication has recently been considered in 4G and 5G cellular networks. One of the challenging issues in cellular V2V is allocating radio resources to the vehicles. Although previous works have addressed this issue, the fast-varying nature of vehicular traffic and its regularities imply that the mobility of the vehicles deserves more attention. To this goal, we propose an autonomous geo-based resource selection algorithm that uses deep learning to predict future vehicle locations and alleviate the computation and signalling overhead of the cellular infrastructure in contrast to previous geo-based resource allocation algorithms. We utilize the current and future vehicle densities in a formulated matching problem to find the optimum assignment of sub-resource pools to geographic areas. Simulation results of a highway with diverse density scenarios and different numbers of available resources show that the proposed method guarantees a considerable reduction in computation and signalling overhead, while in low awareness ranges it provides up to 10% improvement in Packet Reception Ratio (PRR) and the error rate of vehicles compared to the previous Dynamic Geo-based Resource Selection Algorithm (DGRSA). The proposed method also provides up to 15% improvement in PRR and error rate compared to the modified DGRSA, which we have changed to run with an overhead equal to that of our proposed method. Furthermore, our results demonstrate up to 67% and 76% improvement in blocking rate compared to DGRSA and modified DGRSA, respectively. © 2021 Elsevier B.V.
Wireless Networks (10220038) 27(6)pp. 4009-4037
Preserving patients’ privacy is one of the most important challenges in IoT-based healthcare systems. Although patient privacy has been widely addressed in previous work, there is a lack of a comprehensive end-to-end approach that simultaneously preserves the location and data privacy of patients assuming that system entities are untrusted. Most past research assumes that parts of this end-to-end system are trustworthy, while privacy may be threatened by insider attacks. In this paper, we propose an end-to-end privacy-preserving scheme for patients assuming that all main entities of the healthcare system (including sensors, gateways, and application providers) are untrusted. The proposed scheme preserves end-to-end privacy against insider threats as well as external attacks while respecting the resource restrictions of the sensors. This scheme provides mutual authentication between main entities while preserving patients’ anonymity. Only allowed users can access the real identity of patients alongside their locations and their healthcare information. Informal security analysis and formal security verification of the proposed protocol in AVISPA show that it is secure against impersonation, replay, modification, and man-in-the-middle attacks. Moreover, performance assessments show that the proposed protocol provides more security services without considerable growth in the computation overhead of the sensors. Also, it is shown that the proposed protocol diminishes the signaling overhead of the sensors, and so their energy consumption, compared to the literature at the expense of adding a little more signaling overhead to the gateways. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
2025 29th International Computer Conference, Computer Society of Iran, CSICC 2025 pp. 358-362
Given the energy constraints of Internet of Things (IoT) devices, Low-Power Wide-Area Networks (LPWANs) are considered a suitable solution for IoT applications. The most important LPWAN protocols are SigFox, NB-IoT (NarrowBand-IoT), and LoRa. LoRa is more popular than the others because of industry support from the LoRa Alliance, IBM, Cisco, and others. The large number of devices that communicate with each other in IoT applications has raised concerns about resource availability and the suitable technologies for managing these resources in LoRaWAN networks. There are many IoT applications in which the time to receive information from devices (sensors) can be adjusted based on context information and the internal state of the system in a way that reduces collisions. Therefore, the transmission times of sensors can be adjusted to some extent so that sensors experience fewer collisions, largely mitigating the scalability problem of the system. In this research, we present a framework to improve the performance of LoRaWAN networks, in which the devices are scheduled according to their QoS requirements, the density of the network, and the context information. The performance of the proposed model is evaluated by simulations in which the sensors send packets to the server based on the proposed scheduling. Evaluations show that the proposed method has reduced congestion by 51% and energy consumption by 52% on average compared to the baseline solution. © 2021 IEEE.
Telecommunication Systems (10184864) 78(2)pp. 169-185
Growing demand for Internet of Things (IoT) applications, including smart cities, healthcare systems, smart grids, and transportation systems, has significantly enhanced the popularity of Machine-Type Communication (MTC) in 5G and 6G cellular networks. Massive access is a well-known challenge in MTC that should be efficiently managed. In this paper, a grant-based massive access mechanism is introduced where time-frames are separated into two distinct parts: one for contention-based resource granting and the other for scheduled data transmission. In the contention period, we propose a novel random access mechanism where the nodes are grouped based on their distances from the Base Station (BS), and the access probability of each group member is adjusted through solving an optimization problem. In the proposed mechanism, energy efficiency, spectrum efficiency, and access delay are formulated in terms of the access probability of devices using the p-persistent CSMA method. Thereafter, optimization problems are formulated to improve the energy/spectrum efficiency and access delay by adjusting the access probability of devices while regarding their delay requirements. The simulation results indicate that the proposed method outperforms previous ones in terms of energy efficiency, access delay, bandwidth efficiency, and scalability. Also, in the proposed method, delay-sensitive nodes experience lower access delay. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
Physical Communication (18744907) 46
The growth of mobile devices and their traffic volume makes it necessary for mobile network operators to provide higher capacity. As traditional macrocell-based cellular networks cannot support a massive number of mobile users, Heterogeneous Networks (HetNets), a coexistence of macro-cells and small-cells, are welcomed in 5G mobile networks. The main challenge of 5G HetNets is the management of co-tier and cross-tier interference. Fractional Frequency Reuse (FFR) is a suitable technique to mitigate the interference while improving spectrum efficiency. On the other hand, energy consumption has attracted much attention in 5G green cellular networks. Efficient power allocation and cell zooming/cell switch-off techniques are among the most important research directions in this area. However, the impact of cell zooming/switch-off on FFR-based HetNets has not been adequately addressed. In this paper, a solution is presented which exploits dynamic adjustment of the cell-center radius and efficient power allocation in FFR-based HetNets when conducting a cell switch-off/cell zooming. A heuristic algorithm is presented for the cell-center radius adjustment problem. Also, for efficient power allocation, a multi-objective optimization problem is formulated and solved using the NSGA-II algorithm. Simulation results show that the proposed method reduces energy consumption compared to the baseline method and diminishes the overall interference in the system. © 2021 Elsevier B.V.
Multimedia Systems (14321882) 26(2)pp. 173-190
As the VoIP steganographic methods provide a low capacity covert channel for data transmission, an efficient and real-time data transmission protocol over this channel is required which provides reliability with minimum bandwidth usage. This paper proposes a micro-protocol for data embedding over covert storage channels or covert hybrid channels developed by steganographic methods where real-time transport protocol (RTP) is their underlying protocol. This micro-protocol applies an improved Go-Back-N mechanism which exploits some RTP header fields and error correction codes to retain maximum covert channel capacity while providing reliability. The bandwidth usage and the performance of the proposed micro-protocol are analyzed. The analyses indicate that the performance depends on the network conditions, the underlying steganographic method, the error correction code and the adjustable parameters of the micro-protocol. Therefore, a genetic algorithm is devised to obtain the optimal values of the adjustable micro-protocol parameters. The impact of network conditions, the underlying steganographic method and the error correction code on the performance are assessed through simulations. The performance of this micro-protocol is compared to an existing method named ReLACK where this micro-protocol outperforms its counterpart. © 2019, Springer-Verlag GmbH Germany, part of Springer Nature.
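The window mechanics of a Go-Back-N sender, which the micro-protocol improves upon, can be captured in a few lines; this sketch models only the sequence-number bookkeeping and leaves out the RTP header embedding and error-correction coding the paper combines it with:

```python
class GoBackNSender:
    """Minimal Go-Back-N sender state: up to `window` unacked frames in
    flight; a cumulative ACK slides the window forward, and a timeout
    (or detected loss) retransmits from the oldest unacked frame."""

    def __init__(self, window):
        self.window = window
        self.base = 0       # oldest unacknowledged sequence number
        self.next_seq = 0   # next sequence number to assign

    def can_send(self):
        return self.next_seq - self.base < self.window

    def send(self):
        seq = self.next_seq
        self.next_seq += 1
        return seq

    def ack(self, seq):
        """Cumulative ACK: all frames up to and including `seq` arrived."""
        self.base = max(self.base, seq + 1)

    def on_timeout(self):
        """Go back N: return every sequence number to retransmit."""
        return list(range(self.base, self.next_seq))
```

Because every loss forces retransmission of the whole outstanding window, the window size and timeout are natural candidates for the genetic-algorithm tuning the paper describes.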
EAI/Springer Innovations in Communication and Computing (25228595) pp. 29-45
The scarcity of radio-frequency bands, due to their fixed allocation, is an emerging problem in wireless communications. Cognitive radio (CR) is a new paradigm which suggests reusing the frequency bands for unlicensed users when licensed users are inactive. Therefore, unlicensed users must perform spectrum sensing to find the available spectrum opportunities. Cooperative spectrum sensing (CSS) is a method where unlicensed users individually sense and upload their sensing data to a fusion center. Moreover, crowd-sensing methods could be used by mobile users to provide more sensing data from various locations for the sake of improving the achieved decisions on spectrum status. Providing spectrum data from various sources makes spectrum monitoring and management more complex. This chapter proposes a novel mechanism that (1) uses cloud computing technology as a well-suited platform for storing and processing such big data and for providing a monitoring service to be used by, e.g., CR nodes; (2) considers the impact of contextual parameters such as location, time, and building complexity around the user on the spectrum availability decision; and (3) takes advantage of machine learning techniques to predict the future behavior of spectrum opportunities. © Springer International Publishing AG, part of Springer Nature 2019.
Transactions on Emerging Telecommunications Technologies (21613915) 30(10)
Machine-to-machine (M2M) communication is a challenging topic in the Internet-of-Things era. The increasing growth of machines and the high rate of packet production have resulted in massive access to the network. Therefore, one of the media access control (MAC) challenges in 5G cellular networks is the management of massive access to the wireless media regarding quality-of-service (QoS) requirements and battery restrictions of the machines. Delay is one of the QoS requirements that should be guaranteed in most applications. To the best of our knowledge, previous studies on scheduled MAC mechanisms have not addressed energy efficiency and delay requirements appropriately. In this paper, the problem of scheduled massive access management is considered with the aim of simultaneously meeting the delay requirements of machines and increasing energy efficiency. The proposed solution exploits gateways to reuse the spectrum dedicated to M2M communications and introduces a clustering algorithm to allocate radio resources to the machines appropriately. Moreover, an optimization problem is defined and solved using genetic algorithms (GA) to adjust the optimal transmission power of machines, aiming at maximizing the energy efficiency. To preserve delay constraints, the proposed method exploits an existing scheduling algorithm. Simulation results demonstrate more than 50% reduction in the probability of packet delay violation for various classes of service and in mean packet delay compared to the baseline method. The results also show more than 20% improvement in the energy efficiency of the proposed method compared to the baseline. © 2019 John Wiley & Sons, Ltd.
Shahgholi, B., Karchegani, M.M.
2025 29th International Computer Conference, Computer Society of Iran, CSICC 2025 pp. 1289-1293
Special characteristics of Machine-Type Communications (MTC) result in new challenges in 5G mobile networks, including Medium Access Control (MAC) mechanisms. Traditional methods for contention-based MAC are unable to manage a massive number of simultaneous requests. On the other hand, recent methods for MTC have not adequately addressed the energy consumption of machines along with QoS. In this paper, a contention-based massive access protocol is presented based on p-persistent CSMA which considers both the energy and delay requirements of machines. A multi-objective optimization problem is solved by the BS at the beginning of each time frame in order to find the optimum value of the transmission probability based on the number of machines and their distances from the BS. The optimization problem aims at achieving maximum energy efficiency and minimum access delay. To avoid the overhead of solving the optimization problem, the value of the probability is temporarily updated by each machine at the end of each time slot, based on its delay requirement and the number of its recent access failures. The simulation results show a considerable increase in the number of successful transmissions, a decrease in access time, and improvements in throughput and energy consumption. © 2019 IEEE.
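The role of the transmission probability in p-persistent CSMA can be seen from the per-slot success probability n·p·(1−p)^(n−1), which the optimization above tunes; the brute-force search below is only an illustration and recovers the well-known analytical optimum p = 1/n:

```python
def slot_success_probability(n, p):
    """In p-persistent CSMA, a contention slot succeeds when exactly
    one of the n contending machines transmits with probability p."""
    return n * p * (1 - p) ** (n - 1)

def best_access_probability(n, grid=1000):
    """Grid-search the access probability that maximizes per-slot
    success; analytically the optimum is p = 1/n, so the peak success
    probability decays toward 1/e as n grows."""
    candidates = [k / grid for k in range(1, grid)]
    return max(candidates, key=lambda p: slot_success_probability(n, p))
```

This is why grouping machines and adjusting p per group matters: a probability tuned for a few contenders collapses the success rate when the number of machines becomes massive.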
Information Security Journal (19393555) 28(4-5)pp. 107-119
Demands for the deployment of large-scale Low power and Lossy Networks (LLNs) are growing these days. The Internet of Things (IoT) refers to networks that communicate over Internet Protocols (IP). The Routing Protocol for Low power and Lossy Networks (RPL) is a standard routing protocol for such networks and has been exposed to a variety of attacks. Previous works have addressed those attacks; however, each study has focused on a single attack. In this work, we propose a hybrid of the Sinkhole and CloneID attacks, named the “Sink-Clone” attack. The paper demonstrates the effect of this hybrid attack on RPL performance and evaluates it against a detection method inspired by previous detection methods. The hybrid attack and the Intrusion Detection System (IDS) have been simulated in the Cooja simulator, where the results show the impact of the hybrid attack compared to the standalone attacks as well as the capability of the proposed IDS in detecting malicious nodes in a hybrid scenario. © 2019 Taylor & Francis Group, LLC.
Physical Communication (18744907) 37
Energy consumption of cellular networks has gained significant attention in recent years as green cellular networking paradigm appeared. Base Station (BS) switch-off is a solution for minimizing energy consumption of green cellular networks. However, User Equipments (UEs) of switched-off BSs lose their connectivity to the network and should be re-associated with BSs, attain new radio resources, and re-adjust their transmission powers. Previous studies have either not addressed this issue or have considered the joint problem of BS switch-off and BS association/resource allocation, while those joint problems are not spectrum efficient. In this paper, we formulate two different optimization problems for spectrum efficient re-association of involved UEs and re-allocation of their uplink radio/power resources after an energy-efficient BS switch-off. The first optimization problem aims at maximizing the achievable rate of cell switched-off UEs, while the second problem intends to maximize the achievable rate of all UEs of the network when a cell switch-off occurs. In formulated problems, the required rate of UEs is regarded as the QoS constraint. Decoupling the problems into sub-problems, we propose a distributed solution for user re-association, channel re-allocation, and power adjustment of UEs. Simulation results show superior performance of the proposed methods compared to the baseline methods in terms of the network capacity and QoS guarantee. © 2019 Elsevier B.V.
Computer Networks (13891286) 146pp. 47-64
The rapid growth of cellular traffic, which is mostly due to increasing demands for multimedia and social network applications, has led to spectrum deficiency in cellular networks. Device-to-Device (D2D) communication is an excellent technology for offloading cellular traffic via direct communication between user equipment, reusing cellular radio resources or unlicensed bands. Moreover, using D2D communication, there is a chance of reducing the energy consumption of user equipment. Multicasting is a known scenario of D2D communication which is suitable for content sharing scenarios such as public safety services and TV broadcasting in ultra-dense networks. Efficient resource management is a challenging problem of D2D multicasting which has not been addressed adequately. This paper proposes an energy- and spectrum-efficient resource management method for D2D multicast communications which takes user requirements into account. In contrast to previous methods, this work also considers the mobility of user equipment in terms of the stability of possible D2D links, using a mobility correlation measure. The multicast group formation and channel/power allocations have been formulated as a multi-objective optimization problem, and NSGA-II has been exploited to solve it. Simulation results demonstrate the superior performance of the proposed method compared to previous methods in terms of energy and spectrum efficiency. © 2018 Elsevier B.V.
Journal of Information Security and Applications (22142126) 42pp. 29-35
Cooperative spectrum sensing is one of the solutions for cognitive radio networks which can resolve the uncertainty of stand-alone spectrum sensing. Each secondary user senses defined spectrum bands to detect unused spectrum and shares its sensing results with the others to improve the accuracy of spectrum sensing. Due to the presence of malicious secondary users and their fake sensing reports, various forms of attacks are encountered that reduce the performance of cooperative spectrum sensing. The spectrum sensing data falsification (SSDF) attack is one of these attacks, against which several defenses based on trust and reputation management (TRM) have been presented. Previous works assume that all the secondary users are in the transmission range of each other and that one-hop sensing reports are provided. However, multi-hop dissemination of sensing reports is a necessary scenario for secondary users with limited energy sources, and countering security attacks in this scenario is very challenging. In this paper, a trust-based multi-hop cooperative spectrum sensing method is proposed to deal with the SSDF attack. Simulation results show that this scheme improves spectrum sensing accuracy and reduces the impact of the SSDF attack. © 2018 Elsevier Ltd
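A minimal sketch of the kind of trust-weighted fusion a TRM-based defense performs is shown below. The function names and the simple EWMA trust update are assumptions for illustration, not the paper's exact scheme: reports from low-trust nodes contribute little to the fused decision, so falsified reports from distrusted nodes are suppressed.

```python
def fuse_reports(reports, trust):
    """Trust-weighted fusion of binary spectrum-sensing reports.

    reports -- dict: node id -> 1 (channel busy) or 0 (channel idle)
    trust   -- dict: node id -> trust score in [0, 1]
    """
    weighted = sum(trust[n] * r for n, r in reports.items())
    total = sum(trust[n] for n in reports)
    return 1 if total and weighted / total >= 0.5 else 0

def update_trust(trust, node, agreed, alpha=0.1):
    """Nudge a node's trust toward 1 when it agreed with the fused
    decision and toward 0 when it disagreed (simple EWMA rule)."""
    target = 1.0 if agreed else 0.0
    trust[node] = (1 - alpha) * trust[node] + alpha * target
```

Over repeated rounds, nodes that persistently disagree with the fused decision (SSDF attackers) see their trust, and hence their influence, decay.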
Telecommunication Systems (10184864) 64(2)pp. 367-390
Designing a QoS-aware medium access control (MAC) scheme is a challenging issue in vehicular ad hoc networks. Proportional fairness and bandwidth utilization are among the significant requirements that should be taken into account by a MAC scheme. In this paper, a bandwidth-efficient and fair multichannel MAC protocol is proposed to address these two requirements, specifically in vehicle-to-vehicle communications. The proposed scheme is based on clustering of vehicles and exploits the time division multiple access (TDMA) method alongside the carrier sense multiple access with collision avoidance mechanism to allocate DSRC-based resources in a manner different from the IEEE 802.11p/IEEE 1609.4 protocols. It divides each channel into aligned dynamic-sized time frames. In each time frame, in a fully TDMA-based period, transmission opportunities are assigned to vehicles, letting them have dedicated transmissions on the service and control channels. The maximum number of transmission opportunities per frame is determined by the cluster head (CH) based on a defined optimization problem which aims at maximizing both proportional fairness and bandwidth utilization. Furthermore, bandwidth utilization is enhanced further through the reallocation of unused transmission opportunities in each time frame, using a proposed reallocation algorithm. The proposed MAC protocol is a lightweight scheme in which various types of unicast, multicast, and broadcast communications are possible within the cluster without involving the CH. Evaluation results show that the proposed scheme achieves more than 90% in terms of both proportional fairness and bandwidth utilization simultaneously and, in this case, has a considerable superiority over TC-MAC. In addition, using the proposed scheme, the satisfaction level of vehicles is preserved appropriately. © 2016, Springer Science+Business Media New York.
Growing demands for mobile services in recent years have extraordinarily increased the number of cellular operators and, consequently, the portion of energy consumed by cellular networks. Hence, green cellular networking has been introduced, which suggests the use of energy optimization methods in cellular networks. One of the solutions to optimize the energy consumption of cellular networks is to switch base stations with low traffic demand on/off. Previous works on cell switching usually do not address the spectrum efficiency and QoS of users. In this paper, a cell switching method is proposed on the basis of a baseline method, which considers the spectrum efficiency and QoS of users in the reallocation of radio resources after switching off a base station. The simulation results demonstrate that the proposed method reduces energy consumption and improves spectrum efficiency compared to the baseline method. © 2017 IEEE.
Annales des Telecommunications/Annals of Telecommunications (00034347) 72(11-12)pp. 639-651
Deploying heterogeneous networks (HetNets) and especially femtocell technology improves indoor cell coverage and network capacity. However, since users install femtocells which usually reuse the same frequency band as macrocells, interference management is considered a main challenge. Recently, fractional frequency reuse (FFR) has been considered as a way to mitigate the interference in traditional as well as heterogeneous cellular networks. In conventional FFR methods, radio resources are allocated to macrocell/femtocell users only according to their region of presence ignoring the density of users in defined areas inside a cell. However, regarding the unpredictability of cellular traffic, especially on the femtocell level, smart methods are needed to allocate radio resources to the femtocells not only based on FFR rules, but also traffic load. In order to solve this problem, new distributed resource allocation methods are proposed which are based on learning automata (LA) and consider two levels of resource granularity (subband and mini-subband). Using the proposed methods, femto access points learn to choose appropriate subband and mini-subbands autonomously, regarding their resource requirements and the feedback of their users. The goal of the proposed methods is reduction of interference and improvement of spectral efficiency. Simulation results demonstrate higher spectral efficiency and lower outage probability compared to traditional methods in both fixed and dynamic network environments. © 2017, Institut Mines-Télécom and Springer-Verlag France SAS.
Turkish Journal Of Electrical Engineering And Computer Sciences (13000632) 25(3)pp. 1976-1992
Recent evolutions in mobile networks have led to increased resource demands, especially from indoor users. Although recent technologies such as LTE have an important role in providing higher capacity, indoor users are not satisfied adequately. Femtocell networks are one of the proposed solutions that support high data rates as well as better indoor coverage without imposing heavy costs on network providers. However, interference management is a challenging issue in femtocell networks, mainly due to the dense and random deployment of femto access points (FAPs). Therefore, distinct radio resource management (RRM) methods are employed to ensure acceptable levels of call dropping/blocking probability and spectral efficiency. However, the mobility of mobile users is an important issue in resource management of femtocell networks that has not been considered adequately. In this paper, we propose an algorithm that predicts the resource requirements of FAPs regarding the mobility of their users and allocates the resources to the FAPs based on an extended load-based RRM algorithm that prioritizes handoff calls over incoming calls. Simulation results illustrate that the proposed method achieves lower call dropping probability and higher spectral efficiency compared to the benchmark algorithms. © TÜBÏTAK.
Multimedia Tools and Applications (13807501) 76(22)pp. 23239-23271
Channel zapping delay is a big challenge in delivering TV service over the Internet infrastructure. Previous research works have studied this delay, its components, and solutions to decrease it. Unfortunately, the best proposed solutions reduce the delay at the expense of increasing bandwidth usage or decreasing the received video quality. After channel switching, the Set Top Box (STB) or player application should buffer sufficient frames before starting to play the received video. However, the buffering process takes place at the playback rate and leads to a delay which is inversely related to the buffer duration. Regarding the Information Centric Networking (ICN) paradigm, this paper introduces a new channel zapping protocol that aims to remove the synchronization and buffering delays while maintaining the bandwidth utilization and the received video quality. The general idea of the proposed solution is to exploit the in-network caching feature of ICN to retrieve the frames from the network at the network speed. Although the analyses show that the proposed zapping protocol eliminates the delay dependency on the buffer duration, network throughput becomes the bottleneck instead. So, novel solutions have been proposed to reduce the queuing delay as the main component of network delay. These solutions include two new caching algorithms, a new cache replacement algorithm, and applying scheduling methods to the forwarding queues. Simulation results show that increasing link rates, using the proposed caching and cache replacement algorithms, and applying an appropriate scheduling method will greatly reduce the zapping delay without sacrificing the bandwidth or video quality. © 2016, Springer Science+Business Media New York.
Computers and Electrical Engineering (00457906) 64pp. 450-472
Currently, Vehicular Ad-hoc Networks (VANETs) are attracting a lot of attention due to their favorable applications. VANETs are the key to providing safety and efficiency on the roads. The vehicles can communicate with other vehicles to report the ongoing status of the traffic flow or critical situations like accidents. However, this entails a reliable and efficient Medium Access Control (MAC) protocol. Due to the high speed of the nodes, the frequent changes in network topology, and particularly the lack of an infrastructure, the design of the MAC for vehicular communications turns into a more challenging task. A lot of research has been conducted to overcome the vehicular MAC problems regarding Quality of Service (QoS) requirements of both safety and non-safety applications, covering both Vehicle-to-Infrastructure (V2I) and Vehicle-to-Vehicle (V2V) communications. Recently, a significant number of MAC schemes have been proposed for V2V communications. In this paper, for future studies to be more effective, the outstanding proposed V2V MAC schemes are reviewed. Moreover, V2V MAC design approaches are discussed and a qualitative comparison is provided. A novel classification of V2V MAC schemes is then presented, and the characteristics of these schemes along with their strengths and weaknesses are studied. Finally, a comparative summary is given and some open challenges regarding the design of V2V MAC schemes are discussed. © 2017 Elsevier Ltd
2025 29th International Computer Conference, Computer Society of Iran, CSICC 2025 pp. 60-65
Nowadays, location-based services are more popular than ever and appear in forms ranging from simple reservation systems to complex commercial applications with customized e-commerce logic. All of these applications are based on sending users' locations and other confidential data over untrusted channels, which can expose such private data to malicious parties eavesdropping on the communication channel. One of these applications, which this paper tries to cover, is location-based mobile coupons. In this service, users receive mobile coupons from nearby stores based on their location information. Sending location information to service providers without any protection can disclose the user's location to malicious users, or the service providers themselves can use this information to track the user. In this paper, we use anonymous authentication based on blind signatures to preserve location privacy. The unforgeability and unlinkability features of the proposed method prevent coupon fraud and location tracking together. © 2016 IEEE.
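A textbook RSA blind-signature round, the primitive named above, can be demonstrated in a few lines. The toy key and function below are illustrative only; a real deployment needs full-size keys and proper padding.

```python
import math
import random

def blind_sign_round(n, e, d, m):
    """One textbook RSA blind-signature round (toy parameters only).

    The user blinds coupon request m with a random r, the issuer signs
    the blinded value without ever seeing m, and the user unblinds the
    result into a valid signature on m. The issuer cannot later link
    the signature to the issuing session -- the unlinkability property.
    """
    while True:                                   # r must be invertible mod n
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    blinded = (m * pow(r, e, n)) % n              # user -> issuer
    signed_blinded = pow(blinded, d, n)           # issuer signs blindly
    return (signed_blinded * pow(r, -1, n)) % n   # user unblinds: m^d mod n
```

Anyone can verify the unblinded signature s with the public key alone: `pow(s, e, n) == m`. The blinding works because `(m * r**e)**d = m**d * r (mod n)`, so multiplying by `r**-1` leaves exactly `m**d`.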
Wireless Personal Communications (1572834X) 83(1)pp. 281-295
In addition to safety applications, new Vehicular Ad-Hoc Network (VANET) applications such as onboard entertainment and traffic management are rapidly being developed. This has made geocast, i.e. the transmission of data over a geographic area, an important research topic. Often geocast is done over wide and distant areas, which can result in significant overhead. In this paper, existing VANET routing protocols are investigated from a geocast perspective. To address the shortcomings of these methods in addressing both overhead and packet delivery ratio issues in geocast routing, two enhanced mechanisms are introduced based on the AODV routing protocol. Unicast routing is employed in the proposed protocols to transmit data to the destination region, and then flooding is used within the region for data dissemination. Furthermore, rateless coding and link layer notifications are used to improve the delivery ratio. The proposed methods are compared with existing flooding based geocast routing and Inter Vehicle Geocast (IVG) routing. Results are presented for an urban environment in terms of delay, overhead, and message delivery. These results show that the proposed approach significantly reduces the overhead and increases the delivery ratio with a minimal increase in delay. © 2015, Springer Science+Business Media New York.
Computers and Electrical Engineering (00457906) 44pp. 218-240
Mobile Cloud Computing (MCC) augments the capabilities of mobile devices by offloading applications to the cloud. Resource allocation is one of the most challenging issues in MCC and is investigated in this paper considering neighboring mobile devices as service providers. The objective of the resource allocation is to select service providers minimizing the completion time of the offloading while maximizing the lifetime of mobile devices, subject to a deadline constraint. The paper proposes a two-stage approach to solve the problem: first, Non-dominated Sorting Genetic Algorithm II (NSGA-II) is applied to obtain the Pareto solution set; second, entropy weighting and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) are employed to specify the best compromise solution. Furthermore, a context-aware offloading middleware is developed to collect contextual information and handle the offloading process. Moreover, to stimulate selfish users, a virtual-credit-based incentive mechanism is exploited in the offloading decision. The experimental results demonstrate the ability of the proposed resource allocation approach to manage the trade-off between time and energy compared to traditional algorithms. © 2015 Elsevier Ltd. All rights reserved.
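The second stage of the two-stage approach — entropy weighting followed by TOPSIS over the NSGA-II Pareto set — can be sketched as below. The pure-Python implementation and the vector-normalization choice are assumptions; the method itself is standard entropy-weight TOPSIS: criteria that vary more across alternatives receive larger weights, and the alternative closest to the ideal point (and farthest from the worst point) is selected.

```python
import math

def entropy_weights(matrix):
    """Objective criterion weights from Shannon entropy: criteria whose
    values vary more across alternatives receive larger weights."""
    m, n = len(matrix), len(matrix[0])
    raw = []
    for j in range(n):
        col = [row[j] for row in matrix]
        s = sum(col)
        probs = [v / s for v in col]
        e = -sum(p * math.log(p) for p in probs if p > 0) / math.log(m)
        raw.append(1 - e)                  # divergence degree
    total = sum(raw)
    return [w / total for w in raw]

def topsis(matrix, benefit):
    """Return the index of the best compromise alternative.

    matrix  -- alternatives x criteria, all values > 0
    benefit -- per-criterion flag: True = larger is better
    """
    w = entropy_weights(matrix)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix))
             for j in range(len(matrix[0]))]
    v = [[w[j] * row[j] / norms[j] for j in range(len(row))]
         for row in matrix]
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(zip(*v))]
    worst = [min(c) if benefit[j] else max(c) for j, c in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, worst)))
        scores.append(d_neg / (d_pos + d_neg))
    return max(range(len(scores)), key=scores.__getitem__)
```

With criteria such as (completion time, device lifetime), the benefit flags would be `[False, True]`, and `topsis` picks one compromise point out of the Pareto front NSGA-II produced.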
Wireless Networks (10220038) 21(1)pp. 21-34
One of the challenging problems with the deployment of IEEE 802.11 WLANs in the same hotspot is the assignment of appropriate channels to the Access Points (APs). As the number of channels in IEEE 802.11 is limited and most of them are partially overlapping, proper reuse of such channels is a complex optimization problem with respect to the traffic load and Quality of Service (QoS) requirements. Previous methods mostly employ the estimated number of interfering clients or the interference level measured by the AP as the main decision parameter, without regarding the actual interference imposed on the clients and their QoS requirements. Quality of Experience (QoE) is defined as the overall acceptability of the service as perceived by the user and can be exploited as a new metric which not only reflects the impairments (such as interference) imposed on the traffic, but also represents the user/service requirements. In this paper, a novel performance index, which takes into account both the aggregate QoE and the user-level fairness, is defined, and channel assignment is formulated as an optimization problem maximizing this index. Two novel distributed channel assignment algorithms are presented that exploit the QoE measure of associated clients to locally solve the optimization problem using a Learning Automata mechanism. The proposed methods have been analyzed and compared to the well-known Least Congested Channel Scan method through simulations, where the results show superior performance in terms of the defined performance index. © 2014, Springer Science+Business Media New York.
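The abstracts do not specify which Learning Automata reinforcement scheme is used, so the sketch below illustrates the idea with the standard linear reward-inaction (L_RI) rule, an assumption for illustration: each AP keeps a probability vector over channels and strengthens whichever channel yielded acceptable client QoE.

```python
def lri_update(probs, chosen, beta, a=0.1):
    """Linear reward-inaction (L_RI) update for channel selection.

    probs  -- current channel-selection probability vector (sums to 1)
    chosen -- index of the channel the AP just used
    beta   -- environment feedback: 1 if the QoE of associated clients
              was acceptable (reward), 0 otherwise
    a      -- learning rate

    On reward, the chosen channel's probability grows and the rest
    shrink proportionally; on penalty, the vector is left untouched.
    """
    if beta == 1:
        probs = [p + a * (1 - p) if i == chosen else p * (1 - a)
                 for i, p in enumerate(probs)]
    return probs
```

Run in each AP independently, updates like this let the hotspot converge toward a channel assignment where neighbours rarely collide, without any central coordinator.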
In this paper, a novel Inter-Cell Interference Coordination (ICIC) scheme is proposed to mitigate the downlink inter-cell interference (ICI) in Long Term Evolution (LTE) networks. The proposed method partitions the cells into three regions and uses predefined resource allocation schemes to assign the resources to each cell of the cluster. The proposed method also introduces an interference balancing method between neighboring cells based on an inter-cell coordination mechanism. Simulation results show that the proposed scheme outperforms the compared schemes. The results also show that the scheme can greatly improve the spectral efficiency of the system and users, although it reduces the user-level fairness. © 2015 IEEE.
Shahgholi, B., Eftekhari, S., Moghim, N., 2025 29th International Computer Conference, Computer Society of Iran, CSICC 2025
The tremendous growth of Internet traffic and the rapid changes in the way people access massive amounts of content have directed the research community toward data-oriented networks. Over the last few years, various data-oriented network architectures have emerged to fulfill the demand for more scalable content distribution. Content-Centric Networking (CCN) is an architecture that has attracted considerable attention. A significant portion of Internet traffic growth relates to diverse types of multimedia applications, including live TV. In fact, delivering television services over the existing Internet protocol (IPTV) has been commercialized for more than a decade. Migrating to the new network generation requires careful analysis of current applications. Nevertheless, CCN still suffers from a shortage of suitable simulators. We have developed a new modular, component-based CCN simulator specifically optimized for live TV modeling and have published it as an open-source utility. In this paper, we present our simulator and evaluate its capabilities on the simulation of live video streaming over CCN. © 2015 IEEE.
Communications in Computer and Information Science (18650937) 428pp. 145-154
Due to the resource scarcity of mobile devices in Mobile Cloud Computing (MCC), intensive computing applications are offloaded into the cloud. There is a three-tier architecture for MCC consisting of distant cloud servers, nearby cloudlets, and adjacent mobile devices. In this paper, we consider the third tier. We propose an Optimal Fair Multi-criteria Resource Allocation (OFMRA) algorithm that minimizes the completion time of offloading applications while maximizing the lifetime of mobile devices. Furthermore, to stimulate selfish devices to participate in offloading, a virtual-price-based incentive mechanism is presented. The paper also designs an Offloading Mobile Cloud Framework (OMCF) which collects profile information and handles the offloading process. A prototype of the proposed method has been implemented and evaluated using a high-computational-load application. The results show that the proposed algorithm manages the trade-off between optimizing completion time and energy well, and improves the performance of offloading using the incentive mechanism. © Springer International Publishing Switzerland 2014.
Shahgholi, B., Torabi, N., 2025 29th International Computer Conference, Computer Society of Iran, CSICC 2025 pp. 747-752
Wireless access in vehicular environments brings new challenges. One of the most common issues is designing a channel access scheme for Vehicle-to-Vehicle (V2V) communications, which is affected by several factors such as the high density of vehicles, direction of travel, location and speed of vehicles, instability of communication links, and absence of fixed infrastructure. The IEEE 802.11p/1609.4 WAVE protocols aim to cope with these issues by adopting a contention-based approach. However, various simulation experiments have demonstrated the weakness of the IEEE 802.11p/1609.4 protocols in providing scalability, reliability, predictable delay, and fairness. Meanwhile, contention-free methods have been offered by researchers to address these requirements. However, there are only a few contention-free schemes for VANETs in the literature which consider fairness. In this paper, a Time Division Multiple Access (TDMA)-based multichannel assignment scheme is proposed to address the fairness issue in sharing the wireless medium. The proposed method is an extension of TC-MAC; apart from improving fairness, it tries to preserve bandwidth utilization via an enhanced slot reservation mechanism. © 2014 IEEE.
Shahgholi, B., Torabi, N., 2025 29th International Computer Conference, Computer Society of Iran, CSICC 2025 pp. 519-524
Due to the high costs of deploying and testing VANETs, simulations are required for the development and evaluation of new protocols at any layer of the WAVE protocol stack. Although there are powerful tools to simulate VANETs and especially the IEEE 802.11p/1609.4 DSRC/WAVE protocols, they are either not free or not publicly available. The NS-2 Network Simulator is a free and widely accepted simulator used by researchers to simulate DSRC/WAVE. To the best of our knowledge, the current version of NS-2 and the simulation tools developed on top of it do not appropriately support the multichannel operation of IEEE 802.11p/1609.4. In this paper, a new simulation tool based on NS-2 is introduced that provides a more realistic implementation of the DSRC multiple channels and the IEEE 1609.4 multichannel operation. The developed simulation tool has been tested through some simulation experiments. © 2014 IEEE.
Wireless Personal Communications (1572834X) 77(3)pp. 2341-2358
Poor indoor coverage and the high costs of cellular network operators are among the main motivations for the employment of femtocell networks. Since femto access points (FAPs) and macrocells share the same spectrum resources, radio resource allocation is an important challenge in OFDMA femtocell networks. Mitigating interference and improving fairness among FAPs are the main objectives in previous resource allocation methods. However, the main drawback is that user-level fairness has not been adequately addressed in previous methods, and moreover, most of them suffer from inefficient utilization of radio resources. In this paper, modeling the problem as graph multi-coloring, a centralized algorithm is proposed to obtain both user-level fairness and spectrum efficiency. This method employs a priority-based greedy coloring algorithm in order to increase the reuse factor and consequently the spectrum efficiency. Moreover, in situations where the number of available OFDM resources is not sufficient, the proposed method employs a novel fairness index to fairly share the remaining resources among the users of FAPs. The performance comparison between the proposed and previous methods shows that the proposed method improves the balance between user-level fairness and resource utilization. In addition, the presented analyses show that the time complexity of the proposed method is less than that of conventional methods. © 2014 Springer Science+Business Media New York.
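A minimal sketch of priority-based greedy graph multi-coloring in this setting is given below. The priority rule (serving FAPs in descending order of demand) and the data layout are assumptions for illustration, not the paper's exact algorithm.

```python
def greedy_multicolor(demands, conflicts, n_colors):
    """Priority-based greedy graph multi-coloring sketch.

    demands   -- dict: FAP id -> number of OFDM subchannels requested
    conflicts -- set of frozensets {a, b}: FAP pairs that interfere
    n_colors  -- total subchannels (colors) available

    Each FAP greedily takes colors unused by its interfering
    neighbours, so non-interfering FAPs reuse the same colors and the
    reuse factor (spectrum efficiency) increases.
    """
    assignment = {fap: set() for fap in demands}
    for fap in sorted(demands, key=demands.get, reverse=True):
        banned = set()
        for other in demands:
            if frozenset((fap, other)) in conflicts:
                banned |= assignment[other]            # neighbour's colors
        for color in range(n_colors):
            if len(assignment[fap]) == demands[fap]:
                break                                  # demand satisfied
            if color not in banned:
                assignment[fap].add(color)
    return assignment
```

When colors run short for a FAP, the paper additionally shares the remaining resources using a fairness index; that second step is omitted from this sketch.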
2025 29th International Computer Conference, Computer Society of Iran, CSICC 2025 pp. 1071-1076
Improving cell coverage and network capacity are main issues in LTE networks. With the emergence of heterogeneous cellular networks with different cell sizes, femtocells have been regarded as a low-cost solution to improve poor indoor coverage for home users. However, as Femto Access Points (FAPs) are installed by users, self-organized techniques are needed for the allocation of radio resources to femtocells. On the other hand, Fractional Frequency Reuse (FFR) has been considered to improve the spectral efficiency and quality of edge users in heterogeneous networks (HetNets). In conventional FFR methods, the macrocell area is partitioned into regions and certain fractions of radio resources are considered for macrocell/femtocell users in each region. Therefore, radio resources are allocated to femtocell/macrocell users based on their region of presence without addressing the density of users in that region and, consequently, the interference level. In this paper, a new self-organized fractional resource allocation method is proposed for femtocells. The proposed method is based on Learning Automata, where FAPs learn to choose the best fraction based on the feedback of femtocell users. Simulation results confirm that the proposed radio resource allocation method improves spectral efficiency and decreases the outage probability compared to the conventional Strict FFR method. © 2014 IEEE.
Wireless Networks (10220038) 19(8)pp. 1807-1828
Recent developments in heterogeneous mobile networks and growing demands for a variety of real-time and multimedia applications have emphasized the necessity of more intelligent handover decisions. Addressing the context knowledge of mobile devices, users, applications, and networks is the subject of context-aware handoff decisions as a recent effort to this aim. However, user perception has not been attended to adequately in the area of context-aware handover decision making. Mobile users may have different judgments about the Quality of Service (QoS) depending on their environmental conditions and personal and psychological characteristics. This reality has been exploited in this paper to introduce a personalized user-centric handoff decision method which decides on the time and target of handover based on User Perceived Quality (UPQ) feedbacks. The UPQ degradations are mainly due to (1) exiting the coverage of the serving Point of Attachment (PoA) or (2) QoS degradation of the serving access network. Using the UPQ metric, the proposed method obviates the necessity of being aware of rapidly varying network QoS parameters and overcomes the complexity and overhead of gathering and managing other context information. Moreover, considering the underlying network and geographical map, the proposed method is able to inherently exploit the trajectory information of mobile users for handover decisions. UPQ degradation is not only due to the user's own behaviour, but also due to the behaviour of other users. As such, the multi-agent reinforcement learning paradigm has been considered for target PoA selection. The employed decision algorithm is based on the WoLF-PHC learning method, where UPQ is used as a delayed reward for training. The proposed handoff decision has been implemented under the IEEE 802.21 framework using the NS2 network simulator. The results show better performance of the proposed method compared to conventional methods, assuming regular movement of mobile users. © 2013 Springer Science+Business Media New York.
Shahgholi, B., Kazeminia, A., 2025 29th International Computer Conference, Computer Society of Iran, CSICC 2025 pp. 202-206
Mobile technologies have created unprecedented opportunities for innovative marketing strategies. Location-based mobile coupons are at the cutting edge of these strategies. They provide a method to offer deals to mobile customers based on where they are at a certain time. However, revealing one's location in exchange for a service poses privacy threats, ranging from the discovery of personal details to being watched or tracked. This paper proposes a novel privacy protection mechanism for location-based coupons using an anonymous authentication method. The proposed mechanism protects against two adversaries: a global adversary that observes all the exchanged location information, and an eavesdropper that listens to the communications on the wireless channel. It ensures user privacy and anonymity and prevents users from being tracked, using self-generated pseudonyms. Furthermore, the restriction applied to pseudonyms prevents all coupons from being grabbed by a greedy user. © 2013 IEEE.
Computer Communications (1873703X) 36(10-11)pp. 1101-1119
Integration of various wireless access technologies is one of the major concerns in recent wireless systems, in which multi-technology mobile devices allow users to roam between different access networks. Being an essential part of heterogeneous wireless systems, vertical handover is more complex than conventional horizontal handover. As IEEE 802.21 Media Independent Handover (MIH) is the standard addressing a uniform and media-independent framework for seamless handover between different access technologies, many works in the literature have employed MIH services in handover management. This paper presents a comprehensive survey of the proposed mobility management mechanisms that use this framework. As a comparative view, the paper categorizes the efforts according to the layer of mobility management and evaluates some of the representative methods, discussing their advantages and disadvantages. The paper also looks into recent handover decision and interface management methods that exploit MIH. Moreover, the extensions and amendments proposed for MIH are overviewed. © 2013 Elsevier B.V. All rights reserved.
Wireless Personal Communications (1572834X) 68(4)pp. 1633-1671
Recent developments in heterogeneous mobile networks emphasize the necessity of more intelligent and context-aware handover decisions. However, the complexity and overhead of collecting and managing context information are the main difficulties in context-aware handovers. The Media Independent Handover (MIH) framework, proposed by IEEE 802.21, only provides the static context of access networks through its Information Service (IS). This paper elaborates the idea of handoff-aware network context gathering for renewal of dynamic context in the MIH information server. An extension to the MIH framework is proposed to efficiently accommodate the dynamic context of access networks along with the ordinary static context in the IS. The paper presents an analytical evaluation of the proposed context gathering method in terms of context access latency and signalling overhead. The paper also presents a policy-based context-aware handover model based on the proposed extension. A well-defined policy format is proposed for straightforward description of users', devices', and applications' preferences and requirements. In contrast to traditional policy-based methods, a multi-policy scheme is proposed that exploits rank aggregation methods to employ a set of matching policies in target point of attachment selection. Simulations have been carried out in NS2 to verify the performance of the proposed context gathering method and the proposed handover decision model. Simulation results show better performance in terms of the evaluation metrics. © 2012 Springer Science+Business Media, LLC.
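The abstract mentions rank aggregation over matching policies without naming a specific method; a Borda count is one standard choice, sketched here with hypothetical PoA names as a stand-in for the paper's scheme:

```python
def borda_aggregate(rankings):
    """Aggregate per-policy PoA rankings with a Borda count.

    Each ranking lists candidate PoAs from best to worst, as one
    matching policy would order them. Borda is shown only as an
    example of a rank aggregation method; the paper's exact choice
    is not stated in the abstract.
    """
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for position, poa in enumerate(ranking):
            # Best position earns n-1 points, worst earns 0.
            scores[poa] = scores.get(poa, 0) + (n - 1 - position)
    # Highest total score wins; ties broken by name for determinism.
    return max(sorted(scores), key=lambda poa: scores[poa])
```

For example, three matching policies that mostly prefer a hypothetical "wifi1" over "lte1" would select "wifi1" as the target point of attachment.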
Turkish Journal Of Electrical Engineering And Computer Sciences (13000632) 20(6)pp. 914-933
Context-aware handover decision has recently been considered a candidate for next-generation heterogeneous wireless networks. The context-aware handover methods proposed in the literature differ in several aspects, including the location of the handover decision (distributed or centralized). Depending on the location of the decision point, the appropriate part of the context knowledge must be transferred in those methods. This paper proposes a context gathering mechanism for a policy-based context-aware handover method, which implements mobile-initiated and network-assisted handover. The proposed network context gathering mechanism is based on the media-independent handover (MIH) framework, and the paper justifies the usability of the extension through analysis of signaling overhead and latency. Another part of the context comprises the preferences of the users, applications, and network operators, for which this paper proposes an automatic policy construction procedure to gather and employ them in generating policies. This procedure eliminates the complexity of making policies in previous policy-based context-aware methods and allows employing up-to-date network context information to dynamically modify the policies. Simulation results show better performance in terms of perceived quality for sensitive traffic. © TÜBITAK.
Wireless Communications and Mobile Computing (discontinued) (15308677) 11(6)pp. 723-741
Growing demands for pervasive and ubiquitous services over wireless mobile networks, and the evolution of such networks towards heterogeneous solutions, have emphasized the necessity of more intelligent handoff decisions. The existing handoff management methods in the literature mostly use signal strength measurements and other link quality evaluations, without addressing knowledge about the context of mobile devices, users, and networks. Recently, context-aware handoff management has been considered a novel candidate for fourth generation (4G) wireless technology. In this paper, user perceived quality of service is considered in addition to traditional contexts such as user preferences, application requirements, network parameters, and link quality for decision making. User Perceived Quality (UPQ) is employed as a trigger source, in addition to link layer triggers generated by the Media Independent Handover (MIH) event service. This paper presents a policy-based mechanism for handoff decision making in which fuzzy Petri nets (FPNs) are utilized as the evaluation algorithm. A case study is provided through simulations to show the usability and user-level satisfaction. Simulation results show superior performance in terms of UPQ, jitter, and packet delivery measures. Copyright © 2009 John Wiley & Sons, Ltd.
Hasani H.R. ,
Movahhedinia, N. ,
Shahgholi, B. 2025 29th International Computer Conference, Computer Society of Iran, CSICC 2025 pp. 513-517
User judgment about quality is an important aspect of quality evaluation. The current approach to evaluating quality of service is based on measuring network-related parameters such as delay, jitter, and loss, or on objective estimation of speech degradation. However, the user's view is more meaningful than the network view or an objective estimation, since the end user is the final judge of quality. Therefore, involving the user's real opinion is an appropriate way to improve the performance of multimedia communication networks. Evaluation of user satisfaction depends on user conditions and varies from user to user. This research attempts to evaluate voice quality from the user's feedback, which is derived from his/her vocal behaviour. The idea behind this novelty is that quality degradations affect the user's mental state, including emotion and CL, and such effects are assessable from the user's speech. © 2011 IEEE.
Microelectronics Journal (00262692) 42(5)pp. 701-708
Quantum Cellular Automata (QCA) is a novel and attractive technology that enables designing and implementing high-performance, low-power digital circuits at the nano-scale. Since memory is one of the most widely used basic units in digital circuits, a fast and optimized QCA-based memory cell is valuable. Although some QCA structures for a memory cell exist in the literature, QCA characteristics can be used to design a more optimized memory cell than one obtained by blindly modeling CMOS logic in QCA. In this paper, two improved structures are proposed for a loop-based Random Access Memory (RAM) cell. The proposed methods take into account the inherent capabilities of QCA, such as the programmability of the majority gate and the clocking mechanism. The first proposed method uses fewer cells and reduces wasted area compared to the traditional loop-based RAM cell. In the second proposed method, the memory access time is doubled but an even smaller number of cells is required. Irregular placement of QCA cells in a QCA layout makes its realization troublesome, so we have also proposed alternative versions of the proposed methods that exploit the regularity of clock zones in the design, and have compared them to each other. QCADesigner has been employed to simulate the proposed designs and verify their validity. © 2010 Elsevier Ltd. All rights reserved.
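The "programmability of the majority gate" mentioned above is QCA's native logic primitive: fixing one input of the three-input majority function to a constant 0 or 1 programs the same gate as AND or OR. A truth-table sketch of that behavior:

```python
def majority(a: int, b: int, c: int) -> int:
    """Three-input majority vote, the basic QCA logic primitive."""
    return 1 if a + b + c >= 2 else 0

def qca_and(a: int, b: int) -> int:
    # Fixing one input to 0 programs the majority gate as AND.
    return majority(a, b, 0)

def qca_or(a: int, b: int) -> int:
    # Fixing one input to 1 programs the majority gate as OR.
    return majority(a, b, 1)
```

This is why a single physical gate layout can serve several logical roles in a QCA design, one of the capabilities the proposed RAM structures exploit instead of translating each CMOS gate one-for-one.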
Journal of Network and Computer Applications (10958592) 34(2)pp. 731-738
In recent years, Bluetooth has become a growing technology for ad-hoc wireless communication between embedded devices within a range of 10 m. Such a network is called a piconet, in which a master employs a round-robin mechanism to poll and serve the slaves. Many new products have recently become Bluetooth enabled, which may produce a complex mix of traffic. Two types of connection are offered in this network: the Synchronous Connection-Oriented (SCO) and the Asynchronous Connection-Less (ACL). ACL is suited to best-effort service as in IP networks, which has gained importance as nowadays most applications have moved to IP and may carry bursty traffic. In this paper, to analyze the Bluetooth ACL, a general model for a complex mix of traffic, namely the superposition of many ON-OFF mini-sources, is considered. Moreover, due to the unreliable nature of the wireless medium, a Channel Failure Rate (CFR) has been assumed. This system has been investigated analytically and its performance evaluated. The effects of channel failure and packet size on the mean delay and waiting time have also been studied by simulation. © 2010 Elsevier Ltd. All rights reserved.
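The traffic model above, a superposition of many ON-OFF mini-sources subject to a channel failure rate, can be sketched as a slot-based simulation; the transition probabilities and CFR below are illustrative placeholders, not the parameter values used in the paper's analysis:

```python
import random

def simulate_traffic(n_sources=20, steps=1000, p_on=0.1, p_off=0.3,
                     cfr=0.05, seed=42):
    """Superpose ON-OFF mini-sources and apply a channel failure rate.

    Each mini-source flips OFF->ON with probability p_on and ON->OFF
    with probability p_off per slot; an ON source offers one packet
    per slot. Each packet is lost with probability cfr. Returns the
    counts of offered and delivered packets.
    """
    rng = random.Random(seed)
    state = [False] * n_sources   # False = OFF, True = ON
    offered = delivered = 0
    for _ in range(steps):
        for i in range(n_sources):
            if state[i]:
                if rng.random() < p_off:
                    state[i] = False
            elif rng.random() < p_on:
                state[i] = True
            if state[i]:
                offered += 1
                if rng.random() >= cfr:
                    delivered += 1
    return offered, delivered
```

With these assumed rates, each source is ON a fraction p_on / (p_on + p_off) of the time, so the aggregate is bursty even though each mini-source is simple, which is the point of the superposition model.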
Journal of Systems Architecture (13837621) 55(3)pp. 180-187
Nowadays, quantum cellular automata (QCA) is considered a pioneering technology for next-generation computer designs. QCA performs computations at the nano level using molecular components as computation units. Although QCA technology provides a smaller chip area and fewer spatial constraints than earlier CMOS technology, the different characteristics and design limitations of QCA architectures demand careful attention when replacing traditional structures with QCA ones. Inherent information flow control, limited wire length, and consumed area are among such features and restrictions. In this paper, the D flip-flop structure is considered, and we propose two new D flip-flop structures that employ the inherent capabilities of QCA in timing and data flow control, rather than an ordinary replacement of CMOS elements with equivalent QCA ones. The introduced structures involve a small number of cells compared to earlier proposed ones, with the same or even lower input-to-output delay. The proposed structures have been simulated using QCADesigner and their validity has been verified. © 2008 Elsevier B.V. All rights reserved.
Computer Communications (1873703X) 30(13)pp. 2676-2685
In recent years, Multi Protocol Label Switching (MPLS) has been considered the preeminent technology for providing Quality of Service (QoS) to integrated services. However, in wireless networks the mobility of remote terminals endangers the resource management procedure and QoS provisioning. In this paper we propose a new location prediction method based on Evolving Fuzzy Neural Networks (EFuNNs) to manage Label Switched Paths (LSPs) in an MPLS domain. The proposed predictor employs the geographical characteristics of the underlying area and the movement history of a remote to produce a set of confidence ratios as its output. That set is used as a criterion for establishing and managing LSPs so that QoS is preserved. The simulation results show superior performance in terms of prediction accuracy and utilization improvement for the proposed method. © 2007 Elsevier B.V. All rights reserved.
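As a rough stand-in for the EFuNN predictor (which this sketch does not implement), a frequency-based estimator over movement history illustrates the shape of the confidence-ratio output, one ratio per neighboring cell; the geographical features used by the actual method are omitted here:

```python
from collections import defaultdict

class MovementPredictor:
    """Frequency-based stand-in for the paper's EFuNN predictor.

    Learns transition counts from a remote's movement history and
    emits a confidence ratio per neighboring cell. The real method
    additionally uses geographical characteristics of the area.
    """
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, current_cell, next_cell):
        # Record one observed hand-off from current_cell to next_cell.
        self.counts[current_cell][next_cell] += 1

    def confidence_ratios(self, current_cell, neighbors):
        # One ratio per neighbor; all zeros if the cell is unseen.
        row = self.counts[current_cell]
        total = sum(row[n] for n in neighbors)
        if total == 0:
            return {n: 0.0 for n in neighbors}
        return {n: row[n] / total for n in neighbors}
```

An LSP manager could then pre-establish paths only toward cells whose confidence ratio exceeds a threshold, which is the role the confidence set plays in the abstract.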
Shahgholi, B. ,
Shahbazi, H. ,
Kazemifard M. ,
Zamanifar, K. 2025 29th International Computer Conference, Computer Society of Iran, CSICC 2025 pp. 74-80
RoboCupRescue Simulation System is a platform for designing and implementing various artificial intelligence techniques. In rescue simulation environments, FireBrigade agents should select fire points collaboratively such that the total achieved result is optimized. In this work, we propose a new method for fire prediction and selection in FireBrigade agents. The method is based on Evolving Fuzzy Neural Networks, which produce a set of trained fuzzy rules as the rule base of the FireBrigade fire selection system, allowing targets to be selected autonomously.
2025 29th International Computer Conference, Computer Society of Iran, CSICC 2025 2006 pp. 156-161
Growing demand for different services over cellular mobile networks has emphasized the necessity of QoS provisioning. However, node mobility jeopardizes the resource allocation process and decreases the quality of service provided to delay-sensitive traffic. Thus, in such networks, remote location prediction has a significant impact on near-optimum resource management. On the other hand, in recent years MPLS has been considered the preeminent technology for providing QoS to integrated services. In this paper we propose a new location prediction method based on neural networks to manage LSPs in an MPLS domain. The proposed predictor uses the geographical characteristics of the underlying area in addition to the movement history of the remote. A set of confidence ratios is produced as the output of the predictor and is used as a criterion for establishing and managing LSPs. Each output indicates the degree of confidence for the corresponding neighboring cell, i.e., how likely the remote is to move to that cell. This procedure proposes two types of pre-established LSP, called "Simple LSP" and "Constraint-based LSP". A set of simulations with typical assumptions has been carried out to evaluate the performance and response time of the proposed method in terms of QoS.