Articles
SN Computer Science (2662-995X) 6(6)
The rise of cloud computing has transformed how we process and analyse data, particularly in the domain of machine learning as a service (MLaaS). Protecting data privacy and proprietary models has become paramount in this evolving landscape. The challenge lies in ensuring accurate and reliable inference while safeguarding sensitive elements such as model parameters (weights and biases) and client data. The security landscape has traditionally relied on cryptographic approaches, including garbled circuits (GC), homomorphic encryption (HE), and oblivious transfer (OT), to protect inference processes. However, the emergence of function secret sharing (FSS) has introduced a more streamlined approach, offering reduced computational and communication complexity. While FSS has proven effective for secure inference under semi-honest threat models, it faces a significant limitation: it depends on the assumption that the trusted third party (TTP) will not collude with other participants. This assumption represents a potential vulnerability in the system's security framework. We thoroughly examine various secure inference schemes for neural networks (NNs). By comparing the strengths and limitations of each scheme, we aim to provide researchers with valuable insights into artificial intelligence security. This comparative analysis serves as a resource for those working in related fields, particularly in neural networks, helping them make informed decisions about security implementations in their research and applications. © The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd. 2025.
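For readers new to secret-sharing-based inference, a minimal additive secret-sharing sketch in Python may help fix ideas. This is a hypothetical illustration of the simplest building block only, not the FSS construction the survey covers:

```python
import secrets

PRIME = 2**61 - 1  # illustrative field modulus

def share(value, n=2):
    """Split `value` into n additive shares; any n-1 shares alone reveal nothing."""
    shares = [secrets.randbelow(PRIME) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine all shares by summing modulo PRIME."""
    return sum(shares) % PRIME

weight = 123456           # stands in for a model parameter held by the server
parts = share(weight, 3)  # distributed to three non-colluding parties
assert reconstruct(parts) == weight
```

FSS generalizes this idea from sharing values to sharing functions, which is one source of the reduced communication complexity the abstract mentions.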
Mahvash, M.,
Moghim, N.,
Mahdavi, M.,
Amiri, M.,
Shetty, S.
Pervasive and Mobile Computing (1574-1192) 106
Cooperative spectrum sensing (CSS) in cognitive radio networks (CRNs) improves the precision of spectral decision-making but introduces vulnerabilities to attacks by malicious secondary users (SUs). This paper proposes a decentralized trust and reputation management (TRM) framework to address these vulnerabilities, emphasizing the need to mitigate the risks associated with centralized systems. Inspired by blockchain technology, we present a distributed TRM method for CSS in CRNs that significantly reduces the impact of malicious attacks. Our approach leverages a Proof of Trust (PoT) system to enhance the integrity of CSS, improving the accuracy of spectral decision-making while reducing false positives and false negatives. In this system, SUs' trust scores are dynamically updated based on their sensing reports, and these scores weight the SUs' collaborative participation in forming new blocks. Simulation results validate the effectiveness of the proposed method, indicating its potential to enhance security and reliability in CRNs. © 2024 Elsevier B.V.
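The abstract describes the trust-update rule only at a high level. A minimal sketch of one plausible realization, assuming an exponential moving average and trust-weighted selection of the block former (both hypothetical choices, not the paper's exact PoT protocol):

```python
import random

def update_trust(trust, report, consensus, alpha=0.1):
    """Move trust toward 1 when the SU's report agrees with the fused
    consensus, toward 0 otherwise (exponential moving average)."""
    target = 1.0 if report == consensus else 0.0
    return (1 - alpha) * trust + alpha * target

def pick_block_former(trust_scores):
    """Select the SU that forms the next block, weighted by trust score."""
    sus, weights = zip(*trust_scores.items())
    return random.choices(sus, weights=weights, k=1)[0]

scores = {"su1": 0.9, "su2": 0.2}
scores["su2"] = update_trust(scores["su2"], report=1, consensus=0)  # mismatch lowers trust
```

With this kind of rule, consistently dishonest SUs see their trust decay toward zero and thus lose influence over both sensing fusion and block formation.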
Journal of Supercomputing (1573-0484) 81(7)
Mobile Crowd Sensing (MCS)-based spectrum monitoring has emerged as a way to check spectrum occupancy for dynamic spectrum access. To preserve privacy, spectrum sensing reports may be submitted anonymously. However, anonymous submission increases the probability of fake reports from malicious participants. Fair rewards are also needed to encourage honest participants, which requires taking each participant's reputation into account. This research presents a method for MCS-based spectrum monitoring that uses Hyperledger Fabric and Identity Mixer (Idemix). The framework addresses security challenges such as ensuring participant anonymity, identifying malicious participants, detecting intentional and unintentional incorrect reports, and providing a secure protocol for rewarding participants. An intuitive evaluation of the security features confirms that the proposed method withstands key threats, including de-anonymization, participant misbehavior, privacy-compromising collusion among system entities, and reputation manipulation attacks. Numerical evaluations also show that the proposed method outperforms a comparable centralized method in terms of delay when the number of participants is sufficiently large: it achieves an average improvement of approximately 39% in scenarios involving 1000 to 2000 participants, and more than a twofold reduction in delay with 2000 participants. Notably, this enhancement comes without a substantial increase in signaling overhead, which remains only slightly more than double that of the centralized method. Moreover, simulations show that the proposed method can successfully distinguish malicious participants from honest ones in most scenarios. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025.
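As a toy illustration of reputation-aware rewarding, a proportional scheme can be sketched as follows (a hypothetical allocation rule for intuition only; the paper's actual protocol runs on Hyperledger Fabric with Idemix-anonymized identities):

```python
def allocate_rewards(budget, reputations):
    """Split a reward budget among participants in proportion to reputation."""
    total = sum(reputations.values())
    if total == 0:
        return {p: 0.0 for p in reputations}
    return {p: budget * r / total for p, r in reputations.items()}

# A participant with three times the reputation earns three times the reward.
rewards = allocate_rewards(100.0, {"alice": 1.0, "bob": 3.0})
```

Tying the payout to reputation in this way makes sustained honest reporting the profitable strategy even when individual reports are anonymous.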