A Review on Secure Inference From Neural Networks
Abstract
The rise of cloud computing has transformed how we process and analyse data, particularly in machine learning as a service (MLaaS). Protecting data privacy and proprietary models has become paramount in this evolving landscape. The challenge lies in ensuring accurate and reliable inference while safeguarding sensitive elements such as model parameters (weights and biases) and client data. Secure inference has traditionally relied on cryptographic techniques, including garbled circuits (GC), homomorphic encryption (HE), and oblivious transfer (OT), to protect the inference process. More recently, function secret sharing (FSS) has introduced a more streamlined approach, offering reduced computational and communication complexity. While FSS has proven effective for secure inference under semi-honest threat models, it faces a significant limitation: it depends on the assumption that the trusted third party (TTP) does not collude with other participants, which constitutes a potential vulnerability in the system's security framework. We thoroughly examine secure inference schemes for neural networks (NNs), comparing the strengths and limitations of each to provide researchers with valuable insights into artificial intelligence security. This comparative analysis serves as a resource for those working in related fields, particularly in neural networks, helping them make informed decisions about security implementations in their research and applications.
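
To make the TTP trust assumption concrete, the following is a minimal, self-contained sketch (not a protocol from the surveyed literature) of dealer-assisted secret sharing: two parties evaluate one private product x * w using a Beaver multiplication triple supplied by a dealer, a building block in the same spirit as the correlated randomness that FSS-based schemes rely on. All names and parameters here are hypothetical and chosen for illustration only.

```python
# Illustrative sketch: two-party additive secret sharing over a prime field,
# with a dealer (TTP) supplying a Beaver multiplication triple.
import random

P = 2**61 - 1  # prime modulus for the toy field

def share(value):
    """Split a value into two additive shares: s0 + s1 = value (mod P)."""
    s0 = random.randrange(P)
    return s0, (value - s0) % P

# Dealer (TTP) phase: generate a correlated triple c = a * b and share it.
# If the dealer colluded with either party, the masks a and b would leak,
# and with them the secrets opened below -- the non-collusion assumption
# the abstract highlights.
a, b = random.randrange(P), random.randrange(P)
a0, a1 = share(a)
b0, b1 = share(b)
c0, c1 = share((a * b) % P)

# Online phase: parties hold shares of a private input x and a weight w.
x, w = 42, 7                      # client input and model weight (secrets)
x0, x1 = share(x)
w0, w1 = share(w)

# Each party publishes its share of the masked values d = x - a, e = w - b.
d = (x0 - a0 + x1 - a1) % P       # opening d reveals nothing about x,
e = (w0 - b0 + w1 - b1) % P       # since a and b are uniform random masks

# Local computation of shares of z = x * w.
z0 = (c0 + d * b0 + e * a0 + d * e) % P   # party 0 adds the public d*e term
z1 = (c1 + d * b1 + e * a1) % P

assert (z0 + z1) % P == (x * w) % P       # reconstruction yields x * w
```

The final reconstruction recovers x * w from the two shares, yet neither party alone learns x or w; security rests entirely on the dealer's randomness staying hidden from both parties, which is exactly the collusion vulnerability discussed above.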