Articles
Sedaghatbaf, A., Moradi, M., Almasizadeh, J., Sangchoolie, B., Van Acker, B., Denil, J., pp. 57-64
Cyber-Physical Systems (CPSs) are increasingly used in various safety-critical domains, so assuring the safety of these systems is of paramount importance. Fault injection is an effective testing method for analyzing the safety of CPSs. However, the total number of faults that would have to be injected into a CPS to explore the entire fault space is normally large, and limited testing budgets force testers to reduce the number of injected faults by, e.g., randomly sampling the fault space. In this paper, we propose DELFASE, an automated solution for fault space exploration that relies on Generative Adversarial Networks (GANs) to optimize the identification of critical faults and can run in two modes: active and passive. In the active mode, an active learning technique called ranked batch-mode sampling selects the faults used to train the GAN model, while in the passive mode those faults are selected randomly. The results of our experiments on an adaptive cruise control system show that, compared to random sampling, DELFASE is significantly more effective in revealing system weaknesses. Whereas random sampling achieved a fault coverage of around 10%, DELFASE in the active and passive modes achieved fault coverage as high as 89% and 81%, respectively. © 2022 IEEE.
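The ranked batch-mode sampling used in DELFASE's active mode can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: it follows the common formulation of the technique, where each pick trades off model uncertainty against similarity to already-selected points, with a weight alpha that shifts toward diversity as the labeled pool grows.

```python
import math

def ranked_batch(candidates, uncertainty, batch_size, labeled=()):
    """Illustrative ranked batch-mode sampling (assumed formulation,
    not the paper's code). candidates: list of feature tuples;
    uncertainty: per-candidate model uncertainty in [0, 1]."""
    pool = list(labeled)   # labeled points plus picks made so far
    chosen = []
    for _ in range(batch_size):
        n_unlab = len(candidates) - len(chosen)
        total = n_unlab + len(pool)
        alpha = n_unlab / total if total else 1.0
        best, best_score = None, -math.inf
        for i, x in enumerate(candidates):
            if i in chosen:
                continue
            # similarity to the closest already-selected/labeled point
            if pool:
                sim = 1.0 / (1.0 + min(math.dist(x, p) for p in pool))
            else:
                sim = 0.0
            # balance diversity (1 - sim) against uncertainty
            score = alpha * (1.0 - sim) + (1.0 - alpha) * uncertainty[i]
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
        pool.append(candidates[best])
    return chosen
```

In DELFASE's setting, the candidates would be fault configurations and the uncertainty would come from the GAN-based classifier; both are stand-ins here.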
Telecommunication Systems (10184864), 69(2), pp. 171-186
An anonymous communication network (ACN) is supposed to provide its users with anonymity attributes. In practice, we need a means of predicting the level of anonymity an ACN provides. In this paper, we propose a probabilistic model for the security analysis of ACNs, thereby quantifying their loss of anonymity. Specifically, we derive the probability distribution of the potential senders of a message sent from an unknown source to a specific destination. With this probability distribution in hand, it is possible to define and derive anonymity metrics. The evaluated metrics help us understand how vulnerable such a network may be to attacks aimed at revealing the identities of message senders. Consequently, new rerouting policies and strategies can be employed to increase the anonymity level of the network. The quantitative analysis is performed from a general perspective, and the applicability of the model is not limited to a specific network. The evaluation process of the metrics using the proposed model is clarified by an illustrative example. © 2018, Springer Science+Business Media, LLC, part of Springer Nature.
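Given such a sender probability distribution, a standard anonymity metric from the ACN literature is the normalized entropy of the distribution. The sketch below shows that generic metric as an illustration; it is not necessarily the exact metric the paper defines.

```python
import math

def anonymity_degree(sender_probs):
    """Normalized-entropy anonymity degree (generic illustration):
    H(p) / log2(N), so 1.0 means the attacker learns nothing
    (uniform distribution) and 0.0 means the sender is identified."""
    probs = [p for p in sender_probs if p > 0]
    h = -sum(p * math.log2(p) for p in probs)   # Shannon entropy
    h_max = math.log2(len(sender_probs))        # entropy of uniform dist.
    return h / h_max if h_max > 0 else 0.0
```

A uniform distribution over the senders yields 1.0 (perfect anonymity), while a point mass on one sender yields 0.0 (anonymity fully lost).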
Computer Communications (1873703X), 52, pp. 47-59
In this paper, we propose a new approach for the quantitative security analysis of computer systems. We derive a metric of how much private information about a computer system can be disclosed to attackers. In effect, we introduce a methodology that quantifies our intuitive understanding of how attackers act and how predictable they are. This metric can be considered an appropriate indicator for quantifying the security level of computer systems. We call the metric "Mean Privacy" and suggest a method for quantifying it using an information-theoretic model. For this purpose, we utilize a variant of the attack tree that systematically represents all feasible malicious attacks against the security of a system. The attack tree, as the underlying attack model, is parameterized with probability mass functions. The quantitative model expresses our intuition about the complexity of the attacks in quantitative terms. The usefulness of the proposed model lies in the context of security analysis: the analysis approach can be employed in several ways. Among several design options for a system, we can identify the most secure one using the metric as a comparative indicator. The security analysis of systems that operate under a variety of anticipated attack plans and different interaction environments can be carried out. Finally, new security policies, countermeasures, and strategies can be applied to increase the security level of the systems. © 2014 Elsevier B.V. All rights reserved.
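A minimal sketch of evaluating such an attack tree, assuming independent leaf attacks (a common simplification; the paper's "Mean Privacy" metric additionally uses probability mass functions over attacks, which this sketch does not reproduce):

```python
from math import prod

def attack_prob(node):
    """Illustrative attack-tree evaluation under independence
    assumptions. A node is ('leaf', p), ('and', [children]),
    or ('or', [children])."""
    kind = node[0]
    if kind == 'leaf':
        return node[1]
    ps = [attack_prob(child) for child in node[1]]
    if kind == 'and':
        return prod(ps)                       # all sub-attacks succeed
    return 1.0 - prod(1.0 - p for p in ps)    # at least one succeeds

# Hypothetical example: exploit a service AND escalate, OR phish directly.
tree = ('or', [('and', [('leaf', 0.9), ('leaf', 0.5)]),
               ('leaf', 0.2)])
```

For this example tree, the overall success probability is 1 - (1 - 0.45)(1 - 0.2) = 0.56.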
Computer Networks (13891286), 57(10), pp. 2159-2180
To trust a computer system that is supposed to be secure, it is necessary to predict the degree to which the system's security level can be maintained when operating in a specific environment under cyber attacks. In this paper, we propose a state-based stochastic model for obtaining quantitative security metrics representing a system's security level. The main focus of the study is on modeling the progression of an attack process over time. The basic assumption of our model is that the time parameter plays the essential role in capturing the nature of an attack process. In practice, an attack process may eventually succeed, possibly after a number of unsuccessful attempts; what matters is estimating how long it takes to be carried out. The proposed stochastic model is parameterized based on a suitable definition of time distributions describing the attacker's actions and the system's reactions over time. For this purpose, probability distribution functions are defined and assigned to the transitions of the model to characterize the temporal aspects of attacker and system behavior. With these distributions defined, the stochastic model is recognized as a semi-Markov chain. This mathematical model is solved analytically to calculate the desired quantitative security metrics, such as mean time to security failure and steady-state security. The proposed method shows a systematic development of the stochastic modeling techniques and concepts used frequently in the area of dependability evaluation, applied here to attack process modeling. Like any other modeling method, the proposed model is constructed on underlying assumptions, which are specific to the context of security analysis. © 2013 Elsevier B.V. All rights reserved.
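The mean-time-to-security-failure computation for such a model can be sketched via the embedded Markov chain: with mean sojourn times h_i and transition probabilities P, the expected times t_i to reach an absorbing (security-failure) state satisfy t_i = h_i + sum_j P[i][j] * t_j over the transient states. The sketch below is an illustrative reconstruction of that standard calculation, not the paper's exact derivation.

```python
def mean_time_to_failure(P, sojourn, absorbing):
    """Illustrative MTTSF for an absorbing (semi-)Markov model.
    P: full transition-probability matrix of the embedded chain;
    sojourn: mean sojourn time per state; absorbing: set of
    absorbing state indices. Solves (I - Q) t = h by Gaussian
    elimination over the transient states."""
    states = [i for i in range(len(P)) if i not in absorbing]
    n = len(states)
    # build (I - Q) and the right-hand side h restricted to transients
    A = [[(1.0 if a == b else 0.0) - P[states[a]][states[b]]
          for b in range(n)] for a in range(n)]
    rhs = [sojourn[s] for s in states]
    # forward elimination with partial pivoting
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        rhs[c], rhs[piv] = rhs[piv], rhs[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            rhs[r] -= f * rhs[c]
    # back substitution
    t = [0.0] * n
    for r in range(n - 1, -1, -1):
        t[r] = (rhs[r] - sum(A[r][k] * t[k]
                             for k in range(r + 1, n))) / A[r][r]
    return {states[i]: t[i] for i in range(n)}
```

For example, with two transient states where state 0 always moves to state 1 (mean sojourn 1), and state 1 returns to state 0 or fails with probability 0.5 each (mean sojourn 2), the MTTSF from state 0 is 6.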