Modeling and quantitative verification of trust systems against malicious attackers
Abstract
Trust systems (TSs) are widely used to counter dishonest entities in many modern environments. However, these systems are vulnerable to attacks in which attackers attempt to deceive the system through sequences of misleading behaviors and dishonest recommendations. A robust TS is expected to function properly even in the presence of such attacks. To the best of our knowledge, simulation has so far been the main approach for evaluating TSs, and no notable verification method exists for this purpose. In this paper, a method for quantitative verification of the robustness of TSs against malicious attackers is proposed. The method is based on a formalism, named the TS attack process, for specifying any given trust model; this formalism is cast into the mathematical framework of partially observable Markov decision processes. The proposed method is capable of verifying TSs against both well-known attacks and the worst possible attack scenario. It can also be used to help adjust the parameters of a given TS. Moreover, a quantitative robustness measure is introduced, which enables comparison of the robustness of different TSs. To illustrate the applicability of the proposed method, a number of case studies analyzing and comparing selected trust models (including Subjective Logic and REGRET) are presented. © 2015 The British Computer Society.