Issue No. 09 - September (2010 vol. 22)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TKDE.2009.138
Jianshu Weng , Singapore Management University, Singapore
Zhiqi Shen , Nanyang Technological University, Singapore
Chunyan Miao , Nanyang Technological University, Singapore
Angela Goh Eck Soong , Nanyang Technological University, Singapore
Cyril Leung , The University of British Columbia, Vancouver
Usually, agents within multiagent systems represent different stakeholders that have their own distinct, and sometimes conflicting, interests and objectives. They behave so as to achieve their own objectives, even at the cost of others. Interacting with other agents therefore carries risk. A number of computational trust models have been proposed to manage such risk. However, the performance of most computational trust models that rely on third-party recommendations to derive trust degrades easily in the presence of unfair testimonies. There have been several attempts to combat the influence of unfair testimonies. Nevertheless, they are either not readily applicable, since they require additional information that is unavailable in realistic settings, or ad hoc, as they are tightly coupled with specific trust models. Against this background, a general credibility model is proposed in this paper. Empirical studies show that the proposed credibility model is more effective than related work in mitigating the adverse influence of unfair testimonies.
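To make the idea concrete, the following is a minimal illustrative sketch, not the paper's actual model: each third-party testimony is weighted by the reporter's credibility when aggregating trust, and a reporter's credibility is lowered when its testimonies deviate from the truster's own observed outcomes. All function names, parameters, and update rules here are assumptions chosen for illustration.

```python
# Hypothetical sketch of credibility-weighted trust aggregation.
# Not the model proposed in the paper; values are assumed to lie in [0, 1].

def aggregate_trust(direct, testimonies, credibility, w_direct=0.5):
    """Combine direct experience with credibility-weighted testimonies.

    direct: the agent's own trust estimate for the trustee.
    testimonies: dict mapping reporter -> reported trust value.
    credibility: dict mapping reporter -> credibility weight.
    w_direct: weight placed on direct experience (assumed parameter).
    """
    total_cred = sum(credibility[r] for r in testimonies)
    if total_cred == 0:
        return direct  # no credible reporters; fall back on direct experience
    indirect = sum(credibility[r] * v for r, v in testimonies.items()) / total_cred
    return w_direct * direct + (1 - w_direct) * indirect

def update_credibility(credibility, testimonies, outcome, rate=0.2):
    """Shift each reporter's credibility toward its observed accuracy.

    outcome: the trust value actually observed after interaction.
    rate: learning rate for the exponential update (assumed parameter).
    """
    for r, v in testimonies.items():
        accuracy = 1.0 - abs(v - outcome)  # 1 when testimony matched the outcome
        credibility[r] = (1 - rate) * credibility[r] + rate * accuracy
    return credibility
```

Under this scheme, a reporter giving consistently unfair testimonies sees its credibility, and hence its influence on aggregated trust, shrink over repeated interactions, which is the general effect the paper's credibility model aims to achieve.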
Agent, trust, credibility, unfair testimonies.
Z. Shen, C. Leung, C. Miao, J. Weng and A. G. Soong, "Credibility: How Agents Can Handle Unfair Third-Party Testimonies in Computational Trust Models," in IEEE Transactions on Knowledge & Data Engineering, vol. 22, no. 9, pp. 1286-1298, 2010.