Issue No. 2, March-April 2013 (vol. 28), pp. 74-80
ABSTRACT
This article studies how people reveal private information in strategic settings where participants must negotiate over resources while being uncertain about each other's objectives. The study compares two negotiation protocols that differ in whether they allow participants to disclose their objectives in a repeated negotiation setting of incomplete information. Results show that most people agree to reveal their goals when asked, and that doing so leads participants to more beneficial agreements. Machine learning was used to model the likelihood that people reveal their goals in negotiation, and this model was used to make goal-request decisions in the game. In simulation, the model is shown to outperform people making the same type of decisions. These results demonstrate the benefit of this approach toward designing agents that negotiate with people under incomplete information.
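The abstract describes an agent that learns the likelihood that a counterpart will reveal their goal when asked, then uses that estimate to decide whether to issue a goal request. The article does not publish its model, so the sketch below is purely illustrative: the features, weights, and payoff values are hypothetical stand-ins, and a simple logistic function plays the role of the learned likelihood model, with the request decision made by comparing expected utilities.

```python
import math

def p_reveal(features, weights, bias):
    """Logistic estimate of the probability that the other party
    reveals their goal when asked (illustrative model, not the
    article's actual learned classifier)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def should_request_goal(features, weights, bias,
                        u_reveal, u_refuse, u_no_request):
    """Issue a goal request only if the expected utility of asking
    (revelation vs. refusal, weighted by the model's estimate)
    exceeds the utility of not asking."""
    p = p_reveal(features, weights, bias)
    expected_if_ask = p * u_reveal + (1.0 - p) * u_refuse
    return expected_if_ask > u_no_request

# Made-up example: two features (say, the counterpart's prior
# cooperation rate and a normalized round number) and payoffs on
# an arbitrary utility scale.
features = [0.8, 0.3]
weights = [2.0, -0.5]
bias = -0.2
decision = should_request_goal(features, weights, bias,
                               u_reveal=10.0, u_refuse=2.0,
                               u_no_request=5.0)
```

Under these made-up numbers the estimated revelation probability is high enough that asking dominates staying silent, so `decision` is `True`; with a sufficiently low estimate, the same rule would withhold the request.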
INDEX TERMS
negotiation support systems, computer games, decision making, decision theory, learning (artificial intelligence), decision-theoretic reasoning, human negotiation, private information, negotiation protocols, incomplete information, repeated negotiation setting, machine learning, goal request decision making, beneficial agreements, games, protocols, collaborative work, human factors, decision support systems, decision support, computer-supported cooperative work, multiagent negotiation, evaluation/methodology
CITATION
S. Dsouza, Y. K. Gal, P. Pasquier, S. Abdallah, I. Rahwan, "Reasoning about Goal Revelation in Human Negotiation", IEEE Intelligent Systems, vol.28, no. 2, pp. 74-80, March-April 2013, doi:10.1109/MIS.2011.93