A problem that has received recent attention is the development of negotiation/cooperation techniques for solving naturally distributed problems with privacy requirements. A substantial amount of research has focused on problems that can be modeled as distributed constraint satisfaction, where the constraints are the secrets of the participants. Distributed AI develops techniques by which the agents solve such problems without involving trusted servers. Some of the existing techniques aim for various tradeoffs between complexity and privacy guarantees [MTSY03], some aim only at high efficiency [ZM04], while others aim to offer maximal privacy [Sil03]. While the last-mentioned work achieves an important level of privacy, it appears to be very slow. The technique we propose builds on that work, maintaining the same level of privacy while being an order of magnitude faster, reaching optimal efficiency for this type of privacy. Unfortunately, all versions of the new technique have an exponential space requirement: the agents must store a value for each tuple in the search space. However, all existing techniques achieving some privacy were likewise applicable only to very small problems (typically 15 variables with 3 values each), which for our technique translates to 14MB of required memory. Practically speaking, the technique improves the privacy with which these problems can be solved, and improves the efficiency with which n/2-privacy can be achieved, while remaining inapplicable to larger problems.
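The 14MB figure quoted for a typical 15-variable, 3-value problem is consistent with storing one value per tuple of the full search space, since 3^15 = 14,348,907 tuples. A minimal sketch of this arithmetic (the one-byte-per-tuple assumption is ours, not stated in the text):

```python
# Estimate the exponential space requirement: one stored value per
# tuple in the search space of a distributed CSP instance.
def search_space_tuples(num_vars: int, domain_size: int) -> int:
    """Number of tuples in the search space: domain_size ** num_vars."""
    return domain_size ** num_vars

# Typical small instance from the text: 15 variables, 3 values each.
tuples = search_space_tuples(15, 3)
# Assuming one byte stored per tuple (an assumption for illustration):
print(f"{tuples} tuples ~ {tuples / 1e6:.1f} MB")  # 14348907 tuples ~ 14.3 MB
```

Doubling the domain size to 6 values would already require 6^15 ≈ 470 GB under the same assumption, which illustrates why the technique remains inapplicable to larger problems.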
Silaghi, M.C. (2004). A faster technique for distributed constraint satisfaction and optimization with privacy enforcement (Technical Report CS-2004-01). Melbourne, FL: Florida Institute of Technology.