Accepted papers

Svetlana Obraztsova, Omer Lev, Maria Polukarov, Zinovi Rabinovich and Jeffrey S. Rosenschein. Farsighted Voting Dynamics

Abstract: Iterative voting is a voting model, studied over the past few years, in which a player is presented with an election poll and changes their vote to immediately influence the result. Several extensions of this model have been proposed, including some attempts to handle the uncertainty facing the players, but all of them retain the myopic assumption: players change their vote only when they believe that their move alone might change the outcome. This paper tackles this assumption by bounding the farsightedness of the players: a player changes their vote if they believe that, should a certain number of other voters change their votes as well, the outcome might change. We show that players with the same degree of farsightedness converge to a Nash equilibrium under plurality, and that under veto even players with varying degrees of farsightedness always converge. However, we show that farsightedness is not necessarily a positive feature; in several cases it is better for the player to be myopic.
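To make the baseline concrete, here is a minimal Python sketch of myopic iterative plurality voting, the dynamics that this paper extends with a farsightedness bound; the farsighted variant itself is not implemented here. The preference profile, the lexicographic tie-breaking, and the round cap are illustrative assumptions rather than details from the paper.

```python
# Toy sketch of myopic iterative plurality voting (the baseline that this
# paper extends with bounded farsightedness).  The profile, tie-breaking,
# and round cap are illustrative assumptions.

def winner(votes, candidates):
    """Plurality winner, ties broken lexicographically."""
    scores = {c: sum(1 for v in votes if v == c) for c in candidates}
    best = max(scores.values())
    return min(c for c in candidates if scores[c] == best)

def myopic_dynamics(prefs, candidates, max_rounds=100):
    """Voters start truthfully and, in turn, switch to any vote that yields a
    strictly preferred winner; stop when nobody wants to move."""
    votes = [p[0] for p in prefs]                   # truthful start: top choices
    for _ in range(max_rounds):
        moved = False
        for i, pref in enumerate(prefs):
            current = winner(votes, candidates)
            for alt in candidates:
                trial = votes[:i] + [alt] + votes[i + 1:]
                if pref.index(winner(trial, candidates)) < pref.index(current):
                    votes[i] = alt                  # myopic improvement step
                    moved = True
                    break
        if not moved:                               # no one wants to deviate
            break
    return votes, winner(votes, candidates)

# Hypothetical profile: 4 voters ranking candidates a, b, c.
prefs = [["a", "b", "c"], ["a", "c", "b"], ["b", "c", "a"], ["c", "b", "a"]]
print(myopic_dynamics(prefs, ["a", "b", "c"]))
```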
Camera-ready version (pdf)

Svetlana Obraztsova, Edith Elkind, Maria Polukarov and Zinovi Rabinovich. Doodle poll games

Abstract: In Doodle polls, each voter may approve a subset of the available alternatives according to her preferences. While such polls can be captured by the standard models of Approval voting, Zou et al. (2015) analyse real-life Doodle poll data and conclude that poll participants' behavior seems to be affected by considerations other than their intrinsic preferences over the alternatives. In particular, they propose a model of social voting, where participants approve their top alternatives as well as additional 'safe' choices so as to appear cooperative. The appeal of their model is that it appears to be consistent with the real-life data. However, Zou et al. explicitly describe the voters' strategies rather than attempt to rationalize them in terms of voters' preferences. In this paper, we complement the work of Zou et al. by putting forward a model in which the behavior described by Zou et al. arises as an equilibrium strategy. We do so by formally modeling the bonus that voters derive from approving additional alternatives. We consider two scenarios: one where this bonus is accumulated over all approved alternatives, and one where it is capped by a certain threshold. The former model turns out to be easier to analyse, but its predictions do not appear realistic; in contrast, trembling hand perfect Nash equilibria of the latter model behave consistently with the model of Zou et al.
Camera-ready version (pdf)

Elliot Anshelevich, Onkar Bhardwaj and John Postl. Approximating Optimal Social Choice under Metric Preferences

Abstract: We examine the quality of social choice mechanisms from a utilitarian viewpoint, in which all of the agents have costs for each of the possible alternatives. While these underlying costs determine what the optimal alternative is, they may be unknown to the social choice mechanism; instead, the mechanism must decide on a good alternative based only on the agents' ordinal preferences, which are induced by the underlying costs. Due to its limited information, such a social choice mechanism cannot simply select the alternative that minimizes the total social cost (or minimizes some other objective function). Thus, we seek to bound the distortion: the worst-case ratio between the social cost of the selected alternative and that of the optimal alternative. Distortion measures how good a mechanism is at approximating the alternative with minimum social cost, while using only ordinal preference information. The underlying costs can be arbitrary, implicit, and unknown; our only assumption is that the agent costs form a metric space, which is a natural assumption in many settings. We quantify the distortion of many well-known social choice mechanisms. We show that for both total social cost and median agent cost, many positional scoring rules have large distortion, while Copeland and similar mechanisms perform optimally or near-optimally, always obtaining a distortion of at most 5. We also give lower bounds on the distortion that can be achieved by any deterministic social choice mechanism, and extend our results on median agent cost to more general objective functions.
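Purely as an illustration of the distortion measure (not a computation from the paper), the sketch below places agents and candidates on a line, induces ordinal preferences from the hidden distances, runs Copeland on those preferences, and reports the ratio between the social cost of the chosen alternative and the optimum. All positions are made-up numbers; the paper's bounds are worst-case over all consistent metrics.

```python
# Illustrative computation of distortion on a tiny 1-D metric instance.
# Agent and candidate positions are hypothetical; the paper's bounds are
# worst-case over all metrics consistent with the ordinal preferences.
from itertools import combinations

agents     = [0.0, 0.1, 0.2, 0.9]            # hypothetical positions on a line
candidates = {"A": 0.15, "B": 0.95}

def cost(a, c):                               # metric cost = distance on the line
    return abs(a - candidates[c])

# Ordinal preferences induced by the (hidden) costs.
prefs = [sorted(candidates, key=lambda c: cost(a, c)) for a in agents]

def copeland_winner():
    """Candidate winning the most pairwise majority contests."""
    wins = {c: 0 for c in candidates}
    for x, y in combinations(candidates, 2):
        x_beats = sum(1 for p in prefs if p.index(x) < p.index(y))
        if x_beats * 2 > len(prefs):
            wins[x] += 1
        elif x_beats * 2 < len(prefs):
            wins[y] += 1
    return max(wins, key=wins.get)

def social_cost(c):
    return sum(cost(a, c) for a in agents)

chosen  = copeland_winner()
optimal = min(candidates, key=social_cost)
print("distortion on this instance =", social_cost(chosen) / social_cost(optimal))
```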
Camera-ready version (pdf)

Elliot Anshelevich and Shreyas Sekar. Market Price Competition in Networks

Abstract: We study the efficiency of allocations in large markets with a network structure where every seller owns an edge in a graph and every buyer desires a path connecting some nodes. While it is known that stable allocations can be very inefficient, the exact properties of equilibria in markets with multiple sellers are not fully understood, even in single-source single-sink networks. In this work, we show that for a large class of buyer demand functions, equilibrium always exists and allocations can often be close to optimal. In the process, we characterize the structure and properties of equilibria using techniques from min-cost flows, and obtain tight bounds on efficiency in terms of the various parameters governing the market, especially the number of monopolies M.
Although monopolies can cause large inefficiencies in general, our main results for single-source single-sink networks indicate that for several natural demand functions the efficiency only drops linearly with M. For example, for concave demand we prove that the efficiency loss is at most a factor 1+M/2 from the optimum, for demand with monotone hazard rate it is at most 1+M, and for polynomial demand the efficiency decreases logarithmically with M. In contrast to previous work that showed that monopolies may adversely affect welfare, our main contribution is showing that monopolies may not be as 'evil' as they are made out to be. Finally, we consider more general, multiple-source networks and show that in the absence of monopolies, mild assumptions on the network topology guarantee an equilibrium that maximizes social welfare.
Camera-ready version (pdf)

Jose Correa, Marcos Kiwi, Neil Olver and Alberto Vera. Adaptive Rumor Spreading

Abstract: Rumor spreading is a basic model for information dissemination in a social network.
In this setting a central concern for an entity, say the service provider, is to find ways to speed up the dissemination process and to maximize the overall information spread.
In the last decade there have been multiple approaches to deal with this loosely defined problem, including the well-known influence maximization problem.
A central issue absent in the first model is that of adaptivity.
How can the service provider use information about the current state of the network to cost-effectively speed up the process? Motivated by the recent emergence of so-called opportunistic communication networks, we take a novel approach by considering the issue of adaptivity in the most basic continuous-time (asynchronous) rumor spreading process.
In our setting a rumor has to be spread to a population and the service provider can push it at any time to any node in the network and has unit cost for doing this.
On the other hand, as usual in rumor spreading, upon meeting, nodes share the rumor and this imposes no cost on the service provider.
Rather than fixing a budget on the number of pushes, we consider the cost version of the problem with a fixed deadline and ask for a minimum cost strategy that spreads the rumor to every node.
A non-adaptive strategy can only intervene at the beginning and at the end, while an adaptive strategy has full knowledge and intervention capabilities.
Our main result is that in the homogeneous case (where every pair of nodes randomly meet at the same rate) the benefit of adaptivity is bounded by a constant.
This requires a subtle analysis of the underlying random process that is of interest in its own right.
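As an illustration only, the following Python sketch simulates a homogeneous asynchronous rumor-spreading process together with a naive non-adaptive strategy that seeds one node and then buys a push for every node still uninformed at the deadline. The normalization of the meeting rate, the population size, the deadline, and the strategy itself are assumptions made for the example, not the paper's model or analysis.

```python
# Minimal simulation of homogeneous asynchronous rumor spreading, with a
# simple non-adaptive strategy.  One common normalization is used: each of
# the n nodes initiates a meeting with a uniformly random node at rate 1,
# so meetings occur at total rate n.  All parameters are illustrative.
import random

def simulate(n=200, deadline=3.0, seed_pushes=1):
    informed = set(random.sample(range(n), seed_pushes))   # initial paid pushes
    t = 0.0
    while t < deadline and len(informed) < n:
        t += random.expovariate(n)                          # time to next meeting
        if t >= deadline:
            break
        a, b = random.sample(range(n), 2)                   # a random pair meets
        if a in informed or b in informed:                   # sharing is free
            informed |= {a, b}
    # Non-adaptive completion: push to every node still uninformed at the deadline.
    return seed_pushes + (n - len(informed))

costs = [simulate() for _ in range(50)]
print("average cost of the naive non-adaptive strategy:", sum(costs) / len(costs))
```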
Camera-ready version (pdf)

Christian Kroer and Tuomas Sandholm. Imperfect-Recall Abstractions with Bounds

Abstract: We develop the first general, algorithm-agnostic, solution quality guarantees for Nash equilibria and approximate self-trembling equilibria computed in imperfect-recall abstractions, when implemented in the original (perfect-recall) game. Our results are for a class of games that generalizes the only previously known class of imperfect-recall abstractions where any results had been obtained. Further, our analysis is tighter in two ways, each of which can lead to an exponential reduction in the solution quality error bound.
We then show that for extensive-form games that satisfy certain properties, the problem of computing a bound-minimizing abstraction for a single level of the game reduces to a clustering problem, where the increase in our bound is the distance function. This reduction leads to the first imperfect-recall abstraction algorithm with solution quality bounds. We proceed to show a divide in the class of abstraction problems. If payoffs are at the same scale at all information sets considered for abstraction, the input forms a metric space, and this immediately yields a $2$-approximation algorithm for abstraction. Conversely, if this condition is not satisfied, we show that the input does not form a metric space. Finally, we provide computational experiments to evaluate the practical usefulness of the abstraction techniques. They show that running counterfactual regret minimization on such abstractions leads to good strategies in the original games.
Camera-ready version (pdf)

Catherine Moon and Vincent Conitzer. Role Assignment for Game-Theoretic Cooperation

Abstract: In multiagent systems, agents often need to be assigned to different roles. Multiple aspects should be taken into account for this, such as agents' skills and constraints posed by existing assignments. In this paper, we focus on another aspect: when the agents are self-interested, careful role assignment is necessary to make cooperative behavior an equilibrium of the repeated game. We formalize this problem and provide an easy-to-check necessary and sufficient condition for a given role assignment to induce cooperation. However, we show that determining whether such a role assignment exists is NP-hard in general. Nevertheless, we give two algorithms for solving the problem. The first is based on a mixed-integer linear program formulation. The second is based on a dynamic program, and runs in pseudopolynomial time if the number of agents is constant. However, the first algorithm is much faster in our experiments.
Camera-ready version (pdf)

Amos Fiat, Yishay Mansour and Mariano Schain. History-Independent Distributed Multi-Agent Learning

Abstract: How should we evaluate a rumor? We address this question in a setting where multiple agents seek an estimate of the probability, $b$, of some future binary event. A common uniform prior on $b$ is assumed.
A rumor about $b$ meanders through the network, evolving over time. The rumor evolves not because of ill will or noise, but because agents incorporate private signals about $b$ before passing on the (modified) rumor. The loss to an agent is the (realized) square error of her opinion. Our setting adds strategic behavior, based on evidence regarding an exogenous event, to current models of rumor/influence propagation in social networks. We study a simple Exponential Moving Average (EMA) for combining experience evidence and trusted advice (the rumor), quantifying its resulting performance and comparing it to the optimum achievable by a Bayes posterior over the agents' private signals. We study the quality of $p_T$, the prediction of the last agent along a chain of $T$ rumor-mongering agents. The prediction $p_T$ can be viewed as an aggregate estimator of $b$ that depends on the private signals of $T$ agents. We show that:
1. When agents know their position in the rumor-mongering sequence, the expected mean square error of the aggregate estimator is $\Theta(\frac{1}{T})$. Moreover, with probability $1-\delta$, the aggregate estimator's deviation from $b$ is $\Theta\left(\sqrt{\frac{\ln (1/\delta)}{T}}\right)$.
2. If the position information is not available, and agents are in equilibrium, the aggregate estimator has a mean square error of $O(\frac{1}{\sqrt{T}})$. Furthermore, with probability $1-\delta$, the aggregate estimator's deviation from $b$ is $\widetilde{O}\left(\sqrt{\frac{\ln(1/\delta)}{\sqrt{T}}}\right)$.
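To illustrate the position-aware case, here is a toy Python simulation of a chain of agents that each combine the incoming rumor with a single private Bernoulli(b) signal via an exponential moving average. The weight 1/(t+1) used below is an illustrative choice (it turns the rumor into a running average of the signals, giving mean square error on the order of 1/T); it is not necessarily the exact update rule analysed in the paper.

```python
# Toy simulation of a chain of rumor-mongering agents, each combining the
# incoming rumor with one private Bernoulli(b) signal via an exponential
# moving average.  The position-dependent weight 1/(t+1) is an illustrative
# assumption, not taken from the paper.
import random

def rumor_chain(b, T, prior=0.5):
    p = prior                                           # common uniform prior => start at 1/2
    for t in range(1, T + 1):
        signal = 1.0 if random.random() < b else 0.0    # agent t's private evidence about b
        alpha = 1.0 / (t + 1)                           # position-aware weight
        p = (1 - alpha) * p + alpha * signal            # EMA update, then pass the rumor on
    return p

b, T, runs = 0.3, 1000, 200
mse = sum((rumor_chain(b, T) - b) ** 2 for _ in range(runs)) / runs
print("empirical mean square error of p_T:", mse)       # roughly on the order of 1/T
```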
Camera-ready version (pdf)

Anjon Basak and Christopher Kiekintveld. Abstraction Using Analysis of Subgames

Abstract: Normal form games are one of the most familiar representations for modeling interactions among multiple agents. However, modeling many realistic interactions between agents results in games that are extremely large. In these cases, computing standard solutions such as Nash equilibrium may be intractable. To overcome this issue, the idea of abstraction has been investigated, most prominently in research on computer poker. Solving a game using abstraction requires some method to simplify the game before it is analyzed. We study a new variation for solving normal form games using abstraction that is based on finding and solving suitable subgames. We compare this method with several variations of a common type of abstraction based on clustering similar strategies.
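For concreteness, here is a minimal Python sketch of the clustering-based style of abstraction the authors compare against: row strategies with similar payoff vectors are merged, and the abstract game uses the cluster averages. The payoff matrix, the distance threshold, and the greedy clustering rule are illustrative assumptions, not the specific variations studied in the paper.

```python
# Minimal sketch of strategy-clustering abstraction for a normal-form game:
# row strategies whose payoff vectors are close are merged, and the abstract
# game uses the cluster average.  Matrix, threshold, and clustering rule are
# illustrative assumptions.
import numpy as np

payoffs = np.array([            # row player's payoffs in a hypothetical game
    [3.0, 0.0, 1.0],
    [3.1, 0.1, 1.0],            # nearly identical to row 0 -> should merge
    [0.0, 2.0, 2.0],
])

def cluster_rows(A, eps=0.5):
    """Greedy clustering: a row joins the first cluster whose representative
    payoff vector is within eps (Euclidean distance), otherwise it starts a
    new cluster."""
    clusters = []
    for i, row in enumerate(A):
        for c in clusters:
            if np.linalg.norm(A[c[0]] - row) <= eps:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

clusters = cluster_rows(payoffs)
abstract_game = np.array([payoffs[c].mean(axis=0) for c in clusters])
print("clusters:", clusters)
print("abstracted payoff matrix:\n", abstract_game)
```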
Camera-ready version (pdf)

Haifeng Xu, Albert Xin Jiang, Arunesh Sinha, Zinovi Rabinovich, Shaddin Dughmi and Milind Tambe. Security Games with Information Leakage: Modeling and Computation

Abstract: Most models of Stackelberg security games assume that the attacker only knows the defender's mixed strategy, but is not able to observe (even partially) the instantiated pure strategy. Such partial observation of the deployed pure strategy, an issue we refer to as information leakage, is a significant concern in practical applications. While previous research on patrolling games has considered the attacker's real-time surveillance, our setting, and therefore our models and techniques, are fundamentally different. More specifically, after describing the information leakage model, we start with an LP formulation to compute the defender's optimal strategy in the presence of leakage. Perhaps surprisingly, we show that a key subproblem in solving this LP (more precisely, the defender oracle) is NP-hard even for the simplest of security game models. We then approach the problem from three possible directions: efficient algorithms for restricted cases, approximation algorithms, and heuristic sampling algorithms that improve upon the status quo. Our experiments confirm the necessity of handling information leakage and the advantage of our algorithms.
Camera-ready version (pdf)

Amulya Yadav, Thanh Nguyen, Francesco Delle Fave, Milind Tambe, Noa Agmon, Manish Jain, Widodo Ramono and Timbul Batubara. Handling Payoff Uncertainty with Adversary Bounded Rationality in Green Security Domains

Abstract: Research on Stackelberg Security Games (SSG) has recently shifted to green security domains, for example, protecting wildlife from illegal poaching. Previous research on this topic has advocated the use of behavioral (bounded rationality) models of adversaries in SSG. As its first contribution, this paper provides the first validation of these behavioral models based on real-world data from a wildlife park. The paper's second contribution is the first algorithm to handle payoff uncertainty, an important concern in green security domains, in the presence of such adversarial behavioral models.
Camera-ready version (pdf)