Workshop on Privacy and Economics

In Conjunction with EC 2013
June 16, 2013, at the University of Pennsylvania, Philadelphia, PA
Organizers: Katrina Ligett and Aaron Roth

Overview: Privacy concerns (and mishaps) are escalating in social and economic settings: many large-scale market operations are, in an ad hoc manner, either negotiating policies for privacy protection or, conversely, acting as brokers for trading private data. This workshop aims to bring together researchers studying principled methodologies for dealing with the privacy issues facing modern markets, from the perspectives of both economics and computer science. This study has recently generated a small literature in the computer science community, facilitated by advances in "differential privacy" as a tool for quantifying the harm individuals suffer from a loss of privacy. This quantitative theory provides, for the first time, a rigorous tool for the formal study of privacy as an economic quantity. However, differential privacy forms its own (increasingly technical) literature, and learning it may seem an imposing start-up cost for researchers in the EC community interested in contributing. One goal of this workshop is to lower that cost and encourage broader participation in the study of the economics of privacy. The workshop will begin with a tutorial on differential privacy aimed at the broader EC community, followed by a series of short talks on recent work in the field.
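To make the notion concrete for readers new to the area, the classic Laplace mechanism is the simplest example of a differentially private algorithm: it answers a numeric query while limiting what the output reveals about any one individual. The following is an illustrative sketch, not part of the tutorial materials:

```python
import random

def laplace_mechanism(true_answer, sensitivity, epsilon, rnd=random):
    """Release a noisy answer satisfying epsilon-differential privacy.

    Adds noise drawn from Lap(sensitivity / epsilon): a smaller epsilon
    (stronger privacy) means more noise, hence a less accurate answer.
    """
    scale = sensitivity / epsilon
    # A Laplace(0, scale) draw is the difference of two independent
    # exponential draws, each with mean `scale`.
    noise = rnd.expovariate(1.0 / scale) - rnd.expovariate(1.0 / scale)
    return true_answer + noise

# Counting query ("how many individuals satisfy property P?"):
# one person joining or leaving changes the count by at most 1,
# so the sensitivity is 1.
noisy_count = laplace_mechanism(true_answer=42, sensitivity=1.0, epsilon=0.5)
```

The key point for the economic perspective is that epsilon acts as a dial: it trades accuracy of the released statistic against a quantitative bound on any individual's privacy loss.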

Call for Participation: Following a tutorial, the workshop will consist of a series of short talks. We invite submissions of abstracts (500 words or fewer), from which we will select a small number (3-5) for presentation at the workshop. Talks on both work in progress and previously published work are encouraged. Please submit your abstract by April 20 by emailing it to katrina [at] caltech [dot] edu. Decisions will be made by the end of April (before the early-registration deadline for EC).

Time Talk/Drink/Eat
8:00am-9:00am Coffee and Continental Breakfast
9:00am-10:00am Tutorial on Differential Privacy and Mechanism Design -- Aaron Roth. See also this survey.
10:00am-10:30am David Xiao -- "Is Privacy Compatible with Incentives?"
10:30am-11:00am Coffee!
11:00am-11:30am Mallesh Pai -- "Differential Privacy as a Tool in Mechanism Design: Approximate Strategyproofness in Large Games"
11:30am-12:00pm Ankit Sharma -- "When Does Differentially Private Information Help?"
12:00pm-12:30pm Zhiyi Huang -- "Differentially Private and Truthful Mechanisms"
12:30pm-2:00pm Lunch
2:00pm-2:30pm Gergely Biczók -- "Interdependent Privacy -- Let me Share Your Data"
2:30pm-3:00pm Rachel Cummings -- "Individual Preferences for Privacy"
3:00pm-3:30pm Garrett Johnson -- "Impact of Privacy Policy on the Auction Market for Online Display Advertising"
3:30pm-4:00pm Coffee!


Title: Is Privacy Compatible with Incentives?
Speaker: David Xiao
Individuals value their privacy, and therefore are unlikely to reveal their private information without incentives.  Working in a game-theoretic framework, we study how to protect players while also encouraging them to reveal their private information.  Differential privacy gives a strong notion of privacy in this setting, but it turns out that it can come into conflict with the equally desirable property of truthfulness, and so care must be taken in order to achieve both simultaneously.  For games with small type spaces, we present a generic transformation that makes mechanisms differentially private while preserving truthfulness.  We also explore the question of how to model players' valuations of privacy and how this may affect the truthfulness of mechanisms.

Title: Differential Privacy as a Tool in Mechanism Design: Approximate Strategyproofness in Large Games
Speaker: Mallesh Pai
We consider "large games" -- strategic interactions, like commuter traffic or stock investing, among many players, each of whom has only a small effect on the welfare of others. One might like to design mechanisms in such games that suggest equilibrium play to the participants, but there are two potential concerns. The first is privacy: the computation of an equilibrium may reveal sensitive information about the utility function of one of the agents (e.g., his destination for that day's drive, or his stock portfolio). The second concern relates to incentives: it may be beneficial for one of the agents to misreport his utility function to cause the mechanism to select a preferred, purported "equilibrium" of the reported game. We show how differential privacy can be brought to bear on both of these problems: we give a privacy-preserving mechanism for computing the equilibria of a large game, which in turn implies an approximately truthful equilibrium selection mechanism.

Title: When Does Differentially Private Information Help?
Speaker: Ankit Sharma
Recessions and financial disasters are believed to be caused in part by financial decision-makers not knowing about relevant actions taken by others. However, a central agency providing such information could violate privacy. In this work we ask: to what extent can providing differentially private information about others' actions help? Specifically, we consider natural games modeling decision-making in financial markets, in which the information available to participants about other players' actions can greatly affect the quality of the outcome. In particular, these games have the property that with no information about others' actions there is significant risk of disaster, yet with full information such bad events can be avoided. For these games we consider the question of providing differentially private information, and give positive results showing that this information can indeed help substantially in several natural game models.

Title: Differentially Private and Truthful Mechanisms
Speaker: Zhiyi Huang
We consider designing differentially private and truthful mechanisms for social welfare maximization. On the information-theoretic side, we propose using the exponential mechanism with an appropriately chosen payment scheme; the resulting mechanism is truthful, private, and approximately maximizes social welfare for all problems [Huang, Kannan, FOCS 2012]. Our implementation of the exponential mechanism generalizes the VCG mechanism, in the sense that VCG is the extreme case where the privacy parameter goes to infinity. On the computational side, we consider the combinatorial public project problem with coverage valuation functions. We introduce the first polynomial-time, differentially private, and truthful mechanism whose social welfare is at least a (1-1/e) fraction of the optimal, less an o(1) additive loss [Huang, working paper 2013].
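For readers unfamiliar with the exponential mechanism the abstract refers to, the following toy sketch shows the basic idea: select an outcome with probability exponentially weighted by its score, so that high-welfare outcomes are favored while any single agent's report has limited influence. (The project names and values are made up for illustration; this is not the paper's mechanism or payment scheme.)

```python
import math
import random

def exponential_mechanism(outcomes, score, sensitivity, epsilon, rnd=random):
    """Pick an outcome with probability proportional to
    exp(epsilon * score(o) / (2 * sensitivity)): higher-scoring
    outcomes are more likely, but no single agent's data can
    change any outcome's probability by much.
    """
    scores = [score(o) for o in outcomes]
    # Subtract the max score before exponentiating, for numerical stability.
    m = max(scores)
    weights = [math.exp(epsilon * (s - m) / (2.0 * sensitivity)) for s in scores]
    return rnd.choices(outcomes, weights=weights, k=1)[0]

# Toy public-project example: score each project by total reported value;
# one agent changes a total by at most 1, so sensitivity is 1.
projects = ["park", "library", "pool"]
values = {"park": 10.0, "library": 30.0, "pool": 12.0}
choice = exponential_mechanism(projects, values.get, sensitivity=1.0, epsilon=2.0)
```

As epsilon grows, the distribution concentrates on the score-maximizing outcome, which is the sense in which the mechanism above degenerates to welfare maximization (and, with suitable payments, to VCG) as the privacy parameter goes to infinity.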

Title: Interdependent Privacy -- Let me Share Your Data
Speaker: Gergely Biczók
Users share massive amounts of personal information and opinions with each other and with various service providers every day. In such an interconnected setting, the privacy of individual users is bound to be affected by the decisions of others, giving rise to a phenomenon we term interdependent privacy. In this paper we define online privacy interdependence, show its existence through a study of Facebook application permissions, and model its impact through an Interdependent Privacy Game (IPG). We show that the resulting negative externalities can steer the system into equilibria that are inefficient for both users and the platform vendor. We discuss how the underlying incentive misalignment, the absence of risk signals, and low user awareness contribute to these unfavorable outcomes. We also reflect on recent changes to the Facebook application install process.

Title: Impact of Privacy Policy on the Auction Market for Online Display Advertising
Speaker: Garrett Johnson
The advent of online advertising has simultaneously created unprecedented opportunities for advertisers to target consumers and prompted privacy concerns among consumers and regulators. This paper estimates the financial impact of privacy policies on the online display ad industry by applying an empirical model to a proprietary auction dataset. Two challenges complicate the analysis. First, while the advertisers are assumed to publicly observe tracking profiles, the econometrician does not see this data. My model overcomes this challenge by disentangling the unobserved premium paid for certain users from the observed bids. In order to simulate a market in which advertisers can no longer track users, I set the unobserved bid premium's variance to zero. Second, the data provider uses a novel auction mechanism in which first-price bidders and second-price bidders operate concurrently. I develop new techniques to analyze these hybrid auctions. I consider three privacy policies that vary by the degree of user choice. My results suggest that online publisher revenues drop by 3.9% under an opt-out policy, 34.6% under an opt-in policy, and 38.5% under a tracking ban. Total advertiser surplus drops by 4.6%, 40.9%, and 45.5% respectively.

Title: Individual Preferences for Privacy
Speaker: Rachel Cummings
In the digital age, many companies maintain records of their customers' individual behavior - credit card companies, online and brick-and-mortar retailers, insurance companies, and social media websites are a few examples. Each of these companies has sensitive information that uniquely links each user to their account. Individual behavior is no longer anonymous! These companies have developed services intended to benefit their customers, but users must incur some privacy loss in order to participate. Do users always gain from companies knowing their personal habits? When are users willing to change their behavior to prevent companies from learning their true preferences?
Most of the previous research in this area has focused on differential privacy, where each agent's private information can have only a very small effect on the outcome of a mechanism. Differential privacy relies crucially on adding enough random noise to the outcome so that any individual's input is sufficiently obscured. However, in many real-world settings each participating agent is identified, and no noise can be added. Our main contribution is to develop a new model of privacy-aware preferences for studying individual choice behavior. To model the trade-off between protecting informational privacy and achieving a more desired outcome, we assume each agent has preferences over outcome-privacy pairs. A social planner asks each player to report her private type, or her preferences, and then uses a social choice function which maps reported types to outcomes. To formalize the loss of privacy, we assume an adversary knows the social choice function and observes the outcome. The adversary then learns that the agent's reported type must be in the set of types that map to the observed outcome. Previous work has assumed that agents have preferences over outcomes, subject to a privacy constraint. By allowing agents to have preferences over privacy and outcomes together, our model applies to a wider variety of practical settings. We apply our model of individual privacy preferences to incomplete contracts in the principal-agent model, where a buyer and a seller enter into a binding contract to trade an item of unknown common value. After the contract has been written, the seller can exert some costly effort which will raise the value of the item. The item's value is then revealed, and, as is standard in the economics literature, we assume Nash bargaining occurs to determine a selling price. When the seller's effort level can be made private through the use of random noise, we show that a player's preferences for privacy depend on her risk attitude.
We assign types based on risk attitudes, such that all players of the same type will have the same privacy preferences. Going forward, we plan to use insights gained from incomplete contracts to characterize the set of optimal social choice functions within our model more generally. The challenge is intrinsic to the nature of this problem: the social planner is trying to choose the best outcome with respect to the agents' preferences, while the agents are trying to prevent their preferences from being seen by the planner. We hope to find a closed-form representation of the social choice functions that are Pareto optimal across types.