Collective rationality has attracted much attention from formal theorists, but philosophically, much of it is still poorly understood. The difficulties are easily motivated. As long as we only aggregate preferences (as we do in the case of majoritarian decision-making), there are different proposals for how to do so, and arguments can be developed on their behalf. However, there are voting methods that use rankings other than ordinal ones, and arguments for specific preference-based methods fail to be effective against such methods. For example, no such argument can address a proposal that, say, a committee should make a hiring decision by using a 100-point system. On this proposal, each committee member assigns between 0 and 100 points to each applicant, and these assignments are then aggregated by, for example, averaging. Would that system be as reasonable as purely preference-based methods? Or can we distinguish conditions under which ordinal rankings are appropriate from conditions under which other rankings are appropriate? Such questions call for a theory of voting methods that assesses, first, the conditions under which particular kinds of rankings are appropriate (e.g., ordinal rankings or point systems); second, what specific voting method(s) is (are) appropriate for each kind of ranking; and third, what the criteria for appropriateness are in both cases. Recall my explanation at the start that most of my research can be understood as interrogating notions of justice and fairness: it is through this search for criteria of appropriateness that considerations of fairness enter here. We are far from having such a theory, and it is precisely this point that keeps me fascinated with this area.
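The 100-point proposal can be made concrete in a few lines. The following is my own illustrative sketch (the applicant names, scores, and the choice of averaging as the aggregation rule are all hypothetical), showing how such a point system can rank applicants differently than one might expect from any single ballot:

```python
# Hypothetical sketch of the 100-point system described above: each
# committee member assigns 0-100 points to each applicant, and the
# assignments are aggregated by averaging (one choice among many).

def range_vote(scores):
    """scores: dict mapping applicant -> list of 0-100 point assignments."""
    averages = {a: sum(pts) / len(pts) for a, pts in scores.items()}
    winner = max(averages, key=averages.get)
    return winner, averages

ballots = {  # three committee members, illustrative numbers only
    "Applicant A": [90, 40, 55],
    "Applicant B": [70, 65, 60],
}
winner, avgs = range_vote(ballots)
# B wins on average (65.0 vs. roughly 61.7), even though A received
# the single highest individual score.
```

Note that the averaged outcome uses the intensity information in the point assignments, which is exactly what a purely ordinal method discards.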
My first area of research into collective rationality develops a segment of such a theory, addressing questions of preference aggregation, primarily questions about majoritarian decision-making, which is also an important topic within democratic theory. The philosophical complexities of majoritarian decision-making are often underestimated because one tends to think of such decision-making in terms of choices between two options, and the classical arguments in support of making group choices in that manner are tailored to that case. One may think that, if there are more than two options, one can reduce the choice to a sequence of pairwise decisions; this step, however, is question-begging against proponents of competing methods such as the Borda count. My paper “Arrow's Theorem, Indeterminacy, and Multiplicity Reconsidered” (Ethics, 2001) develops an account of what majoritarian decision-making would even mean for the case of more than two options and explores how much one can say in its defense. Such a generalized account of majoritarian decision-making must be defended against incoherence objections that have been made following Arrow’s celebrated impossibility theorem – objections that this method can meet.
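One source of the complexity for more than two options is that pairwise majority votes can cycle, as in Condorcet's classic three-voter example. The following sketch (my own illustration, not drawn from the paper) computes the pairwise majority winners for that profile:

```python
# Pairwise majority votes over a preference profile; Condorcet's classic
# example shows that the resulting pairwise choices can form a cycle.
from itertools import combinations

def pairwise_majorities(ballots):
    """ballots: list of preference orders (best option first). Returns a
    dict mapping each pair to its pairwise majority winner (None on a tie)."""
    candidates = ballots[0]
    results = {}
    for x, y in combinations(sorted(candidates), 2):
        x_wins = sum(1 for b in ballots if b.index(x) < b.index(y))
        y_wins = len(ballots) - x_wins
        results[(x, y)] = x if x_wins > y_wins else y if y_wins > x_wins else None
    return results

# Three voters, three options: a>b>c, b>c>a, c>a>b.
profile = [["a", "b", "c"], ["b", "c", "a"], ["c", "a", "b"]]
print(pairwise_majorities(profile))
# a beats b, b beats c, yet c beats a: the pairwise choices cycle.
```

Because of such cycles, a sequence of pairwise decisions can deliver different outcomes depending on the order in which the pairs are taken up, which is one reason the reduction to pairwise choices needs an argument rather than being innocent.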
What must be assessed in addition is whether majoritarian decision-making is argumentatively superior to competing methods of preference aggregation. Alas, it turns out that the dispute between majoritarian decision-making (a method that, for historical reasons, I call the Condorcet proposal) and its main competitor, the Borda count (with which majority rule coincides if there are only two options), is undecidable: neither side can muster arguments that should convince the other. (This undecidability claim is a substantive view, not a declaration of bankruptcy in light of a difficult argumentative situation.) My view, which takes a stance on a debate that originates in the 18th century, has been severely attacked by the mathematician Donald Saari (in Ethics, 2003), who has offered what is probably the strongest case ever made on behalf of Borda. My response appeared in “Why the Count de Borda Cannot Beat the Marquis de Condorcet” (Social Choice and Welfare, 2005), which was published alongside another reply by Saari. That paper examines Saari’s arguments in favor of Borda, finds them wanting, and offers additional arguments for my view that “Condorcet vs Borda” is argumentatively undecidable.
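To see why the two proposals are genuine rivals, it helps to have a profile on which they disagree. The following sketch (my own illustrative example; the five-voter profile is a standard textbook-style case, not taken from the papers) computes both winners:

```python
# A profile on which the Condorcet proposal and the Borda count disagree.

def borda_winner(ballots):
    """Borda count: with m options, a ballot gives m-1 points to its top
    choice, m-2 to the next, and so on. Returns (winner, score table)."""
    m = len(ballots[0])
    scores = {c: 0 for c in ballots[0]}
    for b in ballots:
        for rank, c in enumerate(b):
            scores[c] += m - 1 - rank
    return max(scores, key=scores.get), scores

def condorcet_winner(ballots):
    """The option that beats every other option in pairwise majority
    votes, if such an option exists; otherwise None."""
    candidates = ballots[0]
    for x in candidates:
        if all(sum(1 for b in ballots if b.index(x) < b.index(y)) > len(ballots) / 2
               for y in candidates if y != x):
            return x
    return None

# Five voters: three rank a>b>c, two rank b>c>a.
profile = 3 * [["a", "b", "c"]] + 2 * [["b", "c", "a"]]
print(condorcet_winner(profile))  # a: beats both b and c in pairwise votes
print(borda_winner(profile))      # b wins, with 7 Borda points to a's 6
```

Here a majority strictly prefers a to every alternative, yet Borda elects b on the strength of b's consistently high rankings. Which verdict is correct is precisely what is in dispute.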
I have developed my assessment of majority rule further in “Arguing for Majority Rule” (Journal of Political Philosophy, 2004). While my earlier work showed that there is a coherent method of majoritarian decision-making and that it is plausible as an account of collective choice in the same circumstances as the Borda count, this piece scrutinizes the substantive case available for majoritarian decision-making once non-preference-based decision rules are considered as well. This case is rather weak, and the arguments typically advanced on behalf of majority rule are problematic. From the standpoint of collective rationality, majoritarian decision-making (although coherent and argumentatively as strong as Borda) is overrated. I have also offered a historical addition to my approach to majoritarian decision-making by assessing Aristotle’s famous argument from the wisdom of the multitude, one of the classical arguments for democracy. “The Virtuous Group -- Foundations for the Argument from the Wisdom of the Multitude” (Canadian Journal of Philosophy, 2001) explores this argument in detail in the context of Aristotelian virtue theory, relates it to Rousseau’s account of the general will and to contemporary accounts of deliberation, and argues that its import for contemporary democratic theory is limited. It should not play the role that it plays in contemporary democratic thought. I have also written a piece about another well-known issue in preference aggregation, Sen’s Liberal Paradox, which offers a view of what the problem displayed by that “paradox” actually is and submits a solution to it for a range of cases (“What to Make of the Liberal Paradox?,” Theory and Decision, 2001).

My second area of research in collective rationality has been Bayesian aggregation.
Bayesian decision theory ascribes to agents both a subjective utility function (reflecting their values) and a subjective probability function (reflecting their beliefs), and holds that the rational course of action maximizes expected utility (a view I endorse as an account of rationality). Suppose a number of Bayesians want their group to act like a Bayesian agent as well. Then one must decide how to aggregate their utilities and probabilities. Different approaches are possible, in particular the following three. First, one may not aggregate probabilities at all (as is done if one uses traditional von Neumann-Morgenstern utility theory); second, one may aggregate utilities and probabilities separately and let the group make decisions based on the preferences that emerge once group probabilities and group utilities are put together (the so-called ex post approach); and third, one may put together probabilities and utilities at the individual level and then aggregate the preferences that emerge in this way directly into group preferences (the ex ante approach). Research into the second and third approaches has not yet advanced far.
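The difference between the two routes can be made vivid with a toy computation. The following is a hedged sketch of my own (the agents, numbers, and the use of straight averaging as the aggregation rule are all illustrative assumptions, not the proposals discussed in the papers): two agents bet on opposite states, and a flat-paying alternative act is available.

```python
# Toy contrast between ex post and ex ante aggregation. Two agents
# (Ann, Bob), two states (s1, s2), two acts: f is a bet, g pays 5 flat.
# Averaging is used as the aggregation rule purely for illustration.

probs = {"Ann": {"s1": 0.9, "s2": 0.1},   # subjective probabilities
         "Bob": {"s1": 0.1, "s2": 0.9}}
utils = {"Ann": {("f", "s1"): 10, ("f", "s2"): 0, ("g", "s1"): 5, ("g", "s2"): 5},
         "Bob": {("f", "s1"): 0, ("f", "s2"): 10, ("g", "s1"): 5, ("g", "s2"): 5}}
acts, states, agents = ["f", "g"], ["s1", "s2"], ["Ann", "Bob"]

def avg(xs):
    return sum(xs) / len(xs)

# Ex post: average probabilities and utilities first, then compute
# the group's expected utilities from the aggregated inputs.
group_p = {s: avg([probs[i][s] for i in agents]) for s in states}
group_u = {k: avg([utils[i][k] for i in agents]) for k in utils["Ann"]}
ex_post = {a: sum(group_p[s] * group_u[(a, s)] for s in states) for a in acts}

# Ex ante: each agent first forms individual expected utilities;
# those evaluations are then aggregated directly.
ind_eu = {i: {a: sum(probs[i][s] * utils[i][(a, s)] for s in states) for a in acts}
          for i in agents}
ex_ante = {a: avg([ind_eu[i][a] for i in agents]) for a in acts}

# Ex ante, both agents strictly prefer f (expected utility 9 vs. 5),
# so the group does too; ex post, the group is indifferent (5 vs. 5).
```

That both agents unanimously prefer f while the ex post group sees nothing to choose between f and g illustrates why the choice between the two modes of aggregation is philosophically loaded.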
I have made four contributions to this field. First, in joint work with Matthias Hild and Richard Jeffrey, I demonstrate an instability result for Bayesian ex post aggregation (which, suitably reformulated, becomes an impossibility result). According to this result, ex post aggregation displays a curious feature. Suppose one describes the same group decision situation in various ways that differ only in the degree of detail used in the description. Each individual would favor the same group decision in all scenarios, since we are looking at the same situation except that the relevant data appear in more or less detail. However, the group choice may vary, and in fact flip back and forth between two possibilities, depending on how much detail one uses to describe the situation. The original instability (“flipping”) result was published in “Flipping and Ex Post Aggregation” (Social Choice and Welfare, 2003), and is also discussed in “Preference Aggregation after Harsanyi,” in a 2008 volume on Justice, Political Liberalism, and Utilitarianism: Themes from Harsanyi and Rawls. In addition, I have investigated this instability theorem within a particular decision-theoretic framework (the Bolker-Jeffrey framework) and explored some parallels to this phenomenon in other fields (such as Simpson’s paradox in statistics). This work appeared in “Instability of Ex Post Aggregation in the Bolker-Jeffrey Framework and Related Instability Phenomena,” in Erkenntnis, 2001. I have also offered some philosophical considerations on the choice between the ex ante and the ex post framework in “Bayesian Group Agents and Two Modes of Aggregation,” in Synthese, 2003.
My second contribution is an analysis of the philosophical value of John Harsanyi’s so-called “Utilitarian Theorem.” Suppose we have a group of individuals who are rational according to von Neumann-Morgenstern expected utility theory. (Strictly speaking, this is not a Bayesian scenario as defined above; equivalently, it is one in which each agent has the same probability function.) Suppose these individuals want their group to be rational as well, and want the group preferences to obey a Pareto condition. In that case, the group utility function must be a weighted average of the individual utility functions. Harsanyi took his theorem to deliver a justification of utilitarianism on the basis of a theory of rationality. However, few philosophers have followed him in this, not even utilitarians. My paper “Harsanyi’s ‘Utilitarian Theorem’ and Utilitarianism” (Nous, 2002) explores in what ways this theorem can nonetheless be useful for utilitarians.
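The structure of the theorem's conclusion is easy to exhibit numerically. The following sketch is my own toy illustration (the agents, outcomes, weights, and utility values are all hypothetical): a group utility function formed as a weighted average of individual von Neumann-Morgenstern utilities automatically respects unanimous individual rankings.

```python
# Toy illustration of the weighted-average form in Harsanyi's theorem:
# group utility = weighted average of individual utilities. All numbers
# are hypothetical and chosen only for illustration.

individual_utils = {"i": {"x": 1.0, "y": 0.0},
                    "j": {"x": 0.6, "y": 0.4}}
weights = {"i": 0.3, "j": 0.7}  # non-negative, summing to one

def group_util(outcome):
    return sum(weights[k] * individual_utils[k][outcome] for k in weights)

# Both individuals prefer x to y; the weighted-average group utility
# agrees: group_util("x") = 0.72 > group_util("y") = 0.28.
```

The philosophical question, of course, is not whether this arithmetic works but whether the weighted-average form licenses anything like classical utilitarianism, and that is the question the paper takes up.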
Third, I wrote a paper on the rationality of Nash equilibria. The concept of equilibrium is the main solution concept proposed by game theory. Equilibria (especially Nash equilibria) have been treated as rational courses of action jointly proposed to a number of agents in an interactive setting. However, such a reading of equilibria raises the question of how they are related to the expected-utility-maximization criterion of Bayesian decision theory. “What is Rational about Nash Equilibria?” (Synthese, 2000) argues that equilibria in one-time interactive settings do not count as rational solutions.
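For readers less familiar with the solution concept: a pure-strategy Nash equilibrium is a strategy profile at which no player can gain by unilaterally deviating. The following sketch (my own illustration, using the standard prisoner's dilemma payoffs rather than any example from the paper) finds such profiles by a brute-force best-response check:

```python
# Pure-strategy Nash equilibria of a two-player game, found by checking
# that neither player can profit from a unilateral deviation.
from itertools import product

def pure_nash(payoffs):
    """payoffs[(row, col)] = (row player's payoff, column player's payoff).
    Returns the list of pure-strategy Nash equilibria."""
    rows = sorted({r for r, _ in payoffs})
    cols = sorted({c for _, c in payoffs})
    equilibria = []
    for r, c in product(rows, cols):
        row_best = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in rows)
        col_best = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in cols)
        if row_best and col_best:
            equilibria.append((r, c))
    return equilibria

# Prisoner's dilemma: mutual defection (D, D) is the unique pure
# equilibrium, although both players prefer mutual cooperation (C, C).
pd = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
print(pure_nash(pd))  # [('D', 'D')]
```

The paper's question is precisely what, if anything, makes playing one's part of such a profile rational for an individual expected-utility maximizer in a one-time encounter.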
Fourth, I pursued one more technical project jointly with Matthias Hild and Richard Jeffrey. This work does not concern Bayesian aggregation theory, but rather a problem that arises for a single Bayesian agent. In 1976, Robert Aumann published the seemingly puzzling result that two Bayesian agents cannot have common knowledge of different probabilities of the same event: they cannot be completely transparent to each other about what they know and still disagree. Aumann took this result to show that the “Harsanyi doctrine” (the view that rational agents have the same prior probabilities, their disagreements being due to differing evidence) is false. We disputed this implication by showing that the technical notion of common knowledge involved in the theorem is much more demanding than the one required for this implication to hold. We also showed that Aumann’s theorem does not depend on the simple updating rule of conditionalization, but holds under much more general conditions. The former point is developed primarily in “Aumann's ‘No Agreement’ Theorem Generalized,” in a 1999 volume on The Logic of Strategy; the latter in “Agreeing to Disagree: Harsanyi and Aumann,” in a 1998 volume on Game Theory, Experience, Rationality.
Copyright ©2009 by the President and Fellows of Harvard College.