Losing Privacy and Living the Sound Bite Life
Ergo (forthcoming).
pre-print
The costs of privacy losses don’t only come from what others know about us, but also what they don’t know. Living with limited privacy can involve bits and pieces of our lives being observed in isolation: surveillance algorithms may only call attention to activities with certain features, social media followers may scroll past half of our posts, and no observer is likely to experience the full context of our words and actions. And distinctive structural features of being under observation make it difficult to correct misconceptions observers form when they lack context. So, losing privacy can create pressure for us to lead more fragmented lives, lives that seem okay when encountered as isolated sound bites, rather than richer, coherent wholes. But many of the especially meaningful aspects of our lives—lasting interpersonal connections, long-term goals and projects, and personal growth and change—are extended over longer periods of time and woven into our lives in more complicated ways. Focusing on producing good fragments can undermine features of our lives that we usually recognize as meaningful. In this sense, part of privacy’s value is underappreciated: privacy can help us to lead more meaningful lives—or at least reduce one obstacle to doing so.
Mattering That It’s You
Philosophy (forthcoming) (invited contribution to special issue in honor of Sam Scheffler)
pre-print
To live meaningfully, we can’t just be receptacles for the right sorts of activities – it has to matter that it’s us living our lives. Something is missing in valuable activities if the same value could be achieved by anyone who performs the task. Meaningfulness requires that it be our own ideals, personalities, and priorities contributing to the value of what we do. Recognizing this can shed light on our relationship with meaning in three ways. First, it shows a distinctive reason that autonomy is important: what we do without autonomy will lack meaning. Second, it helps us understand a challenge we encounter when facing trade-offs between different types of meaning, navigating between opportunities to have a few of our characteristics matter widely (e.g., as a filmmaker or an activist) and intimate contexts in which much more of who we are matters to a small group of people. Finally, if living meaningfully involves our central characteristics shaping what’s valuable about our actions, then discovering pre-set purposes (e.g., from fate, God, or the cosmos) might actually undermine our capacity to live meaningful lives.
Why Desperate Times (But Only Desperate Times) Call for Consequentialism
Oxford Studies in Normative Ethics, Vol. 8, ed. Mark Timmons (2018).
final version (please cite to this version)
pre-print
People often think there are moral duties that hold irrespective of the consequences, until those consequences exceed some threshold level – that we shouldn’t kill innocent people in order to produce the best consequences, for example, except when those consequences involve saving millions of lives. This view is known as “threshold deontology.” While clearly controversial, threshold deontology has significant appeal. But it has proven quite difficult to provide a non-ad hoc justification for it. This chapter develops a new justification, showing that acting like a threshold deontologist is a good strategy for being moral, given our uncertainty and imperfect moral knowledge. And failing to use good strategies for being moral is, itself, morally bad.
What Decision Theory Can’t Tell Us About Moral Uncertainty
Philosophical Studies 178: 3085–3105 (2021).
final version (please cite to this version)
pre-print
We’re often unsure what morality requires, but we need to act anyway. There is a growing philosophical literature on how to navigate moral uncertainty. But much of it asks how to rationally pursue the goal of acting morally, using decision-theoretic models to address that question. I argue that using these popular approaches leaves some central and pressing questions about moral uncertainty unaddressed. To help us make sense of experiences of moral uncertainty, we should shift away from focusing on what it’s rational to do when facing moral uncertainty, and instead look directly at what it’s moral to do about moral uncertainty—for example, how risk averse we morally ought to be, or which personal sacrifices we’re morally obligated to make in order to reduce our risk of moral wrongdoing. And orthodox, expectation-maximizing, decision-theoretic models aren’t well-suited to this task—in part because they presuppose the answers to some important moral questions. For example, if approaching moral uncertainty in a moral way requires us to “maximize expected moral rightness,” that’s, itself, a contentious claim about the demands of morality—one that requires significant moral argument, and that I ultimately suggest is mistaken. Of course, it’s possible to opt, instead, for a variety of alternative decision-theoretic models. But, in order to choose between proposed decision-theoretic models, and select one that is well-suited to handling these cases, we first would need to settle more foundational, moral questions—about, for example, what we should be willing to give up in order to reduce the risk that we’re acting wrongly. Decision theory may be able to formalize the conclusions of these deliberations, but it is not a substitute for them, and it won’t be able to settle the right answers in advance. For now, when we discuss moral uncertainty, we need to wade directly into moral debate, without the aid of decision theory’s formalism.
An Ethically Risky Profession?
Washington University Jurisprudence Review (2023)
final version (no log-in required)
Traditional accounts of legal ethics put lawyers in a difficult position. They require lawyers to take on surprisingly high risks of wrongdoing when they navigate the narrow passage between differing demands of legal ethics. And this occurs even if we accept the standard conception of legal ethics on its own terms while disregarding possible clashes with external norms. According to the standard conception of lawyers’ ethical responsibilities, lawyers should “zealously” advocate for their clients—further their clients’ interests right up to the limits of the law. Under this view, a lawyer might violate ethical obligations to their client if they decline to take effective, legally permissible steps out of moral squeamishness or because they’re inclined to be generous with adversaries. But these same traditional views take lawyers to be ethically obligated to not violate the law. On the standard conception, lawyers must approach that line for their clients, but not cross it. And both a lawyer’s duties to the client and their duties to obey the law are treated as moral obligations. According to standard accounts of professional ethics, when a lawyer falls short of these duties, it is a moral failing.
Navigating these two requirements—finding the line but not crossing it—seems to require a very high level of clarity about both the law and the legally relevant facts. And that clarity may be difficult to achieve in ordinary legal practice. Real life involves uncertainty about the law and legally relevant facts, often with too little time to resolve it. But the structure of lawyers’ obligations means that when they face uncertainty, they may not be able to use one approach that’s common in day-to-day life: avoid a questionable activity just in case it’s morally wrong and opt for an alternative that’s clearly morally permissible. For a lawyer, there may not be an alternative that is clearly morally permissible. Instead, this avoidance strategy would often mean a lawyer either errs in the direction of violating an obligation to a client or errs in the direction of violating obligations to the law. If standard accounts of lawyers’ ethical obligations are correct, then we are asking lawyers to assume an unusually high risk of moral wrongdoing. And it may be unfair to lawyers to expect them to take this on. In other areas, our legal system provides standards for what it takes to move from one way of handling an ambiguous circumstance to another—“proof beyond a reasonable doubt” or a “preponderance of the evidence,” for example. These are messy and imperfect. But lawyers are left with even less guidance—the equivalent of telling a jury to “be sure to get it right” and saying very little about what to do when it’s unclear what “right” involves.
Response to Adam Kolber’s ‘Punishment and Moral Risk’ (Invited Commentary)
University of Illinois Law Review Online, Vol. 2018, no. 2 (2018): 175-183.
final version (no log-in required)
Adam Kolber argues against retributivist theories of punishment, based on considerations of moral uncertainty. In this reply, I suggest that Kolber’s argument will not have the implications he supposes, in part because, if it’s able to raise difficulties for retributivism, similar problems will arise for a wide variety of other approaches to punishment.
Ethics for Fallible People (Dissertation)
extended dissertation summary
(for a copy of the full text, please email me)
Our moral judgments are fallible, and we’re often uncertain what morality requires. I argue that, in the face of these challenges, it’s not only rational to use effective procedures for trying to be moral – we have a moral responsibility to do so, and being reckless when navigating moral uncertainty is, itself, a form of moral wrongdoing. These strategic requirements present a large class of under-explored norms of morality. I use these norms to address moral and social questions concerning, for example, interpersonal toleration, exceptions to moral rules in high-stakes cases, and principal-agent relationships (such as those between lawyers and clients).
The Right to Explain
(for a draft, please email me)
Increasingly important decisions about our legal and financial fates are being made by algorithms. But recent work raises ethical worries about algorithms’ potential biases and lack of transparency. In this paper, I identify and examine a different type of problem facing algorithmic decision-making—not that it violates our right to receive a more transparent explanation (though I think it often does), but that it can sometimes violate our right to give an explanation to decision-makers: our “right to explain” ourselves when high-stakes decisions are being made about us. Collecting a great deal of data from someone is different from giving them a chance to provide an explanatory narrative showing how that information fits together or to make a case that there’s other information that’s relevant and explanatory that wasn’t asked for. I argue we have a right to explain ourselves in this way that is incompatible with algorithmic decision-making.
Intimate Concepts
We use concepts, like city or money, to help us understand and talk about the world. But some concepts also help us understand ourselves and navigate our intimate relationships – concepts like lesbian, having sex, genderqueer, and love. These concepts raise distinctive challenges. On the one hand, we want to understand our own, widely varied, personal experiences; on the other hand, we want to be able to communicate with the wider world. The concepts that will do the best job at the first task – improved self-understanding – likely won’t be the same from person to person. The conceptual frameworks that may be illuminating for one person may be stifling or confusing for another, and maximally inclusive, general concepts might not have enough content to do the needed hermeneutical work. But piles of very specific, bespoke concepts create challenges for the second task – communicating with the wider world. For that purpose, general concepts can have advantages – effective concepts might seem to be those that are widely understood in fairly uniform ways. So, we ask intimate concepts to perform two very different jobs – and the requirements of those jobs suggest very different ways of developing our conceptual frameworks. I explore this dilemma, look at what it can tell us about when to trust people’s claims about themselves, and suggest some ways forward.
Trying to Be Moral, Morally
(for a draft, please email me)
Often, we’re unsure what morality requires, and a debate has emerged about whether that moral uncertainty matters to what we should do. I argue that (1) navigating moral uncertainty recklessly is, itself, morally wrong (not only irrational); (2) many objections to this view are only problems if we assume all moral norms are sensitive to our moral uncertainty; and finally, (3) we should avoid this assumption by adopting an account that incorporates procedural norms that are sensitive to our moral uncertainty (analogous to legal due process norms) and substantive norms that are not sensitive to them (analogous to norms of substantive justice).
Disappearing Moral Responsibilities: A Problem for the Ethics of Principal-Agent Relationships
(for a draft, please email me)
When we make decisions under ordinary circumstances, we are responsible for making morally good decisions – for ensuring that our choices don’t impermissibly harm others, for example, or for making decisions that fulfill imperfect duties of kindness or generosity. But sometimes we delegate our decision-making to others, perhaps asking financial professionals to plan our investments, or authorizing a lawyer to navigate a legal dispute in our stead. It’s commonplace to delegate decisions in principal-agent relationships like these, but I argue that standard norms surrounding this delegation generate under-appreciated moral problems: they suggest that sometimes the principal’s responsibilities for making morally good decisions simply disappear when the decision-making power is assigned to an agent, neither being retained by the principal, nor transferred to the agent making the decisions. This raises problems for standard approaches to principal-agent ethics, especially given the morally significant – and sometimes morally problematic – decisions that agents make on our behalf. I introduce some of the difficulties that flow from this structure, including the way that it leads ordinary people to provide material support to terrible practices, and removes opportunities for discretionary kindness. I close the paper by beginning to sketch approaches that may be fruitful in addressing these problems.
The Polarizing Effect of an Audience
The polarizing effects of social media have generated a great deal of concern over the past several years. But much of that discussion has focused specifically on the ways in which social media can lead us to only see or trust ideas that we are already sympathetic to—because of epistemic bubbles and echo chambers. This can pose serious difficulties. But here I want to call attention to a different—and neglected—mechanism through which social media can have a polarizing effect: having political discussions in front of an audience (as on social media) makes it risky to acknowledge nuances and complications or to recognize the strengths of our opponents’ positions in the way that might be necessary to build trust across political gulfs. And if this is disincentivized, that dynamic can, itself, have a polarizing effect. This poses a particular challenge because the presence of an audience isn’t just a feature of certain ways of organizing content on social media—it’s part of what makes something social media at all.
Why Statistical Evidence Puts Us in a Tough Spot
Purely statistical evidence raises a puzzle that’s both philosophical and practical. Even when it is no less (and sometimes a great deal more) reliable than other evidence, it sometimes seems that we shouldn’t rely on it. I argue that one problem with statistical evidence is that it puts the subject of that evidence (say, the defendant in a legal case or the subject of a decision made by a predictive algorithm) in a more difficult position than evidence that is based directly on information about the subject and their life. In the latter case, they may be able to address that evidence with undercutting defeaters. But this often won’t be possible when the evidence has little to do with them as an individual, and they may have little recourse but to come up with a rebutting defeater—a much more demanding task. As a result, we should be hesitant to rely on statistical evidence in contexts when it would be unreasonable to require that (e.g., because doing so would violate at least the spirit of the criminal justice system’s presumption of innocence).
Speech and Social Media: Making the Problem Easier
The structure of social media platforms makes debates about content moderation harder than they have to be. We argue about appropriate content restrictions for posts, in general, as though there’s one appropriate answer across a wide variety of contexts. This is a mistake. Restricting comments in a family’s 10-person Facebook group can seem analogous to regulating the content of baby album captions. But restricting the content of public posts that are widely suggested by a platform’s algorithms can sometimes seem more like being selective in choosing books for Oprah’s Book Club. There are thorny, ethical questions involved in deciding how to handle these different cases, but it would be surprising if the right answers were the same. Platform design and content restrictions need to better distinguish these different contexts. Trying to develop one set of rules for very different contexts generates unfairness and intractable debates.
Rights in an Uncertain World
Rights theorists disagree about whether positive, socio-economic rights are importantly different from other rights. Some argue that these rights aren’t meaningfully distinct. Others suggest that they are different in ways that make them lower-priority, or that leave corresponding duties absent or discretionary – if they recognize positive, socio-economic rights at all. I suggest that there is something importantly different about the relationship between rights and duties in these cases, but not something that makes them lower-priority or illusory.
Often, the relevant duties are characterized as imperfect, or discretionary in their particulars (duties of charity are frequently seen as imperfect in this way, for example). But I argue that many “imperfect” duties are better understood, not as cases where duty-bearers are entitled to discretion in choosing what to do, but rather as cases where there is a fact of the matter about our duties – while rights-holders sometimes need to defer to duty-bearers’ identification of their own duties. What’s distinctive about these cases is that the content of our duties will typically depend on features that aren’t obvious from an isolated interaction (e.g., our resources, our past actions, our needs). In contrast, it can be fairly clear from a momentary observation that someone is violating a duty not to assault, for example. As a result, rights-holders and bystanders may be morally obligated to engage in some deference to duty-bearers’ judgments in the first case, but not the second. Doing otherwise can run an excessive risk of wronging the duty-bearers, by forcing them to perform acts that aren’t within their duties after all. This shift from thinking of duty-bearers as having discretion to thinking of them as entitled to deference can help to address two puzzles about rights: it can explain differences between permissible enforcement efforts by governments and permissible self-help by rights-holders (who may be in different epistemic circumstances); and it can show that when positive, socio-economic rights lack readily identifiable, corresponding duties, this may be an epistemic problem, rather than an indication that there aren’t duties.