The Journal of Philosophy, Science & Law
Volume 10, October 4, 2010
Cognition Enhancing Drugs: Just Say Yes?
* B.A. New York University, J.D. Brooklyn Law School
One can imagine the academic moralist thinking up moral innovations and the
charismatic leader picking them up and imparting them to the masses.
- Judge Richard Posner, 1999
The design of the moral sense leaves people in all cultures
vulnerable to confusing defensible moral judgments with irrelevant passions and
prejudices.
- Steven Pinker, 2002
In the 1980s, an earnest spokesman for the Partnership for a Drug-Free America
presented our nation’s children and teens with a sizzling egg in a frying pan
and, with the voice of a stern-but-caring father, warned, “This is your brain
on drugs…Any questions?”
Questions, however, would go unanswered. The fifteen-second television spot was
not intended to invite curiosity or debate,
though viewers were never actually told “which” drugs it referred to. Rather,
the ad was supposed to teach an unambiguous lesson: “drugs” are bad. Hardly an
appeal to human reason, one could even recognize in it a tinge of fear
mongering and misinformation.
Of course, in the wake of the crack epidemic, it is perhaps understandable that we
Americans instructed our youths to “just say no” to all drugs, period, or else! It seemed self-evident that an advertisement presenting detailed facts about
the neurobiology of addiction as it varies from substance to substance would
have made little impact at all on children and teens,
and, ultimately, it is a banal
observation that many well-regarded anti-drug campaigns have sought to dissuade
potential users by depicting use as “deforming” and portraying users as
disgusting failures or social misfits.
But how do we decide when it is and when it is not okay to obscure facts and
play on emotions for the sake of paternalistic “good sense?” Could we imagine
using deceptive advertising today in the twenty-first century—on adults—to endorse changes to drug policy, or drug
use in general, when the benefits outweigh the risks? For instance, should
further research account for all potential pros and cons, could this be an
acceptable approach to engendering majoritarian support for the unrestricted
dissemination of cognition enhancing drugs?
Admittedly, this is a dubious proposition, but how else can scholars sell their empirical
evidence to a voting population distracted by and large by the longstanding
stigma attached to mind-altering substances?
For now, neither the government nor nonprofit advocacy groups seem inclined to
embark on a “just say yes” campaign that would grossly exaggerate benefits of
certain drugs and play down risks for the sake of paternalistic persuasion, but
top scientists and scholars are quick to acknowledge that it is troublingly
difficult to sell important counterintuitive concepts to an obtuse voting
public when speaking solely in terms of cold hard facts.
Aristotle explained some time ago that rhetoric must “stir the emotions of the hearer,”
for “it is not enough to know what to say; we must also say it the right way.”
And contemporary advertising bears out this maxim quite nicely. Group
demonization, fear-mongering, exaggeration, and over-moralization have long been
successful tools of persuasion.
But such tactics have also been roundly criticized. Noam
Chomsky sounded practically enraged by the average human’s inability to
navigate past deceptive rhetoric when he wrote Media Control in 2002. The blurb on the back reads, “The issue is
whether we want to live in a free society or whether we want to live under what
amounts to a form of self-imposed totalitarianism, with the bewildered herd marginalized, directed elsewhere,
terrified, screaming patriotic slogans, fearing for their lives, and admiring
the leader who saved them from destruction.”
The subtext is that if everyone could just be rigorously rational, we would not
have to worry about deceptive persuasion tactics.
Yet while such frustration among academics is understandable, the prevailing
research on human cognition indicates that humans will behave “irrationally” no
matter how much they’re lectured.
Unfortunately, we humans are
consistently subject to “cognitive and motivational distortions” that have a
significant effect on our judgment and choices.
And this means that adults, when
faced with certain important decisions, may need to be “nudged”
in the right direction just as much as children.
Cass Sunstein, Administrator of the White House Office of Information and
Regulatory Affairs, has said candidly that “respect for [individual] choices
rooted in incorrect judgments” might not be such a great way to “promote
utility or welfare.”
In other words, maybe we need to face the music and reconsider the prevailing
wisdom of John Stuart Mill that says, hands down, I am the best person to
decide what is best for me. Maybe the critics of deception need to change their
tune and embrace the tricks of persuasion as near-necessity.
Of course, when arguing from the premise that “all individuals” are fallible
decision-makers, it is tricky to suppose collective welfare could be increased
if “some individuals” help “other individuals” decide what choices to
make—(after all, how can I know what is best for you if I do not even know what
is best for me?)—and, unfortunately, the superiority of any one choice over
another cannot really be grounded in objectivity. However, there are often
compelling reasons why some choices and ideas may be better than others,
and there are definitely instances where it might make sense to defer our
decisions to those with better resources for making informed judgments.
Sunstein has not gone so far as to advocate the type of hardcore paternalism that would
actually limit options or directly coerce behavior
—rather, he has merely proposed a
pragmatic method for encouraging humans to start making better choices from among the same existing options.
He calls this method “libertarian
paternalism,” and for its substance he borrows some insight from the principles
of marketing (which are, of course, also based on the fallibility of human
decision-making). Simply put, libertarian paternalism calls for strategic
engineering of the way choices are framed or presented to people.
For instance, Sunstein has suggested putting the fruit (and not the cookies)
right next to the registers in the school lunch line.
Whereas marketers use such
strategies to push products, Sunstein suggests that the state could use this
power for “good,” so to speak.
When the “good,” to the irrational human, seems “bad,” the irrational human
needs a “nudge.”
While the scientists and pharmaceutical companies have not yet concocted for us a
completely risk-free, low cost, widely available “super-pill,” we can already
find in large supply a variety of prescription drugs that the media have
likened to “brain steroids.”
These drugs can improve memory and
concentration, increase alertness, and suppress fatigue, meaning they can help
us work longer, harder, and more efficiently at solving problems. Widespread
use of such drugs in the workplace could mean increased productivity across all
industries, from manufacturing to environmental science. Thus, should further
research indicate that such drugs do not come with any problematic risks to
individuals or society as a whole, widespread availability of
cognition-enhancing drugs is perhaps one such “good” worth promoting even if it
may seem “bad” to irrational humans
at present. This is the counter-concept of the “bad” drugs that supposedly
seemed “good” to irrational children and teens in the 1980s.
My goal is not, however, to persuade that cognition-enhancement is “good” or “bad”
or “moral” or “immoral.” Nor is my goal to advocate policy change. Instead,
thinking pragmatically about the ways other unconventional utilitarian
initiatives have gained traction, my aim is to persuade academics and policy advocates that if their efforts to prompt change in regard
to this issue are ever to succeed, they must first take account of the human
cognitive flaws and motivational biases that may well lead a majority of the
public to reject such a prospect. I argue that, regardless of how minuscule the
risks or how blatantly obvious the benefits, a majority of U.S. citizens is
unlikely to support the unrestricted dissemination of cognition enhancing
drugs, because each individual member of the majority will be led astray by
cognitive biases and illusions, as well as logical fallacies.
If this premise is accurate, then the people of the United States may already be suffering an opportunity cost
that cannot be recouped. While a minority of the U.S. electorate can challenge
the constitutionality of a policy enacted by a majority, a minority cannot sue
to challenge the legislature’s refusal to enact a specific policy. In other words, we in the minority have no way of
claiming we were harmed by what “good” could
have come—but did not come—due to the legislature’s inaction. We cannot claim the “opportunity cost to the greater
good” as an injury, and we cannot compel a court to balance that opportunity
cost of inaction against the individual interests that dissuaded the majority
from acting. The only recourse is to compel the majority to change its stance via persuasion.
In Part I of this article, I will set out a descriptive account of human morality
and rationality in order to explain why advocacy for cognition enhancement is
met with moral indignation. Then, in Part II, I will discuss how a better
understanding of human cognitive deficits sheds new light on the best
strategies for persuading irrationally indignant opponents of utilitarian
initiatives. The method of persuasion I will present is a form of moral
entrepreneurism modeled on Cass Sunstein’s “libertarian paternalism” and
further informed by evolutionary theories of morality, as well as empirical
research on legal legitimacy, compliance, and the logic of social reciprocity.
I. The Rationality of Moral Arguments over Cognition Enhancement
“Unbounded” rationality would seem to require omniscience on the part of
the decision-maker. In pursuit of any and every
goal—including those of: forming beliefs, forming plans or strategies for how to form beliefs, and, well, forming
plans or strategies for how best to form plans and strategies—we would need
unlimited access to information.
And, in order to draw perfectly valid and sound logical conclusions from that
information, we would need unlimited computational power.
But, in order to be sure our actions would be maximally optimal at all times,
we would also need to know exactly how much information to gather and how much
computation to perform before the benefit of additional information gathering
and computation would begin to outweigh the costs of gathering and computing.
Indeed, we would require omniscience.
We humans do not possess unbounded rationality,
and the most persuasive theoretical explanation of human behavior and
decision-making seems to come from cognitive science and evolutionary
psychology, not classical economics.
These disciplines also provide a compelling explanation of the origins of human
morality and thus offer much insight as to how humans might approach the
question whether otherwise healthy individuals should ever be allowed access to
cognition-enhancing drugs. As one theory puts it, the human moral sense is largely dictated by hardwired,
universal “emotions”—contempt, anger, disgust, gratitude, awe, sympathy, guilt,
shame, embarrassment, etc.—that we experience unavoidably in response to
certain circumstances and stimuli.
These emotional responses encourage
us to: help those who are in need, reward those who help us, punish those who
try to take advantage of us or cheat us, and resist the urge to become cheaters
ourselves. These emotions also prompt us to formulate, internalize, and perpetuate over generations
certain norms and principles that serve as rules for social conduct.
Some of these rules vary and are
unique to the group that developed and perpetuated them, possibly as a result
of different environmental exigencies that group faces or once faced. For
instance, even though “disrespect” may universally incite “anger,” the types of
actions that constitute “disrespect” may vary by “culture.”
Some rules about social conduct are
mere tautologies (e.g., “unjustified killing is wrong”) that depend on other
rules for their content.
Some of these rules, when infused with enough content, become “laws.” All in
all, the process of “living” is really just a matter of creating and following
certain guidelines for dealing with whatever opportunities and constraints we
encounter in the process of unwittingly carrying out our
genetically-preprogrammed imperatives of survival and reproduction.
Of course, humans do not go around couching the concept of “human life” in such
coldly scientific terms. Yes, perhaps our genes are “selfish”
in that they program us to behave in
ways that were conducive, in the environments we evolved in, to their ultimate
goal of replication; but we humans did not and do not set this ultimate goal
ourselves, and when utilizing our hardwired strategies, we do not consciously
draw connections between gene replication and our behaviors and motivations.
Rather, we understand our behaviors and motivations solely in terms of our
individual “pleasure versus pain continuums” (so to speak), which are imaginary
linear scales we use to rate the desirability and undesirability of the
sensations we experience in various circumstances.
Our genes purposefully dictate what circumstances and stimuli we will
experience as positive and negative—but all we humans do from there is try to
experience the positive as much as possible and avoid the negative (except
where the negative is a presumptively necessary means to the positive). This
intuitive concept of “positive and negative” experiences forms the original
basis for the ethic of reciprocity (“the Golden Rule”),
as well as the original principle of hedonistic utilitarianism, a derivative of
which supplies the rationale for permissive cognitive enhancement.
One of the most prominent proponents of moving “toward responsible use of
cognition-enhancing drugs by the healthy” is Stanford law and bioethics
professor Hank Greely.
In 2008, Greely and five colleagues
published an article in Nature calling
on “physicians, regulators and others” to expand research, public discourse,
and policy debate on the pros and cons of enhancement, all in the interest of
spurring eventual legislation.
Noting that productivity, quality of life, and innovation, for instance, are
widely valued, Greely and colleagues argue that, if cognition enhancement could ultimately
lead to benefits for all of society,
a permissive policy on use and availability would be morally advisable from a
utilitarian standpoint. Utilitarianism is of course Jeremy Bentham’s
famous consequentialist moral philosophy, which dictates (in its most common
“universal” formulation) that the “moral” outcome is that which leads to the
“greatest good for the greatest number.”
Of course, there is debate over
whose preferences should be included in the “greatest number” calculus—animals?
aliens? unborn humans?
But for our purposes, let us limit our conception of the “collective” to current
humans. It seems plausible enough that, if cognition-enhancing drugs could be deemed
safe, most U.S. citizens would be quite eager for our nation’s top scientists
and engineers to have a little extra help on the path to solving the most
difficult problems facing humanity. We all want a cure for cancer, a cure for
AIDS, an end to poverty and hunger, an energy alternative to oil, a way to save
the planet from global warming, a spacecraft that can travel beyond Pluto, and
many other agreeably desirable innovations such as faster internet access for
our phones, better video game graphics, and cheaper and easier methods of
travelling long distances.
Of course, the notion that such innovations could arrive sooner rather than later as a
result of the widespread permissive use of cognition enhancing drugs is highly
speculative; however, it is well-documented that existing enhancement drugs
improve one’s ability to work through complex tasks and problems by suppressing
fatigue and enhancing alertness, focus, concentration, and memory.
Thus, these drugs arguably do provide a means for humans to achieve important
goals at a faster pace than would otherwise be possible.
And proponents say there may be plenty more potential benefits beyond the mere
prospects of increased productivity and innovation. For instance, “It has been
suggested that many people would prefer to fly with airlines or go to hospitals
where the personnel take alertness-enhancing drugs.”
And enhancement en masse could
potentially improve the quality of public discourse as well as citizens’
abilities to analyze political rhetoric critically, leading to greater
democratic accountability. But ask any citizen point blank whether she would be eager for society to reap the
potential benefits of cognition enhancement, and she will likely ask, “What’s
the catch?” A “catch” is just like a tradeoff—for example, if you have $5, you can buy a $5
hamburger or a $5 chicken sandwich, but you have to choose one or the other—you
cannot have both. Or, for a more relevant example, if you were asked whether
you would like to live in the safest country in the world, you would probably
say, “Well, ideally, yes.” But if living in the safest country in the world
meant you would have to be subject to twenty-four-hour surveillance, you might
decide that the “catch” is not worth the safety.
even if everyone agrees that both hamburgers and chicken sandwiches are
“good”—or that both safety and freedom are “good”—individuals are bound to
disagree as to which “good” thing is “better” in any given context. And the
extent of disagreement can vary depending on hundreds of factors. This is
perhaps why Amartya Sen has said that there can be no context-blind absolutes
or ideal principles. Justice, Sen argues, can only be discerned via “comparative
assessments between pairs of alternatives” in light of the particular contexts
in which they are being traded off.
So far, the use of cognition-enhancing drugs has been anything but widely
supported or encouraged by the lay public. A review of the popular press
suggests a conflicted citizenry for whom tentative enthusiasm is drowned out by
suspicion and moral indignation.
So, what are the factors that stand to sway the people against a permissive
cognition enhancement policy?
It goes without saying that safety is a chief concern; however, in this article I
am operating under the assumption that there is no debate even to be had until
any and all safety concerns can be assuaged with further research. But, even
assuming safety would not be a legitimate issue, some people will rest on non sequiturs and assert, for instance,
that humans should not be “playing God.”
Some will worry that we will become
too dependent on such drugs and will be unable to function without them.
And some, like renowned philosopher and political economist Francis Fukuyama,
will argue extensively and eloquently that such drugs might detrimentally alter
the human sense of self and individualized identity and thus undermine the
social fabric of human existence by degrading the authenticity of social
relations. Fukuyama has said, “The original purpose of medicine is to heal the sick, not
turn healthy people into gods,”
and he has warned that enhancement drugs might “erode the relationship between struggle and the
building of character.”
Yet misgivings like those presented by Fukuyama are mostly conclusions derived from
a sort of balancing act—a cost-benefit analysis pitting the prospective (albeit
speculative) benefits of cognition enhancement against prospective (sometimes
also speculative) risks and costs, both to individuals and society at large.
Other types of opposition concerns, however, seem to reflect categorical
notions of moral certainty.
Some moral philosophers reject contextual utilitarian balancing in favor of
deontological moral rules that focus on the justness of decisions and actions
without regard to consequences, in light of categorical rights and duties that
should apply regardless of context.
Immanuel Kant is the kingpin of deontology who brought us the “categorical
imperative”—“act only according to that maxim whereby you can at the same time
will that it should become a universal law.”
Kant also urged that a human should never be used as a mere means to some other
end, as each human is an end in and of him or herself.
Basically, Kant thought it was categorically improper to infringe certain
fundamental individual rights no matter the justification for doing so, and he
believed reason alone could delineate for humans what those categorically
inviolable rights would be. Furthermore, Kant believed humans have a duty to
engage in deontological moral reasoning to determine what is “just” and to
adhere to what is “just” regardless of whether doing so will maximize some
aggregate utility. As I have said, a legitimate policy debate about cognitive enhancement should not
begin until objective risks and costs to individuals as well as society can be accounted
for. Furthermore, until the risks and costs can be deemed no more significant
than those associated with other popular enhancement methods such as caffeine,
we should not even expect that a permissive policy would be warranted. This is why I have
chosen to focus this article narrowly on arguments sounding in deontological
“moral” opposition to cognition enhancement. My claim is that, should all risks
and costs be adequately accounted for, concerns based in reason could mostly be
quashed, but strong opposition among the lay public is likely to remain with
arguments sounding in categorical moral certitude and indignation. These types
of arguments—rooted in irrational fears, illogical contentions, and intuitive
questions of fairness—already make up much of the criticism leveled against
cognitive enhancement in the popular press and are likely to remain salient for
the lay public in the long run anyway, regardless of what empirical research may
show. Then there is the “cheater concern.” The “cheater concern” is very easy to recognize
whenever a college newspaper runs a headline that asks something like, “Is
pill-popping before finals an honor code violation?”
Below the fold, one is always bound to find a parsimonious call for “drug
testing” at the next exam.
Let us assume for a moment that, with all risks and costs eliminated or debunked,
all concerns assuaged, a change in federal policy actually occurs. There would
then still be some inevitable stretch of time during which a certain number of
persons would be unable to acquire these drugs due to cost or insufficient
supply, and there would also be a certain number of persons who would not yet
even be aware of the existence or availability of the drugs.
Basically, we must assume some indefinite, inevitable span of inequality of
access. From there, we can also assume that, for some span of time beginning
the moment the new permissive regulatory scheme would take effect (though
perhaps not for too long thereafter), those who are barred from access would
constitute a majority of the U.S. voting public.
If this were true, we might readily expect that, prior to the hypothetical regulatory change even occurring, when
the issue was still being debated, there would have been a majority of citizens
faced with the prospect that, if they were to endorse such a regulatory change,
the integrity of the institutions through which they compete for their
livelihoods—schools, businesses, the arts and sciences, etc.—would be
compromised. Enhanced individuals would have an upper hand in competition
against the not-yet-enhanced, and this would leave the latter feeling as though
the resulting disparity in ability to acquire resources would be unjust. Were this outcome foreseeable
prior to the policy change, it would likely have caused them to reject
the idea from the start.
So whether I want the drugs but cannot get them or can get them but do not want
them (perhaps due to “irrational fears”),
I am bound to exude moral indignation; I had expected to compete in a race—a race
for income, grades, promotions (valuable zero-sum resources and status indicators)—but I had a “legitimate expectation”
that my competitors would not be
running the race on steroids! In the short-term, I do not really care about the
potential fruits of the labors of enhanced individuals regardless whether I
might reap the magnificent benefits of those fruits someday without even having
contributed to them myself.
And even if I could have access to cognition enhancers, maybe I don’t want to
feel forced to take “drugs” just to keep up—drugs are unnatural! I’m irrationally afraid of all drugs! Yes—it is imperative that I
make clear to everyone that I will not stand by and suffer
in the short-term! I will not allow myself to be a mere pawn for the greater
good! Does this sound like the makings of a legitimate moral claim? John Rawls might
advise the hypothetical “me” to step behind the “veil of ignorance” and think
things over a little longer.
In an attempt to improve upon Kant’s deontological framework, Rawls suggested
that each of us should imagine a sort of pre-life purgatory—a hypothetical
place you must hypothetically visit in order to try to imagine that you are not
yet born and that you have no idea where you will end up once you are born.
(You could end up an orphan in a third-world country or an heiress to a global
hotel chain.) Then, you must decide what principles and institutions would be
“fair” in light of the possibility that you might indeed turn out to be a
member of a brutally oppressed minority that the majority would like to
exploit. So far, the morally indignant “me” who does not want to be “exploited” still
sounds justified under this deontological framework. But Rawls assumed that,
behind this “veil of ignorance,” we would accept unequal distributions of resources under one condition—the inequality
would have to somehow work to the benefit of those who end up worst-off.
Inequality in achievement and perhaps income for the sake of innovation sounds
to me like it passes this test (i.e., it satisfies the Rawlsian “difference
principle”). But it is not worth splitting hairs at the moment over whether Rawls would agree
that permissive cognition enhancement has a sufficient probability of
benefitting the worst-off. The more important point to glean comes from critics
of Rawls who note that since the difference principle treats the fruits of
abilities (and, thus, the abilities themselves) as common assets, Rawls was
indeed making Kantianism more practical—by inadvertently admitting
utilitarianism cannot be flatly ignored!
The real reason deontologists cannot properly rid the world of utilitarianism is
that principles of categorical and consequential
morality both come naturally to humans at different instances and under
different circumstances.
Harvard psychology professor Joshua Greene has set out a “dual-process theory
of moral judgment” that suggests deontological judgments are driven by our
automatic emotional responses while utilitarian judgments are driven by more
“controlled cognitive processes” in other parts of the brain.
As Greene puts it, “Nearly everyone is a utilitarian to some extent.”
Greene has used functional magnetic resonance imaging (“fMRI”) to examine what is
really going on in the brains of humans when presented with the famous “trolley
problem” of moral philosophy.
The “trolley problem”
goes like this: a runaway trolley is about to kill five people standing on a
set of tracks unless you flip a
switch to shift the trolley onto another set of tracks where only one person is
standing—do you do it? A majority say yes. Then there is a second scenario: the
trolley is about to kill five people standing on the tracks unless you shove a
fat man (who happens to be standing next to you) onto the tracks—do you do it?
A majority say no.
According to Greene, the emotional responses exhibited at the prospect of actually
shoving the fat man are drastically stronger than those exhibited at the
prospect of the detached and impersonal flipping of a switch.
Cognitive science provides a coherent explanation for the difference in
responses and the contradictory reasoning of the first and second majorities in
regard to the trolley problem. The same principles that underlie this
explanation also predict that moral indignation arising from “irrational fears”
and/or the “cheater concern” would prevent a majority of citizens from
endorsing the utilitarian justifications for permissive cognition enhancement.
The following section will illuminate these underlying principles.
When humans associate in groups with well-aligned interests and well-settled and
well-defined norms, as we have for most of our evolutionary history, it is
common for group members, when confronted with outsiders who adhere to
different norms and possess different interests, to wonder, “What on earth is
wrong with them!?”
The value-free, objectively descriptive term “different” is often immediately
translated into the subjective value judgment “wrong” in the human mind, so that
out-group characteristics can be subordinated to in-group rules and
preferences, which take on a quality of moral truth.
What follows below is a descriptive account of the human biases, cognitive
illusions, and logical fallacies that cause humans to imbue such fallacious
translations with moral certitude.
Criminal law scholar Stephen J. Morse has said, “Understanding the lawful
regularities of human behavior might reveal what is possible for human beings
and what is not, but such understanding cannot dictate what morals, politics,
and laws we ought to adopt.”
This is an important concept to bear in mind whenever questions of social policy are
colored by questions of morality, so I will begin this section with a
discussion of the “naturalistic fallacy” and the “is/ought” problem—two
reminders from meta-ethics that there is a significant difference between
“description” and “prescription.”
The “is/ought” problem is David Hume’s famous kernel of wisdom that tells us: just
because something “is” the way it “is” does not mean that it “ought to be” that
way. And the “naturalistic fallacy” is G.E. Moore’s very similar reminder that just
because something is “natural” does not mean it is inherently “good” or
“right.” Arguments that conflate the “natural” with the “morally good” might be
illustrations of the type of (unsound) post
hoc rationalization that humans produce after coming to a moral judgment
automatically via emotional response—in other words, some “irrational”
arguments may just be proxies for the automatically induced moral indignation
that arises from the perception that one is being taken advantage of.
But some researchers believe we might be able to group the moral emotions into
“spheres” that relate specifically to the interests they were hardwired to
serve—we have emotions that (1) condemn others (contempt, anger, disgust); (2)
make us self-conscious (shame, embarrassment, guilt); (3) make us feel for
others (compassion, empathy, sympathy); and (4) make us praise others (awe,
gratitude, elevation). In other words, we might think of our moral judgments in
terms of whether they pertain to our broader interests regarding “self,”
“community,” or the “natural world.”
The sphere of “self” governs moral
judgments about one’s own rights and interests (here is where we would find the
“cheater concern”).
“Community” pertains to “social mores like duty, respect, adherence to
convention, and deference to a hierarchy.”
And the “nature” sphere (also known as the “divinity” sphere) pits “purity”
against “contamination.” So, it is possible that violations of the naturalistic fallacy result from an
evolutionary imperative to err on the side of caution when encountering the
“potentially contaminated” or “impure.”
In any event, “the moral sense leaves people in all cultures vulnerable to
confusing defensible moral judgments with irrelevant passions and prejudices.”
Indeed, in addition to a bias for the “natural,” many arguments are rooted in a
bias for the “status quo” and evince myopia when subjected to the lens of
cognitive science.
Empirical research demonstrates that we humans are generally terrible at
reasoning about probability and are actually unable in many instances to
predict what will make us happy or unhappy in the future.
It turns out that we are so incredibly averse to loss (and hilariously inclined
to disobey classical economic assumptions) that, in a laboratory experiment,
given the choice between receiving a $6 coffee mug or some cash (in an amount
we agree would make us indifferent between receiving the cash or the mug), we
are satisfied, on average, with $3.50—but then, if we are instead told to
imagine that we already own the $6 mug and are asked to determine the minimum
amount of cash we would sell it for, we demand, on average, $7.12.
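The gap between those two numbers is roughly the two-to-one asymmetry that prospect theory attributes to loss aversion. A toy sketch makes the framing effect explicit (the value function and its coefficient below are illustrative assumptions, not values fitted to the mug study):

```python
# Toy prospect-theory-style value function: losses are weighted about
# twice as heavily as equivalent gains (the coefficient is illustrative).
LOSS_AVERSION = 2.0

def subjective_value(x: float) -> float:
    """Felt value of a change of x dollars relative to the status quo."""
    return x if x >= 0 else LOSS_AVERSION * x

# Buying frame: acquiring the mug is a prospective gain, so a modest
# cash equivalent feels fair.
willingness_to_pay = 3.50

# Selling frame: giving up an owned mug registers as a loss, so the
# cash demanded roughly doubles.
willingness_to_accept = willingness_to_pay * LOSS_AVERSION

print(subjective_value(3.50))   # 3.5  (a $3.50 gain feels like +3.5)
print(subjective_value(-3.50))  # -7.0 (a $3.50 loss feels twice as bad)
print(willingness_to_accept)    # 7.0, close to the observed $7.12
```

The same dollar amount thus “codes” differently depending on whether it is framed as a forgone gain or an incurred loss, which is the point the mug experiment dramatizes.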
Losses inevitably “loom larger than corresponding gains,”
even when we are to lose something that we never paid for and only owned
hypothetically for a few seconds. And the future looms—well—how large or small
do you think? Sometimes we are willing to discount the future as if we will die
any moment—carpe diem!
This is why we buy energy-inefficient appliances and forgo the grueling nightmare of a few minutes of 401k enrollment. “The simple point here,” says Oxford philosophy professor Nick Bostrom, “is that our
judgments about such matters are not based exclusively on hard evidence or
rigorous statistical inference” but on mental shortcuts and intuitive judgments. Bostrom, who fights from the same corner as Hank Greely on the issue of
enhancement, has suggested that status quo bias plays a tremendous role in people’s
judgments on the issue.
Changes from the
status quo will typically involve both gains and losses, with the change having
good overall consequences if the gains outweigh these losses. A tendency to
overemphasize the avoidance of losses will thus favor retaining the status quo.
. . . Even though choosing the status quo may entail forfeiting certain
positive consequences, when these are represented as forfeited “gains” they are
psychologically given less weight than the “losses” that would be incurred if
the status quo were changed.
But what about inconsistencies in the preferences of those who would oppose
permissive enhancement? Bostrom asks:
How is taking
Modafinil fundamentally different from imbibing a good cup of tea? How is
either morally different from getting a full night’s sleep? Are not shoes a
kind of foot enhancement, clothes an enhancement of our skin? A notepad,
similarly, can be viewed as a memory enhancement—it being far from obvious how
the fact that a phone number is stored in our pocket instead of our brain is
supposed to matter once we abstract from contingent factors such as cost and
convenience. In one sense, all technology can be viewed as an enhancement of
our native human capacities, enabling us to achieve certain effects that would
otherwise require more effort or be altogether beyond our power.
He goes on to question the grounds upon which opponents draw distinctions between
pills and all those other types of enhancements that they are already so
well-accustomed to. Sure, distinctions can be drawn, but are they “morally relevant”? According to Bostrom, the burden is on the opponents of enhancement to provide a rational justification if they truly believe such moral distinctions—or such intuitions—are warranted.
Meanwhile, Jeffrey J. Rachlinski points out that if you thought these biases
and illusory distinctions were bad enough, add in the fact that people are
overly optimistic and have self-serving conceptions of how well they do everything, and it is even less
surprising that “individuals seem to accept willingly risks associated with
activities that they voluntarily undertake” but then “overreact to risks that
are involuntarily imposed on them.”
So what is the root, underlying cause of these illusions, biases, and motivational
distortions? Well, “humans evolved in natural environments, both social and
physical,” and are born pre-programmed with the goals of survival and reproduction. To achieve those goals, we have been forever faced with the choice of either
adapting to our environments, or, changing them. The decision-making
fallibility we display today lives on because, in the environments we evolved
in, we mostly needed “ecologically rational” rules of thumb to guide our decisions—we needed to solve problems quickly and with little information; thus, we needed “good” solutions, but we did not
necessarily need the “best” or most “optimal” solutions.
In fact, optimal solutions were likely impossible or counterproductive given
the exigencies and constraints of our evolutionary environments.
Today, we engage in much collective decision-making and often have the time and
resources to deliberate extensively in pursuit of optimal decisions, using the
content-blind norms of logic. The “fast and frugal” “heuristics”—or, “rules of
thumb”—of our evolutionary past hastened our decision-making processes
effectively, but today’s industrialized world is drastically different from the
environment we evolved in, and many of our heuristics have become
counterproductive to our new environment-specific goals. For instance, the
“shortcuts” that in the past thankfully guided us to quick decisions to “do
what the group is doing” when the group was running from potential attack now
merely steer us toward the boxers Michael Jordan endorses and the norms adhered
to by the kids at the cool lunch table.
The next question is then: how did we get so smart? “Technology” is defined as the “practical application of knowledge,” giving rise to new capabilities.
And it is the rigorously rational scientific method—not mental shortcuts—that allows us to test the “truth” of propositions and, thus, acquire knowledge.
Indeed, the scientific method is surely the root cause of every “improvement” to what we conceptualize as the human “standard of living”—and “intelligence” is best defined as “the ability to attain goals in the face of obstacles by means of decisions based on rational (truth-obeying) rules.”
So how did we go from heuristics to the level of rigorous rationality that
allowed us to fly to the moon?
One theory is that intelligence is the product of a “cognitive arms race.”
To understand this concept, it is
helpful to remember that human life is inadvertently guided by genetic
programming. But, while we are guided to achieve many goals that appear to be
ends in and of themselves, these goals are really just means to our ultimate
human ends of survival and gene replication.
There are three motives that are essentially direct sub-goals to our genetic ends, each demonstrably universal among the species: the acquisition of resources, the acquisition of status, and preference for kin.
And each of these sub-goals entails sub-sub-goals. The recognition of
“naturally” motivated sub-goals and sub-sub-goals provides testable predictions
about human cognition and behavior (especially morality and decision-making)
and contributes to the emerging theoretical understanding of human existence
through the lens of evolutionary science.
The first step is to reverse-engineer our motives in light of our ultimate genetic ends. The most basic goal is the acquisition of resources, as we need resources to
survive, and our genes program us to want everyone who shares our genes to have
the resources necessary to survive as well. But resources are rivalrous, so we
must compete with the rest of the species to acquire them. Competition is not the only strategy, however, and very often, we can all gain much more by cooperating
than by pursuing our interests individually. But cooperation is a “tit for tat”
enterprise, and no one wants to
incur all the cost and receive none of the benefit, so someone who believes her
willingness to cooperate is being taken advantage of will want to punish the
“cheater” and send a signal to alert everyone (including other would-be
cheaters) that she will not tolerate such injustice.
Sometimes, though, she will be the
one who wants to cheat, simply because she will be able to gain more by taking
advantage of another person than by cooperating. Sometimes she will cheat. But, most importantly,
she—well, all of us—will want to establish reputational credibility—we will want others to believe: (1) that we are willing to
cooperate fairly and are willing even to act altruistically toward
non-relatives but will not tolerate being taken advantage of and will punish
those who attempt to take advantage of us; and (2) that we are good at
acquiring necessary resources and/or have already acquired a lot of resources
and possess “good genes” (if “good genes” are defined in terms of, say, healthiness and ability to acquire resources). From here, it is easy to conceptualize the natural selection of intelligence as a
“cognitive arms race”:
[selection favors] cheating when the altruist will not find out or when she will not break off her
altruism if she does find out. That leads to [people becoming] better
cheater-detectors, which leads to more subtle cheating, which leads to
detectors for more subtle cheating, which leads to tactics to get away with
subtle cheating without being detected by the subtle-cheater-detectors, and so on.
Trivers and many others have explained the emotions as “strategies in the game” of reciprocal altruism.
The emotions of “love” or “liking,” they say, make us willing to help those who
seem willing to help us.
“We like people who are nice to us, and we are nice to people whom we like.”
“Anger” protects us when our niceness has been exploited. “Gratitude” makes us
want to return favors.
“Sympathy” makes us want to help those in need, or, if we would like to be
cynical, it helps us “purchase gratitude.”
“Guilt” warns us we might be caught cheating, and, once we are caught, “shame” encourages us to try to repair the damage “with
a public display of contrition.”
In the game of “cheater vs. cheater-detector,” “the search for signs of
trustworthiness makes us into mind readers, alert for any twitch or
inconsistency that betrays a sham emotion.”
The economist Robert Frank has pointed out, “The key to the survival of
cooperators . . . is for them to devise some means of identifying one another,
thereby to interact selectively and avoid exploitation.”
This important endeavor is an intellectual one in that the detection of
inconsistencies requires us to take a range of information (i.e., another human’s
words, actions, or facial expressions) and subject it to deductive logical
analysis. One tremendously interesting experiment has shown that humans use
logic effortlessly in the context of cheater-detection but are thrown
completely off track by identical logical reasoning tasks that lack a moral
dimension. Leda Cosmides and John Tooby have illustrated this finding as follows:
Imagine that part of
your new job for the City of Cambridge is to study the demographics of
transportation. You just read a report on the habits of Cambridge residents
that says: “If a person goes into Boston, then that person takes the subway.”
The cards below have
information about Cambridge residents. Each card represents one person. One
side of a card tells where a person went, and the other side of the card tells
how that person got there. Indicate only those card(s) you definitely need to
turn over to see if any of these people violate this rule.
From a logical point
of view, the rule has been violated whenever someone goes to Boston without
taking the subway. Hence, the logically correct answer is to turn over the
“Boston” card (to see if this person took the subway) and the “cab” card (to
see if the person taking the cab went to Boston)…
In general, fewer
than 25% of subjects spontaneously make this response. Moreover, even formal
training in logical reasoning does little to boost performance on descriptive
rules of this kind…
. . . .
When the content of a problem asks subjects to look for cheaters in a social
exchange . . . subjects experience the problem as simple to solve, and their
performance jumps dramatically.
Now try this one. Imagine that you are the bouncer in a nightclub (where patrons
must be 18 to enter but 21 to drink). You arrived to work late so you did not
check IDs at the door and no one handed out wristbands to those who are over
21, so now you must walk around and enforce the rule: “If a person is drinking
alcohol, s/he must be 21 or older.” You may check either what a person is
drinking or how old that person is. Of the following four people, you can see
what two are drinking but do not know their ages, and you know the ages of the
other two but are not sure if their drinks contain alcohol. Who do you have to check? Most people know right away that they must check the beer drinker and the underage patron.
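Both versions are the same abstract task: testing a rule of the form “if P then Q,” where the only informative cards are those showing P and those showing not-Q. A minimal sketch (the helper function and the specific card labels are my own illustration, not part of the Cosmides–Tooby materials):

```python
# For a rule "if P then Q", the only cards that can falsify it are those
# showing P (is Q on the back?) and those showing not-Q (is P on the
# back?). Cards showing not-P or Q cannot violate the rule.

def cards_to_turn(cards, is_p, is_not_q):
    """Return the cards that must be checked to test 'if P then Q'."""
    return [c for c in cards if is_p(c) or is_not_q(c)]

# Transportation version: "If a person goes into Boston, they take the subway."
travel = ["Boston", "Arlington", "subway", "cab"]
print(cards_to_turn(travel,
                    is_p=lambda c: c == "Boston",
                    is_not_q=lambda c: c == "cab"))
# -> ['Boston', 'cab']

# Bouncer version: "If a person is drinking alcohol, they must be 21+."
patrons = ["beer", "soda", "age 25", "age 17"]
print(cards_to_turn(patrons,
                    is_p=lambda c: c == "beer",
                    is_not_q=lambda c: c == "age 17"))
# -> ['beer', 'age 17']
```

The two calls are logically identical; the experimental finding is that humans breeze through the second framing and stumble on the first.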
As this experiment may indicate, rationality was selected for the benefits it
confers upon us in our unknowing pursuit of our ultimate human ends.
But a corollary to this theory is that, if there are instances when
irrationality remains more beneficial to those same ends, then our sheer
potential for rationality in some instances does not imply that we will be
capable of it in all instances.
Consider another experiment, known as the “ultimatum game.”
One participant (player one) is given $10 and told that he must divide the $10
between himself and another participant (player two)—if player two accepts the
amount player one offers him, they both get to keep the money. If player two
rejects the offer, neither gets any money.
According to classical rational choice theory, a “rational player one” would offer a
penny to player two and a “rational player two” would accept a penny because a
penny is more than player two had at the start of the experiment.
But the data show that the average player one offers around one-third of the
money, and over 25% of people in the role of player one propose a fifty-fifty
split, whereas only 11.8% try to keep more than 90%.
In other words, player one wants to seem fair or cooperative, or he worries he
will be punished for seeming unfair or uncooperative. At the same time, five
out of six people in the player two role who were offered one dollar or less
rejected the offer, and six who were offered more than one dollar rejected the
offer!—presumably to punish what they perceived as uncooperative behavior
and/or deter player one from trying to take advantage of them in the future.
Because we humans indeed spent most of our evolutionary history living in small social
groups and interacting with the same individuals day-in and day-out, it is
understandable that we do not distinguish one-shot interactions from iterated
games. The ultimatum game is a one-time-deal—the subjects go to the laboratory
and participate in the experiment once and leave. If this were a game to be
played over and over between the same two participants (i.e., if it were
“iterated”), it would make sense, in the first round, for player two to
“irrationally” refuse to accept a low offer (whereas classical economics
predicts acceptance of any value).
If the game has to be played, say, 50 times, player two can keep rejecting
offers (meaning neither will get any money) until player one’s offers go up
enough to be to his liking. Meanwhile, player one might want to make an
“irrationally” generous offer right from the start, or else player two might
spread the word around the village that player one was not a very nice guy to deal with.
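The logic of the iterated version can be sketched in a few lines. In this toy model (the dollar-per-rejection adjustment rule and the $3 acceptance threshold are illustrative assumptions, not parameters from the experimental literature), early “irrational” rejections cost the responder little and quickly pull offers up:

```python
# A proposer starts with a stingy split of $10 and raises the offer by
# $1 each time it is rejected; the responder rejects anything under $3.
# In a one-shot game the rejection just burns money, but over repeated
# rounds it disciplines the proposer into making fair offers.

def iterated_ultimatum(rounds: int, threshold: float = 3.0):
    offer = 1.0            # proposer's opening (stingy) offer
    responder_total = 0.0
    for _ in range(rounds):
        if offer >= threshold:
            responder_total += offer  # accept the split
        else:
            offer += 1.0              # reject; proposer sweetens the pot
    return offer, responder_total

final_offer, total = iterated_ultimatum(50)
print(final_offer, total)  # 3.0 144.0
```

Over fifty rounds the responder forfeits only the first two before the offer stabilizes at the $3 threshold—roughly the one-third share that real player ones tend to offer even in one-shot games.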
The psychology of moral indignation is complex, but there is much reason to believe
that if permissive cognitive enhancement is not made to seem fair—or its
benefits made to seem tangible—to those who perceive that it would do injustice
to their individual interests, then those individuals will see such a policy as
equivalent to being offered a penny from player one in the ultimatum game.
Furthermore, they will potentially see the policy’s advocates as the arbitrary
and undeserving beneficiaries of the other $9.99. Meanwhile, given the
self-serving biases (which, by the way, were arguably selected for because we are better at convincing others of our trustworthiness or sincerity if we have first convinced ourselves), we can expect opponents of
permissive enhancement to be resistant to rational arguments that attempt to
persuade them of their own irrationality, as they will be overly-optimistic
that their perspective is indeed the “just” perspective, if not the “rational”
one as well.
Greely has called on academics, scientists, and, well, whoever else reads
academic and scientific journals, to lead us on the path toward increased
“public understanding of cognitive enhancement.”
Greely assumes “new laws and regulatory agencies” will not be
necessary (existing laws could simply be adjusted), and he seems to be
suggesting that regulatory agencies should go right ahead and start allowing
pharmaceutical companies “to market cognition enhancing drugs to healthy adults.” The “education” that would lead to “public understanding” would be provided by
physicians, teachers, college health centers and employers, “similar to the way
that information about nutrition, recreational drugs and other public-health
information is now disseminated.”
And, “ideally,” this education “would also involve discussions of different
ways of enhancing cognition, including adequate sleep, exercise and education,
and an examination of the social values and pressures that make cognitive
enhancement so attractive and even, seemingly, necessary.”
Education alone, however, is unlikely to bring about the public’s acceptance. The
dissemination of cold, hard facts can rarely change or override the powerful
and automatic emotional responses people are likely to experience—only stronger
emotional responses can do so.
Proponents would thus need to go further than merely spelling out the “rational
arguments” or the “social values” that make cognition enhancement a good idea.
Everyone probably already agrees, in a vacuum, that “innovation” would be a good thing. But even assuming the Food and Drug Administration (“FDA”) regulators could be
persuaded, it could be counterproductive for pharmaceutical companies to begin
marketing cognition enhancers to the healthy prior to acquiring widespread
public support for enhancement. The fact that it was not necessary to ask first
whether the public “agreed” is not going to stop the public from voicing the
type of outrage that could put the brakes on the whole plan. Consider this 2001
excerpt from TIME Magazine:
Suddenly, stem cells
are everywhere. Once relegated to the depths of esoteric health journals, the
microscopic clusters have made their way to the nation’s front pages. The
complexity and drama surrounding these relatively simple cells has increased
due to a ticking clock: By the end of the month, President Bush is scheduled to
decide whether to continue federal funding for stem cell research.
Greely is right when he says cognitive enhancement is what we do when we get “adequate
sleep, exercise,” go to school, drink coffee, use calculators, and so on—yet
this explanation is unlikely to persuade humans whose automatic sentiment is
that a “pill” is still somehow “different” in a “bad” way. Humans will still
sense an inarticulable difference even if they comprehend the fact that there is no such difference
(much like when they are asked whether they would push the fat man onto the
tracks to stop the trolley).
If the New York Times were to point out that there is no rational basis for
staking one’s happiness on the success or failure of a professional sports
team, would everyone slowly stop watching football? No.
In this section, I argue that, for permissive use of cognition enhancing drugs by
the healthy to become a reality, a “moral entrepreneur” (acting as a
“libertarian paternalist”) would need to take up the cause energetically and
use the findings of cognitive and behavioral science to the same ends as the
Partnership for a Drug-Free America did when they took up the utilitarian goal
of preventing drug use.
Politicians know better than anyone to watch what they say. The wrong gesture—the wrong
whisper—replayed in the wrong context, can be a media disaster. President
Barack Obama called Kanye West a “jackass” prior to a CNBC interview, thinking
he was speaking off the record, and then, as laughter erupted among the
reporters present, he pleaded with them to keep the joke under wraps,
“[Because], I remember last time,” he said, “there was the fly thing.”
Obama was referring to a fly he killed while in the middle of giving a press
conference—after which, the media went on and on about the President’s
fly-killing prowess. The CNBC interviewer responded, laughing, “No, that worked
out well for you—you were a ninja.” To which Obama replied (just before the
audio cuts out), “Except, PETA…”
Indeed, the People for the Ethical Treatment of Animals (“PETA”) called the incident an “execution”; the fly, after all, had merely been distracting the President during a speech—resulting in a morally
catastrophic murder. "We believe that people, where they can be
compassionate, should be, for all animals," a PETA spokesman explained.
Like the Partnership for a Drug-Free America, PETA is a nonprofit group devoted to
spreading an immodest and authoritative moral message: animals belong in our
utilitarian calculus too. And, like the Partnership for a Drug-Free America,
PETA does not simply try to “educate the people” or merely list the
“facts”—PETA tries to steal attention any way that it can in order to elicit
strong automatic emotional responses. PETA is known for stunts like
“infiltrating fashion shows and unfurling signs that read (in one instance) ‘Gisele: Fur Scum.’” And one of PETA’s “most shocking
(and perhaps [most] effective) ploys has been to display on its website actual
footage of dogs being abused and slaughtered in China.”
PETA has also tried to recruit celebrity endorsements. When a magazine reported
that Ben Affleck had purchased a chinchilla coat for Jennifer Lopez, PETA sent
Affleck a letter (accompanied by an actual video) showing the process by which
nearly one hundred chinchillas were killed to make the garment:
The preferred method
of killing chinchillas is by genital electrocution: a method whereby the
handler attaches an alligator clamp to the animal’s ear and another to her
genitalia and flips a switch, sending a jolt of electricity through her skin
down the length of her body. The electrical current causes unbearable muscle
pain, at the same time working as a paralyzing agent, preventing the animal
from screaming or fighting.
Affleck wrote back to say his eyes had been opened and that he would never again be a part of
such cruelty and barbarism.
Tally another convert for the group that boasts millions in donations, millions
of members, and millions of hits on its website.
So, then, which is more likely to come to pass: the entire world turning
vegan—i.e., citizens of the United States ceasing consumption of all animal
products, including eggs and milk—or the United States amending its regulations
for the permissive use of cognition enhancing drugs by the healthy? At present,
there is no way to know; there has not yet been a poll taken among a
sufficiently random sampling of the average lay public in regard to cognitive
enhancement. And of PETA’s million-plus supporters, many may join up or donate
for the sake of dogs and cats treated inhumanely abroad while still having no
intention of foregoing their burgers. But even without polling, it may still
make sense to assume PETA’s extreme goal of animal equality is much farther
along on the road toward public support. Even if most of PETA’s millions don’t ever intend to give up their burgers, their membership or their endorsement of
a fragment of PETA’s cause arguably indicates that they’ve been made to feel
guilt or sympathy. By exposing humans to vivid images and accounts of animal
cruelty and the brutal processes by which animals are converted to dinner, PETA
has at least made it so that the millions it counts as supporters—even if they
continue to eat their burgers—will not feel self-righteous in doing so, and
will be much less likely to feel morally indignant in response to those who say
they should not be eating burgers. If anything, they will likely feel guilt. If
those whose automatic intuition is to oppose cognition enhancement were made to
feel guilty for doing so, this emotional response could potentially override
the indignation and self-righteousness they would experience and express as
opponents. It is the latter sentiments that form the bulwark of a stalwart opposition. PETA was founded in 1980 by a woman named Ingrid Newkirk, who was spurred to action
after reading Princeton bioethics professor (and famed utilitarian) Peter
Singer’s groundbreaking book Animal Liberation. Judge Richard Posner, who has said he finds Singer’s brand of utilitarianism unpersuasive, has nevertheless agreed that
shock-provoking appeals to the moral emotions are probably the best way to
persuade people to rethink a deeply ingrained belief or value.
Posner criticizes “academic moralists” who he says are naïve in believing that,
if only they can persuade people that they “ought to do something because it is
the moral thing to do, this recognition, this acceptance…will furnish a motive
to do it.”
That is simply not the case, says Posner. If that “right” thing to do does not come
naturally, then one would also have
to be the type of person who obtains satisfaction from doing what he or she
believes is accepted as the “right” thing to do.
“[W]hen,” Posner asks, “was the last time a moral code was changed by rational argument?” A “rational” moral code does not speak for itself; there is always the problem
of human motivation.
In contrast to the “academic moralist,” Posner juxtaposes the “moral entrepreneur”—the person who actually knows how to change a moral code. Moral entrepreneurs are like “arbitrageurs in the securities markets. . . . They spot
the discrepancy between the existing code and the changing environment and
persuade the society to adopt a new, more adaptive code.”
They don’t do this
with arguments . . . [r]ather, they mix appeals to self-interest with emotional
appeals that bypass our rational calculating faculty and stir inarticulable
feelings of oneness with or separateness from the people…
Religions know that
to motivate people to act against or outside their normal conception of
self-interest requires carrots and sticks, rituals to build a sense of
community, habituation . . . [t]he military knows, and early Christianity knew,
that motivating people to sacrifice or to risk their lives requires psychology
to forge group loyalties and often the promise of posthumous rewards, whether
salvation or glory. You won’t get far by trying to persuade people that your
cause is, upon reflection, morally best.
This seems quite right, although a moral entrepreneur’s evocation of the right
emotional responses will probably be more important for persuasion than any
promise of external incentives. After all, it is our emotions that provide our
incentives in the first place. For instance, Yale law professor Dan Kahan has
shown that, because we are “emotional and moral reciprocators who loathe being
taken advantage of” but otherwise want to “understand [ourselves] and be
understood by others as fair,” our perceptions of whether or not others are paying their taxes will be
the greatest indicator of whether we will feel bound to pay our taxes,
irrespective of any change in penalty or in the probability of being caught.
Kahan cites an experiment that tested the effects of the 1986 Tax Reform Act on
levels of compliance; the results showed no correlation between tax code compliance
and relative tax burden.
“What did shift patterns of
compliance,” he explains, “were the types of interactions that individuals had
with other taxpayers in the months leading up to the reform: those who
encountered others who expressed a positive attitude toward, and commitment to
complying with, the Tax Reform Act displayed greater commitment to complying
with it themselves, whereas those who encountered others who expressed negative
attitudes displayed less commitment.”
Here we find one dramatic and counterintuitive conclusion for use by the moral entrepreneur. “The mechanism for these effects,” says Kahan, “appears to be” trust: when the state engages in dramatic gestures to make individuals aware that the penalties for
tax evasion are being increased, it also causes individuals to infer that more
taxpayers than they thought are choosing to cheat. This distrust of one’s
neighbors triggers a reciprocal motive to evade, which dominates the greater
material incentive to comply associated with the higher-than-expected penalty.
In other words, calls for “drug testing before exams,” for instance, and reports
that students are using cognition-enhancing drugs without prescriptions (illegally, that is) as study aids and for taking exams, are likely to have a strong
signaling effect—but what exactly are these reports signaling? One might assume
the common response from those who do not use enhancers illegally would be
gross indignation and feelings of contempt toward the cheaters, as well as toward
the prospect of permissive cognition enhancement in general. And this may very
well be true. However, is it possible that the indignant critics, if truly
given the opportunity, would actually then be more inclined to begin using cognition enhancers themselves rather
than to continue to protest a transgression they likely see as inevitable and
unpreventable in reality? (I mean, is it likely that drug testing before, say,
the Law School Admissions Test, will become feasible?—it would be a huge cost
for the small benefit of appeasing indignant critics who are likely to still
take the test anyway; and it would be tricky to decide what substances should
be permissible and what should not be permissible—what about Piracetam,
for instance?—and there could be a mess of constitutional questions in light of
the fact that certain public institutions require students to take the test.)
All in all, a keen moral entrepreneur might seize upon the fact that indignant
critics of cognition enhancement who voice the “cheater concern” may be among
the first and easiest converts to the cause. Just like those who feel like
“suckers” when they find out others have not been paying their taxes, those who
are indignant toward permissive cognition enhancement could be the first in line
ready and willing to join in on the “cheating” should it become as safe and
feasible for them as is, say, illegal downloading of music.
Moral entrepreneurs may also attempt to exploit the well-documented effects of “framing.” Remember, in Part I, we saw that losses loom larger than gains. Cass
Sunstein has said one solution to this irrational tendency is for the
libertarian paternalist (or, moral entrepreneur) to frame gains as losses.
Instead of arguing that “doing X would be beneficial to you and everyone else,”
the entrepreneur might explain that “you (or, we) cannot afford to not do X.”
It is true that one can get carried away with framing, but its implications
should not be understated. Sunstein says that when it comes to environmental
regulation, it would be a good idea to “insist that policymakers are trying to
‘restore’ water or air quality to its state at date X; a proposal to ‘improve’
air or water quality from date Y may ‘code’ quite differently. The restoration
time matters a great deal to people’s judgment.”
From there, one could imagine cognitive enhancement proponents claiming that
“we” must “restore” America to its “rightful” level of productivity or restore America to its rightful place as the most powerful nation in the world. “Extremeness aversion” is another cognitive bias that might be used by the libertarian
paternalist turned moral entrepreneur. “People are averse to extremes,”
Sunstein explains, and “[w]hether an option is extreme depends on the stated alternatives.” Furthermore, extremeness aversion “gives rise to compromise effects. As between given alternatives, people seek a compromise solution.” In other words, an entrepreneur might explain that we are in danger of losing
ground as a powerful nation and we must be practical—no one is calling for
anything so extreme as genetic enhancement (we are not at a point where we must
install chips in human brains), but we are foolish if we do not take advantage
of the technologies we possess, which are really much more akin to those that
are already in widespread use, such as caffeine.
Of course, I am by no means saying that such an approach would be successful or
even desirable. These examples do not come close to forming a complete “game
plan” for proponents of cognition enhancement. Furthermore, there is a thin
line between libertarian paternalism and deleterious coercion, and it is for
others to decide where that line is—or, should be—drawn. The foregoing
hypotheticals are meant only to illustrate that, just as inherent biases,
intuitions, logical fallacies, and motivational distortions can render sound,
rational ideas implausible to humans, these same hard-wired decision-making
flaws can play a role in engendering support for such ideas.
Sunstein has said, “The concept behind libertarian paternalism is that it is
possible to maintain freedom of choice—that's libertarian—while also moving
people in directions that make their own lives a bit better—that’s paternalism.
We think it's possible to combine two reviled concepts.”
Of course, it is also an open question whether—and when—people’s lives would
even become better if the goal of engendering widespread support for cognitive
enhancement is achieved. But, what we do know is that if humans are predisposed
to self-righteous indignation in opposition—to any idea—we will never find out
what good could have come. And, ultimately, the tactics of libertarian
paternalism are no different than the type of marketing that has been used by
the Partnership for a Drug-Free America, as well as by PETA in the interest of
animal welfare. Still, I would like to further address the issue of deceit and
exploitation in the section that follows.
Even though everyone might want to appear fair and act cooperatively so long as
everyone else does the same, there will always be cheaters who experience
emotions like sympathy, shame, and guilt to a lesser extent than the norm. If
this were not true, there would be little need for police and criminal law.
Nicholas Wade, a New York Times science reporter and the author of The Faith Instinct: How Religion Evolved and
Why it Endures, suggests that religion has occurred “in societies at every
stage of development and in every region of the world,” ultimately because it
served as “an invisible government.”
“The ancestral human population of 50,000 years ago, to judge from living
hunter-gatherers, would have lived in small, egalitarian groups without chiefs
or headmen,” Wade explains.
It was religion, then, that ensured the people would “put their community’s
needs ahead of their own . . . [f]or fear of divine punishment.”
With the Enlightenment came the intellectual in-group norm of epistemic virtue.
Dogma was deemed dangerous. Arbitrary claims to authority were to be
challenged, received wisdom was to be scrutinized, and “reason” was to take
center stage as the only means of ascertaining “truth.” Human capacity for
reason brought us Immanuel Kant’s slightly misguided idealism, but it also
brought us liberal democracy. The judge moved from the heavens to the bench.
The dictates of the various religions—and the religions themselves—arguably
unnecessary in a world of constitutionally protected rights, criminal statutes,
police, checks and balances, and markets—carry on as anachronisms in the
industrialized democracies. But we are today, as a people, willing to allow
religious leaders a place in our discourse even though they prey on the
cognitive fallibility of humans to generate mass support for the oppression and
persecution of homosexuals, for instance.
Kant and Rawls might find it improper to use the vestiges of evolutionary history
deceptively to persuade people to support anything—to
these thinkers, such means are not justified by the ends.
But Kant is now officially a vestige of an era a bit too enamored of the
people’s potential for rationality. And if we find it tolerable today that
religious groups can use deception to convince the masses that being gay is a
sin and that abstinence-only sex education is sound policy—if we find it
tolerable that PETA can use shock-tactics that target the “availability
heuristic” to garner support for animal welfare—if we find it tolerable that the same tactics
that were used by the Nazis to dehumanize the Jews were used by paternalists to
dehumanize adolescent drug users—if we find it tolerable that “going green” is
now somehow the “cool” (i.e., “moral”) thing to do—if we find it tolerable to
perpetuate the basic notion of “free will” even if the latest science causes us
to question whether we can ever truly be “responsible” for our actions—if
we find it tolerable that the U.S. Constitution may never have survived if
not for some prodding and persuading of the masses with respect to the virtues
of civic engagement and the principles and merits of constitutional government—how
could we ever find it intolerable that moral entrepreneurs engage in the
marketing of utilitarian policies that stand to offer non-zero-sum returns?
It is another story entirely to suggest that the government should directly
mislead its citizens. But policy advocates are mere citizens themselves, as are
moral entrepreneurs. In a sense, the “exploitation” debate raised here is
similar to the debate over physicians prescribing placebos. Whether placebos
are justifiable is an open question upon which reasonable minds may
disagree—and disagree they should, until consensus forms around the best
rational argument. There is no perfect means for deciding when “beneficent
deception” is and is not justifiable; however, marketing is what it is, and those best equipped
to analyze costs and benefits critically and empirically should not hand down
their findings from a pedestal “above” the tools of marketing if they expect
their findings to have an impact on decision-making among lay people.
Taken to its logical extremes, utilitarianism can give itself a bad reputation.
Debating Judge Richard Posner on the proper scope of human duty to animals,
Peter Singer admitted that if the pain a child suffers from the bite of a dog
would be less than the pain the dog suffers from human intervention to prevent
the bite, the dog must be left to bite.
To this, Judge Posner replied that there are perhaps some human intuitions that
even the most persuasive utilitarian might never be able to dissuade.
But, at the same time, utilitarianism cannot be “counted out.” Joshua Greene
has noted that “[n]early everyone agrees . . . all other things being equal,
raising someone’s level of happiness, either your own or someone else’s, is a
good thing, and . . . lowering someone’s happiness is a bad thing.”
It seems that utilitarianism and deontology will battle on, just as individual
and collective interests in general are destined to battle on. For now, we can
only hope to deal reasonably and pragmatically with the truths we are able to
discover about the nature of human existence.
In this article, I have set out a descriptive account of human morality and
rationality rooted in contemporary moral psychology, cognitive science, and
evolutionary theory. I have examined our many hard-wired and learned
strategies, and I have noted that these strategies serve us well in many
circumstances, but, as Jeffrey Rachlinski says, “they also create
vulnerability to the predations of advertisers, political spin doctors, trial
attorneys, and ordinary con artists.”
From there, I set out a normative claim: that, in light of the cognitive fallibility
of humans, advocates for the permissive use of cognition-enhancing drugs by the
healthy—if they hope to engender enough public support—probably need to look to
the decidedly non-Kantian methods of moral entrepreneurism and must account not
only for the cognitive flaws that can engender automatic opposition to
utilitarian initiatives but also for those that can potentially sway indignant
opponents the other way. With the caveat that I was by no means presenting a
complete “game plan,” nor even advocating for such policy change myself, I
argued that these methods might simply be objectively necessary to such a
pursuit, while their use can be justified under Cass Sunstein’s banner of
“libertarian paternalism.” Furthermore, I noted that moral entrepreneurism has
been behind the most influential moral movements of history (including Nazism,
the Partnership for a Drug-Free America’s 1980s anti-drug campaign, PETA’s
animal-welfare efforts, the U.S. civil rights movement, the global “go green”
initiative, the Church’s ever-present bigotry and intolerance, and even,
perhaps, the founding fathers’ initial promotion of constitutionalism).
Cognitive science plays a large role in this article, so I would like to add a brief note
about the mixing of disciplines. Today, formal psychology is seen as a
relatively new and distinct field of inquiry, having arrived little more than a
century ago with the work of Wilhelm Wundt and William James. But from a
broader standpoint, psychology is, and has always been, at the very core of all
inquiry into human behavior. Division of labor, specialization, expertise—these modernizations have left us
with terms like “interdisciplinary approach”—but early Western thinkers were
“renaissance men” for the simple reason that the questions, “Why did he do
that?” and “What was he thinking?” have been pertinent to the understanding of
just about every human social interaction since the beginning of recorded
history. Law, economics, moral philosophy, political science—these are wholly
distinct concentrations that college freshmen and doctoral advisers alike must
choose from. But these disciplines manifest themselves in perfect harmony at
any dinner table where siblings conspire and compete while parents manipulate
beliefs and incentives in order to enforce rules based on moral intuitions and
community norms. In other words, the ideas and theories discussed in this
article (especially those that are empirically verified) are too important to
be ignored. Every discipline concerned with human social arrangements—law,
political theory, economics, sociology, philosophy, and, perhaps most especially,
journalism—must begin to take an “interdisciplinary approach” that accounts for
the most up-to-date understandings of the human mind, in order to avoid
inadvertently relying on outdated assumptions or “nibbl[ing] at the edge” of
the problem. In the end, “If experienced hucksters can identify defects in people’s reasoning
processes as part of an effort to swindle them”
or spread hate, or sell cereal, then policy advocates, who have only the
collective welfare in mind, should arguably be utilizing the human cognitive
defects for the “greater good.” Otherwise, their advocacy is liable to be
drowned out or ignored.
that said, there remains the interesting question whether human behavior and
decision-making might change or improve “once we have an understanding of the
operative principles of our moral knowledge and broadcast these principles to
an interested public.”
Marc Hauser, Professor of Psychology, Organismic and Evolutionary Biology, and
Biological Anthropology at Harvard, suggests in his book Moral Minds that “the moral faculty may guard its principles but
once its guard has been penetrated [by self-knowledge],” we might be able to
“use these principles to guide how we consciously reason about morally
relevant problems.”
This could also mean strategic engineering (i.e., via libertarian paternalism
and moral entrepreneurism) might become a less effective means to persuasion,
as humans may come to recognize such “marketing ploys” as manipulative.
This possibility indeed remains open; however, Hauser speaks of broadcasting these
principles to an “interested” public—meanwhile, it might take some strategic
engineering just to engage the public’s interest in the first place!
But a more concrete question is whether an article like this one—touting the
importance of “framing” and suggesting that such engineering may be a necessary
element of persuasion—could itself undermine the success of such engineering.
In other words, “[D]o the motives and strategies of moral entrepreneurs have to
remain hidden to work effectively?”
To this there is no clear answer either; however, if intelligence indeed evolved
in the context of cooperation and competition, in an escalating battle between
“cheaters” and “cheater detectors,” the battle may
well continue. Strategic engineering may often be rendered useless by
quick-witted “cheater detectors”—for instance, we may now look back and laugh
at Lyndon Johnson’s “Daisy Girl” campaign ad, in which a sweet little girl
counts down daisy petals until a nuclear bomb explodes on the screen amidst the
implication that Barry Goldwater would invite mutually assured destruction if
elected. But the cognitive arms race apparently continues—ploys simply become
more and more sophisticated until the “cheater detectors” catch up, and so on.
Looking back, from the “Daisy Girl” ad, to the egg in the frying pan, to PETA, I
suspect that the underlying motives of marketers have always been somewhat clear to a majority of the lay public.
I suspect this is the case with most contemporary advertising. The presentation
tactics have long been strategic, but as the persuasive power of certain
tactics wanes with time, new tactics emerge and are generally only more
strategic. Rigorous reason-based decision-making among the masses is surely
something to pine for, but widespread in-depth self-knowledge is its
prerequisite. Until then, the soundbite rules.
Partnership for a Drug Free America,
“This is Your Brain on Drugs,” http://www.youtube.com/watch?v=3FtNm9CgA6U.
See, e.g., Seth Stevenson, Aliens Don’t Do Drugs, Slate, June 25, 2007, http://www.slate.com/id/2168471/; and Famous Fried Eggs: Students
Debate Effectiveness, Accuracy of Well-Known Anti-Drug Commercial, CNN,
December 6, 2009.
Aristotle, Rhetoric, in The Basic Works of Aristotle 1318
(Richard McKeon ed., Random House 1941).
See Steven Pinker, The Blank Slate: The
Modern Denial of Human Nature 274 (2002) (“When someone strips a person
of dignity . . . ordinary people’s compassion can evaporate and they find it
easy to treat him like an animal or object.”).
Id.; see also Edward L. Glaeser, Paternalism and Psychology, 73 U. Chi. L. Rev. 133, 152 (2006) (noting
that soft paternalism can lead to the undesirable perception that those who
engage in certain behaviors are “unattractive human beings”).
Noam Chomsky, Media Control: The Spectacular Achievements of Propaganda (2002) (emphasis added).
See Steven Pinker, How the Mind Works 370 (1997) (“[T]he emotions are adaptations, well-engineered software modules
that work in harmony with the intellect and are indispensable to the
functioning of the whole mind.”).
Once triggered by a propitious moment, an emotion triggers the cascade of subgoals
and sub-subgoals that we call thinking and acting. Because the goals and means
are woven into a multiply nested control structure of subgoals within subgoals
within subgoals, no sharp line divides thinking from feeling, nor does thinking
inevitably precede feeling or vice versa.
Id. at 412 (“The passions are no vestige of an
animal past, no wellspring of creativity, no enemy of the intellect. The
intellect is designed to relinquish control to the passions so that they may
serve as guarantors of its offers, promises, and threats against suspicions
that they are lowballs, double-crosses, and bluffs.”).
See generally, Cass Sunstein, Behavioral Analysis of Law, 64 U. Chi. L. Rev.
1175, 1178 (1997) (“Much of this work calls for qualifications of rational
choice models.”).
Cass Sunstein and Richard Thaler
wrote a book called Nudge. Richard H. Thaler and Cass R. Sunstein, Nudge:
Improving Decisions About Health, Wealth, and Happiness (2008). See infra note 15 and accompanying text for more information.
See Sunstein, supra note 10, at 1178.
Those who deride our “sound-bite”
discourse and preach the intellectual in-group norm of epistemic virtue are
trapped in a paradox. When it comes to using rational arguments to advocate epistemic
virtue, the preacher will not be able to persuade the congregation until the
congregation has already been persuaded.
See Pinker, supra note 6, at 274–75 (“And the good reasons for a moral
position are not pulled out of thin air: they always have to do with what makes
people better off or worse off, and are grounded in the logic that we have to
treat other people in the way that we demand they treat us.”).
Sunstein calls for “anti-antipaternalism,” which is basically to say that maybe
we should reconsider flat-out, knee-jerk rejections of paternalism. See
Sunstein, supra note 10, at 1178.
See Richard H. Thaler and Cass R. Sunstein, Behavioral
Economics, Public Policy, and Paternalism: Libertarian Paternalism, 93 Am. Econ. Rev. 175 (2003).
See id. at 179 (describing “libertarian paternalism” as an approach that
preserves freedom of choice but that authorizes both private and public
institutions to steer people in directions that will promote their welfare).
“Cognitive enhancement may be defined
as the amplification or extension of core capacities of the mind through
improvement or augmentation of internal or external information processing
systems.” Nick Bostrom and Anders Sandberg, Cognitive
Enhancement: Methods, Ethics, Regulatory Challenges 1.
For instance, the animal welfare
movement and the “go green” initiative.
Cf. Sunstein, supra note 10, at 1179 (“Contrary to economic theory, people do not
treat out-of-pocket costs and opportunity costs as if they were equivalent.”).
Gerd Gigerenzer, Bounded and Rational, in Contemporary Debates In Cognitive Science 117 (R.J. Stainton ed., 2006).
See Pinker, supra note 6, at 302–04.
See Robert Trivers, The Evolution of
Reciprocal Altruism, in Common Themes in Primate Ethics 84–88
(Peter Singer ed. 1994). See also, id.
See Trivers, supra note 30; Pinker, supra note 6, at 269–75.
See Pinker, supra note 5, at 271 (explaining that the moral emotions frame our
See Richard A. Posner, The Problematics of
Moral and Legal Theory 19 (1999) (“Every society . . . has had a moral
code, but a code shaped by the exigencies of life in that society.”); Pinker,
supra note 9, at 368 (“To us, cow urine is a contaminant and cow
mammary secretions are a nutrient; in another culture, the categories may be
reversed, but we all feel disgust for contaminants.”)
See Posner, supra note 33, at 6 (“[W]hat counts as murder . . . varies
enormously from society to society.”).
See Richard Dawkins, The Selfish Gene, at vii (1976) (“We are survival machines—robot vehicles blindly programmed to
preserve the selfish molecules known as genes.”)
See Pinker, supra note 9, at 373 (“The emotions are mechanisms that set the
brain’s highest-level goals.”)
See Pinker, supra note 6.
Henry Greely et al., Towards Responsible Use of
Cognitive-Enhancing Drugs by the Healthy, 456 Nature 702, Dec. 11, 2008.
In some “games” (like baseball, for
instance), team B necessarily “loses” when the star player on team A hits a
game-winning home run. But when it comes to innovation, no one has to be a
“loser”—if an individual scientist develops an alternative source of energy, everyone stands to benefit.
Greely, supra note 39.
See generally, Jeremy Bentham,
Introduction to the Principles of Morals and Legislation ch. 1 (1789)
(“By the principle of utility is meant that principle which approves or
disapproves of every action whatsoever according to the tendency it appears to
have to augment or diminish the happiness of the party whose interest is in
question.”).
See Bailey Kuklin and Jeffrey W.
Stempel, Foundations of the Law: An Interdisciplinary and Jurisprudential
Primer ch. 1 (1994) (“A state of affairs is better, under this version,
when aggregated goodness (often expressed in utility) is increased, even when
some individuals suffer losses in order to facilitate greater gains by
others.”).
Nick Bostrom, The Infinitarian Challenge to Aggregative Ethics 1 (2008), http://www.nickbostrom.com/ethics/infinite.pdf
(asking how, if the universe is infinite, we can decide the scope of our
ethical obligations).
See Margaret Talbot, Brain Gain, The New Yorker, Apr. 27, 2009.
See Amartya Sen, The Idea of Justice 16 (2009).
Compare Judith Warner, Popping Pills: Cognitive
Enhancement, New York Times,
Nov. 30, 2008,
http://www.nytimes.com/2008/12/30/opinion/30iht-edwarner.1.19000670.html, with Randy Cohen, The Ethicist: Test Prep or Perp, New
York Times, Sept. 30, 2007,
http://www.nytimes.com/2007/09/30/magazine/30wwln-ethicist-t.html?pagewanted=print, and The Dilemma of Pills that Boost
Brain Power, Thestar.com, Dec. 14, 2008,
Francis Fukuyama, Our Posthuman Future: Consequences of the Biotechnology Revolution 82
(2003); Bostrom and Sandberg, supra note, at 34.
Fukuyama, supra note, at 208.
Benedict Carey, Brain Enhancement is Wrong, Right?, N.Y. Times, March 9, 2008.
See Kuklin and Stempel, supra note 44
Immanuel Kant, Fundamental Principles of the Metaphysics of Morals (1785).
Greely, supra note 39
(“Good policy is based on good information.”)
Id. at 38 (suggesting that public policy could anticipate certain concerns and
preempt them with safeguards).
In one instance, the precise headline
read: Is there Honor in Adderall. And
the first sentence of the piece asks, “Should Adderall be against the honor
code?” Stephanie Rudolph, Is there Honor
in Adderall, Bi-College News Online, Apr. 6, 2004, http://www.biconews.com/?p=3780.
For more examples, see infra note
See, e.g., Editorial: Just Say No to Study Drugs, Duke Chronicle, Jan. 13, 2009 (“It is . . . a morally
reprehensible means to get ahead in class. . . . Like other forms of academic
dishonesty, this behavior gives its users an unfair advantage over others. . .
. [It] is as dishonest as plagiarism and cheating on an exam.”); Neil Tambe, Culture of Immorality Needs a Hero, Michigan Daily, Mar. 5, 2009 (“[It]
provides an unnatural ability to focus . . . just like steroids. . . . Our
campus could use more people like Atticus Finch.”); Michael Bromberg, Adderall Addiction too Prevalent in Schools, UCLA Daily Bruin, Nov. 26, 2008
(“At a school as prestigious as UCLA, what will eventually come from a
generation of kids reliant on drugs to pass a class?”); Fenan Solomon, Adderall: Asleep to the Implications, University of Maryland - The Diamondback,
Oct. 24, 2008 (“So what happens to the students who take Adderall and break the
curve? Where's that in the honor code we sign before every exam? It's nowhere
to be found because the academic norms haven't been adjusted to address this
issue as the norms have adapted in the world of sports.”); Editorial: Adderall Not to Be Ignored, The Miami Hurricane, Sep. 17, 2008 (“While Barry Bonds defends
his position on steroid use, thousands of college students continue to consume
Adderall. . . . [Y]ou're not using it to be a better student: you're a drug
user.”). This is just a small sampling of my LexisNexis University Wire search
results. (University Wire is a service that archives college newspaper content.)
Bostrom and Sandberg, supra note, at 43 (“People with high social capital and good
information get access while others are excluded.”).
Some people will surely never want to
enhance their cognition at all, even with zero risk.
See John Rawls, A Theory of Justice 273–77 (1971) (explaining that neither moral
worth nor moral desert constitute one’s claims in such a scenario, but rather,
“when just economic arrangements exist, the claims of individuals are properly
settled by reference to the rules and precepts . . . which these practices take
as relevant.”).
An externality is a cost or benefit
of an action that the actor does not personally incur; thus, theoretically, the
actor would not take such a cost or benefit into account when deciding
beforehand whether or not to engage in the conduct. See William J. Baumol &
Alan S. Blinder, Microeconomics: Principles and Policies 234–35 (9th ed.
2003). For instance, if I own a factory that causes pollution in a nearby town,
but I do not live in that town, I do not have to endure the cost of living in a
polluted town, so that is a cost external to my business. Whether or not I will actually consider the cost or the consequences
of causing pollution is another story, but this is, at least, the theory behind
external costs. Id.
Happiness is relative. See, e.g., Robert H. Frank, Economic View: Does Money Buy Happiness?, New York Times, Aug. 30, 2006,
See Rawls, supra note 65, at 11.
Id. at 65 (“The intuitive idea is that the social order is not to establish and
secure the more attractive prospects of those better off unless doing so is to
the advantage of those less fortunate.”)
Rawls criticized utilitarianism for
its failure to take sincere account of the distinctions between persons. Id. at 156–57. Meanwhile, Michael Sandel
has pointed out that Rawls’s difference principle is similarly guilty of
ignoring the distinctions between persons and that this should technically be
fatal to a deontological theory, because self-ownership is the essential
justification for our first-order fundamental rights of freedom of speech,
freedom of conscience, and religious liberty, etc. See, Michael Sandel,
Liberalism and the Limits of Justice 140 (1998). Plus, if we do not own
our socially desirable conduct, how can we be accountable for our undesirable
conduct?
See Pinker, supra note 6.
Joshua D. Greene, Dual-Process Morality and the
Personal/Impersonal Distinction: A Reply to McGuire, Langdon, Coltheart, and
Mackenzie, Journal of Experimental
Social Psychology (2009).
Joshua D. Greene, The Terrible,
Horrible, No Good, Very Bad Truth About Morality and What to Do About It 334
(Nov. 2002) (unpublished Ph.D. dissertation, Princeton University).
See Joshua D. Greene et al., An fMRI Investigation of Emotional Engagement in
Moral Judgment, 293 Science 2105 (2001).
The trolley problem was first
introduced by Philippa Foot in 1967.
See Greene et al., supra note 76, at 2106–07.
Stephen J. Morse, Rationality and Responsibility, 74 S. Cal. L. Rev. 251, 259–60 (2000–2001).
David Hume, A Treatise of Human Nature 245 (1739).
G.E. Moore, Principia Ethica 65 (2nd ed. 1993) (1903).
See Pinker, supra note 6, at 271 (“People have gut feelings that give them
emphatic moral convictions, and they struggle to rationalize the convictions
after the fact.”)
See Jonathan Haidt, The Moral Emotions,
in Handbook of Affective Sciences 853–66 (R.J. Davidson et al. eds. 2003); see
also, Pinker, supra note 6, at 271–75.
Pinker, supra note 6, at 271–75.
See Sunstein, supra note 10, at 1194.
See Daniel Kahneman, Rationality Assumption, in Choices,
Values, and Frames 766 (Daniel Kahneman et al. eds., 2000).
See Sunstein, supra note 10, at 1194.
See Sunstein, supra note 16, at 177.
Nick Bostrom and Toby Ord, The Reversal Test: Eliminating Status Quo
Bias in Applied Ethics, 116
Ethics 656, 657, available at, http://www.nickbostrom.com/ethics/statusquo.pdf.
Bostrom and Julian Savulesu, Human Enhancement 2 (2008).
See Jeffrey J. Rachlinski, The Uncertain
Psychological Case for Paternalism, 97 Nw.
U. L. Rev. 1165, 1188 (citing Cass R. Sunstein, Selective Fatalism, 27 J. Legal
Stud. 799 (1998)).
Gigerenzer, supra note 24, at 119 (“Ecological rationality can be explained in
comparative terms: a given heuristic performs better in environment A than in
B. For instance, imitation of others’ successful behavior (as opposed to
individual learning) works in environments that change slowly, but not in
environments under rapid change.”).
Pinker, supra note 9, at 62.
See Robert L. Trivers, The Evolution of Reciprocal Altruism, 46 Q. Rev.
Biology 35, 50 (1971); Pinker, supra note 9, at 405; see also, Bailey Kuklin,
The Natures of Universal Moralities 14 (2009) (unpublished manuscript on file
with author).
Dawkins, supra note 35, at vii.
See Leda Cosmides & John Tooby, Evolutionary
Psychology: A Primer, at 5, available
at, http://www.psych.ucsb.edu/research/cep/primer.html (“Why do we find
fruit sweet and dung disgusting? . . . All animals need circuits that govern what they eat—knowing what is safe
to eat is a problem that all animals must solve.”). See also, Robert Frank, Commitment
Problems in the Theory of Rational Choice, 81 Tex. L. Rev. 1789, 1796 (2002–2003) (arguing that one could
formulate a compelling rational choice model based on “adaptive rationality” by
which any goal is plausibly rational if it does not “compromise one’s ability
to acquire and hold resources in competitive environments”).
See generally, Matt Ridley, The Red Queen: Sex and the Evolution of Human
Nature (1993); Robert Frank, Choosing the Right Pond: Human Behavior and the
Quest for Status (1985); Robert Frank, Passions within Reason: The Strategic
Role of the Emotions (1999); Kuklin, supra note 108, at 26.
See Pinker, supra note 9, at 429 (“The love of kin comes naturally; the love
of non-kin does not.”); Kuklin, supra note 108, at 6 (“Kin selection is based
on the proposition that one may enlarge one’s contribution to the gene pool
not only through one’s direct descendents, but also through those who are
genetically related.”).
See Trivers, supra note 108; Haidt, supra note 83.
See Robert Axelrod, Tit for Tat, in Common
Themes in Primate Ethics 88–91 (Peter Singer ed. 1994) (explaining that
the best strategy for an iterated Prisoner’s Dilemma is “tit for tat”—cooperate
on the first move, and then do whatever your opponent does for every move
after that).
Irrationality helps because sometimes it is important for people to believe
you are not in control of your actions. This is how you win a game like
“Chicken” (where two people drive straight at each other to see who will
swerve first). In real life, that game is called mutually assured nuclear
destruction. See Pinker, supra note 9, at 411. See also, Robert H. Frank,
Economics, in The Sociobiological Imagination 97 (Mary Maxwell ed., 1991);
Frank, supra note 110, at 1797.
See generally, Frank, supra note 112.
Pinker, supra note 9, at 404.
See Trivers, supra note 108 and accompanying text.
Id.; see Pinker, supra note 9, at 404.
See Pinker, supra note 9, at 404.
Frank, supra note 115, at 104.
Cosmides & Tooby, supra note 110, at 20–21.
See Pinker, supra note 9, at 336.
Id.; Cosmides & Tooby, supra note 110, at 20–21.
See sources cited supra note 115 and accompanying text.
Frank, supra note 115, at 94–95.
Classical rational choice presumes humans calculate costs and benefits and act
in their self-interest. Id. at 91.
Frank, supra note 115, at 94–95.
Id. See also, Axelrod, supra note 114, at 88–92.
Frank, supra note 116, at 96–98; Axelrod, supra note 114, at 88–92.
Robert Trivers, The Elements of a Scientific Theory of Self-Deception, in Natural
Selection and Social Theory (2002); Linda Babcock and George
Loewenstein, Explaining Bargaining
Impasse: The Role of Self-Serving Biases, 11 J. Econ. Perspectives 109 (1997).
Trivers, supra note 135; Babcock and Loewenstein, supra note 135. See also,
supra note 13 and accompanying text.
Greely, supra note 39, at 705.
See sources cited supra note 9.
Jessica Reaves, The Great Debate Over Stem Cell Research, TIME Magazine, Jul. 11, 2001,
See Posner, supra note 33, at 42 (noting that certain moral questions “stir
The audio can be retrieved here:
http://www.youtube.com/watch?v=z4Oh8uqMyWo, at forty-nine seconds in (last visited
December 7, 2009).
PETA Miffed at President Obama’s Fly “Execution,” Reuters, Jun. 18, 2009.
Michael Specter, The Extremist, New
Yorker, Apr. 14, 2004, at 52; see also,
Gary Miller, Exporting Morality with
Trade Restrictions: The Wrong Path to Animal Rights, 34 Brook. J. Int’l. L. 999, 1007 (2009).
See Miller, supra note 147, at 1008.
Specter, supra note 147, at 52.
Miller, supra note 147, at n.58 (“Newkirk founded PETA in 1980; today the
group boasts over two million members, annual donations in excess of $25
million, and over 50 million hits received at its various websites. For more
information, see People for the Ethical Treatment of Animals, About.”).
Specter, supra note 147, at 60.
Singer has agreed that we must kill a
baby before a dog if the dog suffers more. See Richard A. Posner & Peter Singer, E-mail
Debates of Noteworthy Topics: Animal Rights, Slate, Jun. 12, 2001, http://www.slate.com/id/110101/entry/110109/.
Miller, supra note 147, at n.103 (“Posner argues that humans already grasp
thoroughly that animals feel pain and that ‘to inflict pain without a reason is
bad’; thus, it is an altogether different task to persuade humans to stop
causing animals pain.”).
Posner, supra note 33, at 39 (“Moral entrepreneurs try to change the boundaries of altruism, whether by broadening them, as in the case of Jesus Christ and Jeremy Bentham, or by narrowing them, as in the case of Hitler.”).
Dan Kahan, The Logic of Reciprocity: Trust, Collective Action, and Law, 102 Mich. L. Rev. 71, 81.
Piracetam is a “nootropic” that has been shown to enhance memory; it is not regulated by the FDA and thus can be purchased legally on the internet in the United States.
See Sunstein, supra note 10, at 1180.
Americans love to believe America is
“number one” (for the same reasons they love to “root for the home team”).
Another example of the framing effect, and an amusingly relevant one, concerns Robert Nozick’s thought experiment challenging utilitarianism. Nozick asked whether people would be willing to plug into a sort of virtual reality experience machine: once in, they would fully believe their experiences were real, and all their experiences would be pleasurable. He assumed people would largely reject the idea. See Robert Nozick, Anarchy, State, and Utopia 42–43 (1974). But experimental philosopher Joshua Knobe has pointed out that if people are told they have been in the experience machine the whole time and are asked whether they would like to join reality (essentially the plot of the film The Matrix), they prefer the machine. Knobe attributes the differential responses to the two framings to the status quo bias. See Joshua Knobe, Would You Be Willing to Enter the Matrix, Psychology Today, April 13, 2008, available at http://www.psychologytoday.com/blog/experiments-in-philosophy/200804/would-you-be-willing-enter-the-matrix.
Sunstein, supra note 10, at 1181.
Josh Stephens, Green Nudges: An Interview with Obama Regulatory Czar Cass Sunstein, Grist, Apr. 6, 2009.
Nicholas Wade, The Evolution of the God Gene, New
York Times, Nov. 14, 2009, http://www.nytimes.com/2009/11/15/weekinreview/12wade.html.
See Stanford Encyclopedia of Philosophy, “Virtue Epistemology” (2004),
http://plato.stanford.edu/entries/epistemology-virtue/ (explaining the view
that it is virtuous for humans to be intellectually rigorous).
Rawls, supra note 65, at 115 (“The publicity condition is clearly implicit
in Kant’s doctrine of the categorical imperative insofar as it requires us to
act in accordance with principles that one would be willing as a rational being
to enact as law for a kingdom of ends.”).
“People tend to think that risks are
more serious when an incident is readily called to mind or ‘available.’”
Sunstein, supra note 10, at 1188.
Compare Joshua Greene and Jonathan Cohen, For
the Law, Neuroscience Changes Nothing and Everything, 359 Phil. Trans. R. Soc. Lond. B 1775, 1781
(2004) (predicting that “as more and more facts come in, providing increasingly
vivid illustrations of what the human mind is really like, more and more people
will develop moral intuitions that are at odds with our current social
practices”) with Kathleen D. Vohs and
Jonathan Schooler, The Value of Believing
in Free Will: Encouraging a Belief in Determinism Increases Cheating, 19 Psychological
Science 49 (2008) (presenting empirical data to support the claim, as
the title suggests, that as people come under the impression that they lack
free will, they are more likely to cheat).
See Jason Mazzone, The Creation of a
Constitutional Culture, 40 Tulsa L.
Rev. 671, 684 (2005). Mazzone notes:
leaders of the founding generation understood that the masses of citizens had to be united together—that there needed to be some level of social cohesion if constitutional government were to succeed. As Gordon Wood observed, ‘In building . . . an integrated national state, the Federalist leaders saw their principal political problem as one of adhesion: how to keep people in such a large sprawling republic from flying apart in pursuit of their partial local interests.’
Kolber, How to Improve Empirical Desert, 75 Brook. L. Rev. 23 (2010).
See Posner & Singer, supra note 153.
Greene, supra note 75, at 334.
Rachlinski, supra note 102, at 1165.
“Can you tell me, Socrates, can
virtue be taught? Or is it not teachable but the result of practice, or is it
neither of these, but men possess it by nature or in some other way?” Plato, Meno, in Five Dialogues 59 (G.M.A. Grube and
John M. Cooper eds. 2002).
F. Scott Fitzgerald, The Great Gatsby 20 (Scribner ed. 2004).
Rachlinski, supra note 102, at 1165.
Marc D. Hauser, Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong 159 (2006).
A quotation from an anonymous
reviewer of this article (May 20, 2010, 13:15 EST) (on file with author).