You’re given a choice of tips in New York cabs. Photograph: Chris Hondros/Getty Images

Endless options can be exhausting. We need to know when choice matters


Behavioural research suggests that we are prone to inertia, says the author of Nudge – hence we need to nurture our ability to discriminate

All over the world, taxis have installed credit card touchscreens, which make three possible tips visible and simple for customers to select with a quick “touch”. In New York City, the suggested amounts are usually 20%, 25% or 30%. People are free to give a larger tip, a smaller tip or no tip at all, but it is easiest just to touch one of the three conspicuous options.

What are the effects of the suggested numbers? The economists Kareem Haggag and Giovanni Paci compiled data on more than 13m New York taxi rides. They found that the touchscreen has led to a significant increase in tips – by an average of more than 10%. If a driver makes $6,000 in tips in a year, the touchscreens lead to an automatic $600 raise; and the taxi industry as a whole will receive many millions of dollars in additional annual revenue.

The suggested tips on touchscreens can be understood as “defaults”, which establish what happens if people make little or no effort. Behavioural scientists have found that, in countless settings, defaults have a massive impact on our lives. Much of the time, human beings choose not to choose. If that is your choice, the default is going to be decisive.

Suppose, for example, that an employer enrols you in a pension plan, or that your computer has a particular privacy setting, or that your hire car agreement has certain terms that will govern unless you alter them. In all of these cases, there’s a good chance that you’ll take the path of least resistance – and do nothing at all.

Defaults are powerful for two major reasons. The first is that they convey valuable information. If a touchscreen specifies three options, people might well think that it reflects a social norm about what’s appropriate or fair. And if your employer or your government enrols you in a particular pension plan, you might think that they’ve made an informed choice (unless you distrust them).

The second reason is that human beings are prone to inertia. We want to conserve on mental effort and we tend to like simplicity. Passengers might think that the right tip is 15%, or that the driver did not do a great job, but as they’re exiting a taxi, it’s easier just to tap one of the default numbers.

For a pension plan, a privacy setting or a hire car agreement, any change in the default will require an expenditure of effort. Some of us will expend that effort, at least if we think that the stakes are high, but a lot of us won’t.

In my new book, Choosing Not to Choose, I elaborate these points in some detail in an effort to understand exactly why default rules are so effective. One of my principal goals is to show that, in multiple domains, it is possible to achieve important public policy goals with the help of such rules and without forcing anyone to do anything. Especially in the modern era, where people are often overwhelmed, there is a great deal that can be said in favour of sustained focus on improved default rules.

Aware of the behavioural findings, and of people’s frequent reluctance to choose, both private companies and governments have been devoting a lot of thought to how to use defaults to save money, to improve education and to promote public health and safety.

For the future, some of the most intriguing and important questions involve personalisation. Might it be possible, and best, to go beyond the use of large-scale defaults, adopted for whole populations, and instead to personalise them, so that they fit people’s individual circumstances? In the context of pension plans, for example, personalised defaults make a lot of sense. People in their 20s and 30s should have different plans from people in their 50s and 60s.

In the US, some employers are recognising this point and using demographic information to default people into different investments (while of course allowing them to opt out). For health insurance, personalised defaults are also the wave of the future, because one size cannot fit all.

As large data sets accumulate, we could readily imagine much more ambitious approaches. Institutions already know, or can easily learn, what you, or what people like you, have chosen in the past, and they might use that information to devise default options for the future.

Amazon.com, Netflix and many other companies are doing something very much like that. Aware of your previous choices, they suggest books and films you are likely to enjoy – generating something like default suggestions, meant just for you.

That might seem alarming, but it is also a major convenience. And when the stakes are high, as in the contexts of medical care and financial planning, personalisation might prove to be a great boon, simply because it will provide people with outcomes that fit their personal situations.

At the same time, the use of personalised defaults does raise two serious concerns. The first involves privacy. Many people are not exactly enthusiastic to learn that private companies, or public officials, have monitored their past choices, even if the goal is to generate suggestions that will serve them well in the future. If people want to keep those choices private, they will have a legitimate and possibly fierce objection to those who seek to exploit them to produce defaults.

The second concern involves the value of active choosing. While it is often sensible for people to rely on defaults, it’s also important for us to learn. The rise of personalisation, and the increasing accuracy of defaults that have been selected for us, have a serious downside: they make it ever more tempting to operate on automatic pilot, rather than to investigate and to choose on our own.

Among other things, that kind of investigation increases our stock of knowledge, and broadens our horizons. We’re in the early stages of the era of personalised defaults. For most of us, it will be a blessing, not a curse. But it should not be taken to obscure the potential risks of choosing not to choose, whether the issue involves retirement plans, health insurance or tips in a taxi.

Cass Sunstein is a professor at Harvard Law School and has worked for the Obama administration. He is the co-author of the highly influential Nudge (2008). His new book is Choosing Not to Choose.
