Please go to sambaker.com for the latest version of this tutorial. If you got the link to this page from someone, please tell him or her about the change. I'm retiring!

University of South Carolina, Arnold School of Public Health, Department of Health Services Policy and Management, Economics Interactive Tutorials August 24, 2007

Economics Interactive Tutorial (Instructions)

Risk Aversion

Copyright © 1999-2001 Samuel L. Baker

The interactive tutorial on risk calculated the expected values of some games and securities.  It made a connection between the expected value of a security and its market price.  This use of the expected value is based on the Law of Large Numbers, which is a statement about what happens when a game is repeated over and over and over again.  If a game can be repeated many times, good luck and bad luck tend to wash out.

What happens, though, if the game isn't going to be repeated over and over?  What if it's only played once, or a few times, in your life?  What if the stakes are very high, so that you are gambling with something that you really do not want to lose? That's when risk aversion comes in.

Being risk averse means being willing to pay money to avoid playing a risky game, even when the expected value of the game is in your favor.

Let's find out how risk averse you are. If you are a student, I'm guessing that $10,000 is a lot of money for you. A gift of $10,000 would make your life noticeably easier. Losing $10,000 would make your life noticeably harder. If you're a well-paid executive or professor (Ha! Ha!), multiply my dollar numbers by ten, or a hundred.

Imagine that you're a contestant on a TV game show. You have just won $10,000. The host offers you a choice: You can quit now and keep the $10,000, or you can play again. If you play again, there is a 0.5 probability that you will win again, and wind up with $20,000. If you play again and lose, you lose your $10,000 and take home nothing. You quickly calculate that the expected value of playing again is $10,000, the same as sticking with the $10,000 you have won so far. Which do you choose?

Assuming that you answered the preceding question in a way that showed risk aversion, we can attempt to measure how risk averse you are. We can do this two ways:

  1. We can raise the payoff for winning on the second play above $20,000, and see if you still choose to stick with the sure $10,000.
  2. Alternatively, we can raise the probability of winning on the second play, and see if that changes your choice.
Either of those alternatives raises the expected value of playing again, so that it's above $10,000.  If we raise the expected value of playing again, will that induce you to take a chance?  Let's see...
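For a quick check of that arithmetic, here is a minimal Python sketch using only the figures from the example (a $22,000 payoff and a 0.55 win probability are the kinds of sweetened offers meant in 1 and 2 above):

    # Expected dollar value of playing again: win the payoff with
    # probability p_win, otherwise take home nothing.
    def expected_value(p_win, payoff):
        return p_win * payoff

    # The original offer: a 0.5 chance of $20,000.
    print(expected_value(0.5, 20_000))   # 10000.0 -- ties the sure $10,000

    # Raising the payoff raises the expected value of playing again...
    print(expected_value(0.5, 22_000))   # 11000.0

    # ...and so does raising the probability of winning.
    print(expected_value(0.55, 20_000))  # 11000.0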

For example, would raising the payoff to $22,000 for winning the second time be enough to get you to take the risk and play?

[Interactive applet: make your choice here before reading on.]
Let's take the next step:  How high would the payoff on the second game have to be to entice you to take the risk, rather than keep the sure $10,000?

The higher your answer to this question, the more risk averse you are.

The Von Neumann-Morgenstern Theory

In my comment on your answer in the applet above, I snuck in the Von Neumann-Morgenstern theory of risk aversion, when I said that the amount you typed in "is worth twice as much to you as $10,000." What could that mean? Conventional economic theory imagines that each of us is always maximizing our "utility." Utility is like a meter in your head. The meter goes up when you acquire more stuff that you want. Economists imagine that utility is a quantity that has units, so it makes sense to say that alternative A has twice as much utility as alternative B. John Von Neumann and Oskar Morgenstern, co-authors of the pioneering The Theory of Games and Economic Behavior in 1944, developed the idea of risk aversion. They explained it by saying that money (income) has a declining marginal utility. Money, in other words, obeys the Law of Diminishing Returns: The more you have, the less utility you gain from getting a set amount more.

In this case, their theory would say that getting $10,000 from winning the first time brought you a certain amount of utility. Let's say it's 1 unit of utility. Winning $10,000 more on top of that would not add 1 more unit to your utility. Instead, it would add X utility units, X being less than 1 and bigger than 0.

That makes your risk averse behavior rational. The utility to you of the $20,000 you might win the second time is not twice the utility of your first $10,000. If the value of your first $10,000 is 1 utility unit, the value of $20,000 is 1 + X, which is less than 2. The expected value of a 0.5 chance of winning $20,000 becomes (1 + X)/2 utility units, which is less than 1. Thinking in terms of utility, rather than dollars, the expected value of playing again is less than the value of keeping your $10,000.

Suppose a $30,000 payoff in the second game would be just enough to make you undecided between playing again and taking the sure $10,000. Von Neumann and Morgenstern would say that, for you, it takes an additional $20,000 to add as much to your utility as the first $10,000 did. In other words, if $10,000 is worth 1 "util" (short for the imaginary utility unit) to you, then $30,000 must be worth 2 utils.  That way, both standing pat and playing again have the same expected value, 1 util.
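In code, the same bookkeeping looks like this (a minimal sketch; the util numbers are just the ones assigned above, with $0 worth 0 utils):

    # Util values from the text: $10,000 is worth 1 util, and for someone
    # indifferent at a $30,000 payoff, $30,000 is worth 2 utils.
    utility = {0: 0.0, 10_000: 1.0, 30_000: 2.0}   # dollars -> utils

    keep = utility[10_000]                           # sure thing: 1 util
    play = 0.5 * utility[30_000] + 0.5 * utility[0]  # gamble: (2 + 0)/2

    print(keep, play)  # 1.0 1.0 -- the two choices are exactly tied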

An applet further down the page draws a graph based on your choice of how risk averse you are (within limits). Click on the amount (up to $100K, meaning $100,000) that would just barely entice you to take a 50% chance of gaining it, while risking a sure $10,000. Before you make that choice, let me explain the graph.

For the graph, I arbitrarily assign a utility value of 0 to having no income. I assign a utility value of 1 to $10,000. Once you pick a dollar amount that would tempt you to play again, I give that amount the utility value of 2.  This follows the Von Neumann-Morgenstern theory that if you are indifferent between a 50% chance of getting $X and a 100% chance of getting $10,000, then the utility of $X must be twice the utility of $10,000.

That gives me three points, ($0, 0), ($10000, 1), and ($X, 2), which I plot on the graph with black dots. Then I draw the red curve, through the points, to try to represent the utility to you of all amounts of money from $0 to $100,000.

The red curve has this formula: Y = A·X^b. Here, X is income, Y is utility, and A is a scaling constant determined by my assignment of a utility of 1 to a $10,000 income. b is the elasticity of utility with respect to changes in income. (We'll explain what this means shortly.) This power function form is arbitrary, but it's as good as any.
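If you want to reproduce the curve-fitting, here is a minimal Python sketch. The x_star below stands for the amount you clicked; $30,000 is just an example answer:

    import math

    # Fit Y = A * X**b through ($10,000, 1 util) and (x_star, 2 utils).
    # The third point, ($0, 0), is satisfied automatically for any b > 0.
    x_star = 30_000   # the amount that would just tempt you (example)

    b = math.log(2) / math.log(x_star / 10_000)   # elasticity of utility
    A = 10_000 ** (-b)                            # scaling constant

    print(round(b, 4))                  # 0.6309 for x_star = 30,000
    print(round(A * 10_000 ** b, 6))    # 1.0 -- check: $10,000 gives 1 util
    print(round(A * x_star ** b, 6))    # 2.0 -- check: x_star gives 2 utils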

If the income amount corresponding to a utility of 2, twice the utility of $10,000, is bigger than $20,000, then the curve bends down -- it's "concave down." Its slope gets smaller as you go to the right, towards higher income amounts. The slope of this curve at any point represents the marginal utility of income.  In other words, the slope represents the addition to utility that results from an additional dollar of income at that point. If the curve is concave down, you have a declining marginal utility of income.

As mentioned, b is the elasticity. The elasticity is the percentage by which one thing changes when something that affects it changes by 1%. In this functional form the exponent, b, is the elasticity of Y with respect to X. When X (income) changes by 1%, Y (utility) changes by about b%. For risk averse people, b is less than 1, so utility is relatively inelastic with respect to changes in income.
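A quick numerical check of both claims, using the b fitted above for the $30,000 example:

    import math

    b = math.log(2) / math.log(3)   # ~0.6309, from the $30,000 example
    A = 10_000 ** (-b)

    def u(x):
        return A * x ** b

    # Elasticity: a 1% rise in income raises utility by about b percent.
    x = 10_000
    print(100 * (u(1.01 * x) - u(x)) / u(x))   # about 0.63, close to b

    # Declining marginal utility: the extra utility from one more dollar
    # shrinks as income grows, because the curve is concave down.
    for income in (10_000, 20_000, 40_000):
        print(income, u(income + 1) - u(income))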

To summarize, Von Neumann and Morgenstern recast your choice from being about different amounts of dollars to being about different amounts of utility.  This lets them preserve the principle that one should always choose the alternative with the highest expected value.  The down side is that they have to bring in an imaginary quantity, "utility," that's impossible to observe directly.

Maurice Allais, a Nobel Prize winner, pointed out that people don't always behave as the Von Neumann-Morgenstern theory says. Here's a version of one of his examples:

Imagine that you may spin one of two Wheels of Fortune, A and B. Each has 100 numbers on it. For each wheel, the probability that any particular number will come up on any one spin is 1/100. You only get to play this game once. The wheels pay off like this:
 
Wheel A: numbers 1 through 10 pay $1 million; numbers 11 through 100 pay nothing.
Wheel B: numbers 1 through 9 pay $5 million; numbers 10 through 100 pay nothing.
Wheel A offers a slightly better chance of getting rich.  Wheel B offers a lot more money if you win. Which would you choose?

Now consider this game, which you'd also get to play only once:
Wheel C: numbers 1 through 100 pay $1 million. You are sure to win $1 million.
Wheel D: numbers 1 through 9 pay $5 million; numbers 10 through 99 pay $1 million; number 100 pays nothing.
Which would you choose now?

I'm leaving some space here so you won't be influenced by what you are "supposed" to pick.
 
 
 
 
 
 
 

The Von Neumann-Morgenstern theory implies that you'd choose the first wheel both times or the second wheel both times. Wheel C is just like Wheel A, except with 90 added chances to win $1 million. Wheel D is just like Wheel B, except with 90 added chances to win $1 million. Standard theory says (assumes, actually) that if you like a certain something, call it A, better than something else, call it B, then you will like A + X better than B + X, regardless of what X is.
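The consistency claim is easy to verify numerically. This Python sketch computes each wheel's expected dollar value and, for one illustrative risk-averse utility function (the exponent 0.63 is my assumption, not anything in Allais), its expected utility:

    # Each wheel is a list of (probability, payoff) pairs.
    wheels = {
        "A": [(0.10, 1_000_000), (0.90, 0)],
        "B": [(0.09, 5_000_000), (0.91, 0)],
        "C": [(1.00, 1_000_000)],
        "D": [(0.09, 5_000_000), (0.90, 1_000_000), (0.01, 0)],
    }

    def expected_value(lottery):
        return sum(p * x for p, x in lottery)

    def expected_utility(lottery, b=0.63):
        # Power utility u(x) = x**b, an illustrative risk-averse choice.
        return sum(p * x ** b for p, x in lottery)

    for name, lottery in wheels.items():
        print(name, expected_value(lottery), round(expected_utility(lottery)))

    # Whatever utility function you plug in, the ranking of A vs. B matches
    # the ranking of C vs. D, because D differs from B exactly as C differs
    # from A: by the same added 0.90 chance of $1 million.

With the 0.63 exponent, this expected-utility maximizer prefers B to A and D to C; a more risk-averse exponent flips both choices together, never just one of them.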

Allais observed that this doesn't always work with risk, because people have a different attitude toward large risks than toward small risks. Think of the small risks you take, just crossing streets when you're walking around town.  Allais doesn't dispute the idea of risk aversion.  He just questions whether people are as "rational" about the risks they face as the Von Neumann-Morgenstern theory assumes. Allais's ideas have relevance to the analysis of the politics of health and safety regulation.

Insurance and Risk Aversion

Enough fun and games. Back to something serious -- insurance.

The theory of risk aversion was developed largely to explain why people buy insurance.

Buying insurance is hard to justify using the theory of expected value. That's because buying insurance is a gamble with a negative expected value, in dollar terms.

Regular indemnity insurance specifies that you get money if certain things happen. If those things don't happen, you don't get any money. So far, that's just like playing roulette.

In return for the chance to get some money, you bet some money by paying the insurance company a "premium." The insurance company keeps your premium if you don't have a claim. So far, that's still just like playing roulette.

Over the large number of people who sign up for insurance, the insurance company pays out less money in claims than it takes in. If it doesn't, it can't stay in business. Your expected payoff is therefore less than your premium, unless you can fool (or legally force) the insurance company into selling you a policy with odds that are in your favor. Buying insurance has a negative expected value. Still just like playing roulette.

Except that insurance is worse than roulette. Roulette's expected value is about -5% of the amount bet. The expected value of a health insurance policy is about -10%, relative to the amount bet -- the insurance premium paid.  This is for the best-regarded health insurance plans, like Blue Cross. It's what they mean when they say that their "medical loss ratio" is about 90%.

Some insurers sell policies to individuals or small groups with medical loss ratios of 60% or less. These policies' expected value is -40% of the amount bet, or worse.
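In code, the comparison is one line of arithmetic (the figures are the loss ratios and the roulette number from the text):

    # Expected value per dollar bet, given the share the "house" pays back.
    def expected_value_per_dollar(payout_share):
        return payout_share - 1.0

    print(expected_value_per_dollar(0.90))  # -0.10: large-group health insurance
    print(expected_value_per_dollar(0.60))  # -0.40: some individual policies
    print(expected_value_per_dollar(0.95))  # -0.05: roulette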

We have this, therefore:

Type of gamble                  Expected value, relative to amount bet
Individual health insurance     You lose 40% of each bet, on average.
Large group health insurance    You lose 10% of each bet, on average.
Roulette                        You lose 5% of each bet, on average.

We conclude that roulette is a much better deal than health insurance.
Right?


 
 
 
 

Going without insurance generally does have a higher expected value than going with insurance, but the risk is much greater without insurance. This is the opposite of roulette, in which you assume a risk by playing. In insurance, you pay the company to assume a risk for you.

A risk averse person will pay more than the expected value of a game that lets him or her avoid a risk.  Suppose you face a 1/100 chance of losing $10,000.  Risk aversion means that you would pay more than $100 (the expected or "actuarially fair" value) for an insurance policy that would reimburse you for that $10,000 loss, if it happens.
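A minimal sketch of that calculation, assuming a starting wealth of $20,000 (my assumption, for illustration) and the power utility fitted earlier:

    # How much would a risk-averse person pay to insure against a
    # 1-in-100 chance of losing $10,000? Assumptions for illustration:
    # starting wealth W = $20,000 and utility u(x) = x**0.63.
    W, loss, p, b = 20_000, 10_000, 0.01, 0.63

    def u(x):
        return x ** b

    # Expected utility without insurance:
    eu_uninsured = (1 - p) * u(W) + p * u(W - loss)

    # The highest acceptable premium m satisfies u(W - m) = eu_uninsured,
    # so W - m = eu_uninsured**(1/b).
    max_premium = W - eu_uninsured ** (1 / b)

    print(p * loss)               # 100.0 -- the actuarially fair premium
    print(round(max_premium, 2))  # about 123 -- more than the fair value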

Suppose there are many people like you, and you'd each be willing to pay $110 to avoid that risk of losing $10,000. You all could join together to form a mutual insurance company, collect $110 from each member, pay $10,000 to anyone who is unlucky and loses, and come out ahead, probably. The more participants in your mutual insurance company, the more likely it is that you'll have money left over for administrative costs and profit.  That's how insurance companies with "Mutual" in their name got started.

How can an insurance company assume all these risks?  Isn't it risk averse, too?

The insurance company can do what an individual can't: Play the game many times and get the benefit of the Law of Large Numbers.  The larger an insurance company is, the better it can do this.
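Here is a minimal Monte Carlo sketch of the Law of Large Numbers at work in the mutual company above. The $110 premium and the 1-in-100 risk of a $10,000 loss are the figures from the example; the pool sizes are my own:

    import random
    import statistics

    random.seed(1)  # reproducible illustration

    def average_cost_per_member(members, claim=10_000, p=0.01):
        """One simulated year: total claims, split evenly over the pool."""
        claims = sum(1 for _ in range(members) if random.random() < p)
        return claims * claim / members

    # 200 simulated years per pool size. The expected cost is $100 per
    # member; the spread around it shrinks as the pool grows, so the
    # $110 premium covers the claims more and more reliably.
    for n in (10, 100, 1_000, 10_000):
        years = [average_cost_per_member(n) for _ in range(200)]
        print(n, round(statistics.mean(years), 1),
              round(statistics.stdev(years), 1))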

Insurance Spreads Risk

Suppose you and four friends each face a 0.01 chance of losing $10,000. If you can trust each other, you can form a risk pool. You agree that if any of the five of you suffer the $10,000 loss, every one of you will give the loser $2,000, one-fifth of $10,000.

(Lloyd's of London operates this way, basically. Participants agree to come up with the money if a loss happens. The person who wants risk protection, such as a ballet dancer who wants compensation if his or her legs are hurt, pays a group of participants to share the risk. For simplicity, our risk pool doesn't have a payment, but in real life it would incur some administrative cost that the members would have to pay.)

This arrangement helps you reduce your risk. You can only lose $10,000 if all five members of the group get unlucky and lose the $10,000. The probability of that is .01 times .01 times .01 times .01 times .01, which equals .0000000001, or one chance in ten billion.

The tradeoff is a bigger chance of losing a smaller amount of money.  The table below shows the probabilities that you face, as a member of the risk pool. (The probabilities in the 4th column are calculated using the binomial distribution. The assumption is that bad events are independent of each other.)
Probabilities of losing different amounts in a 5-person risk pool

Claims: how many people in the pool get unlucky and lose $10,000
Total claim cost to the pool: $10,000 times the number of claims
Cost per pool member: the total claim cost divided by 5, the number of pool members
Running total probability: the probability of this much cost or less

Claims  Total claim cost  Cost per member  Probability    Running total
0 $0 $0 0.9509900499 0.9509900499
1 $10,000 $2,000 0.0480298005 0.9990198504
2 $20,000 $4,000 0.0009702990 0.9999901494
3 $30,000 $6,000 0.0000098010 0.9999999504
4 $40,000 $8,000 0.0000000495 0.9999999999
5 $50,000 $10,000 0.0000000001 1.0000000000

The row for 1 claim tells us that there is about a 5% chance that you will lose $2,000. The cell on the right end of that row says that the chance of losing $2,000 or less is 0.9990198504. Subtract that from 1, and we get that the probability of losing $4,000 or more is 0.0009801496. This is about one chance in a thousand. From the row for 2 claims, we can similarly calculate that the chance of losing $6,000 or more is about 1 in 100,000.
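The table can be reproduced in a few lines of Python with the binomial formula (this sketch also handles the 100-member pool shown further down):

    from math import comb

    # Binomial probabilities for a risk pool: each of `members` people
    # independently has probability p of a $10,000 loss, and the pool
    # splits whatever claims occur.
    def pool_table(members, claim=10_000, p=0.01, max_rows=6):
        running = 0.0
        for k in range(min(members, max_rows) + 1):
            prob = comb(members, k) * p**k * (1 - p)**(members - k)
            running += prob
            print(k, f"${k * claim:,}", f"${k * claim / members:,.0f}",
                  f"{prob:.10f}", f"{running:.10f}")

    pool_table(5)    # matches the 5-person table above
    pool_table(100)  # matches the 100-member table further down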

This risk pool of five is better than being on your own, but you can do even better if you can get more people to join.

Expand the pool to 100 members, and the probability that you will lose as much as $2,000 drops to 0.000000000000000000024.  That's small!  There is less than 1 chance in 10,000 that you will lose $700 or more.
Claims  Total cost to group  Cost per member (100 members)  Probability  Running total probability
0 $0 $0 0.366032 0.366032
1 $10,000 $100 0.369730 0.735762
2 $20,000 $200 0.184865 0.920627
3 $30,000 $300 0.060999 0.981626
4 $40,000 $400 0.014942 0.996568
5 $50,000 $500 0.002898 0.999465
6 $60,000 $600 0.000463 0.999929

The bigger the pool, the smaller your risk of losing a large amount of money.  At the same time, the bigger the pool, the more likely it is that you'll have to come up with a small amount of money, because somebody in the pool will be unlucky and you'll have to pay your share.

On top of your share of other pool members' losses, you'll also have to pay your share of the cost of operating the pool.  If the pool is a profit-making business, you'll also have to pay your share of a profit sufficient to attract entrepreneurs to the insurance business.  Those administrative costs and profits are not included in the tables above.

Most insurance companies, other than Lloyd's of London, do not require you to pay more money if they have unusual losses (other than by raising the premium for the following year, which you can decline to pay if another insurance company can offer you a better deal).  Instead, the company takes the risk.  The advantage of size, though, is similar.  The bigger a company is, the less "reserves" of money it needs, relative to the value of its business, to have any specified probability of being able to pay all its claims from those reserves.

Mutual funds are risk pools, too.  For an individual, buying "junk bonds" -- bonds with a high yield but a significant risk of not paying off -- is risky.  Mutual funds spread the risk, by holding junk bonds, or risky stocks, of a wide variety of companies.  By buying shares in a mutual fund, you get, in effect, small holdings in many companies.  Unless there is some general market collapse, you are safe from financial disaster.  Risky companies benefit, too.   They can sell their stocks and bonds to mutual funds without having to pay as much extra to compensate for the risk as they would if selling to risk averse individuals.
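Here is a toy Python illustration of that risk spreading. The 10% default rate and 12% coupon are invented numbers, not data about any real bonds, and defaults are assumed independent (no general market collapse):

    import random
    import statistics

    random.seed(2)  # reproducible illustration

    def bond_return(p_default=0.10, coupon=0.12):
        """One junk bond: +12% if it pays off, -100% if it defaults."""
        return -1.0 if random.random() < p_default else coupon

    def fund_return(n_bonds):
        """Equal-weighted fund of n independent junk bonds."""
        return statistics.mean(bond_return() for _ in range(n_bonds))

    # 1,000 simulated years per fund size. The average return is about
    # the same (+0.8% expected) however many bonds you hold, but the
    # worst year gets far less disastrous as the fund diversifies.
    for n in (1, 10, 100, 1_000):
        years = [fund_return(n) for _ in range(1_000)]
        print(n, round(statistics.mean(years), 3), round(min(years), 3))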

Government Insurance

What is the nature of the risk you face when you buy insurance?  Economists often regard risks like those of having to pay big medical bills as simply part of life.  But in a country with a stable government and economy, the risks for which we must buy insurance are the risks that the government chooses not to cover.

The risk of illness and death is part of the nature of life.  However, the risks of financial loss, or of needing medical care that one cannot finance oneself, are socially determined.

We can and do buy health insurance for which our expected loss is 10% of what we pay.  If the cost of operating the insurance system could be reduced to 5% of premiums or less, we could afford to buy more insurance, and get more peace of mind.  Or we could just pocket the difference in cost.  Cost and profit shares below about 10% are hard to achieve in private insurance companies, because private insurance must be sold and premiums collected.  This requires a sales force, office staff, and equipment.

In the U.S., Medicare, which serves the elderly nearly universally, has administrative costs below 5% of what it pays out.  Medicare piggy-backs on the Internal Revenue Service's tax collection operation for collecting its premiums.  No one has to be sold on Medicare, and Medicare doesn't need its own bill-collecting system.  That saves money.

Other countries' national health insurance or national health services save even more through the financing simplification that universal coverage makes possible.  In Canada, hospitals do not have to generate bills, and the government insurance does not have to process them, nor do individuals have to deal with the paperwork.  (The main exceptions are visitors from the U.S.)  Instead, hospitals are paid by global operating and capital budgets, essentially a lump sum each year.

This makes a public system more efficient.  The "house" can take a smaller percentage.  People can cover more risks for less cost.  That's part of why Canada's system, which covers everybody there, costs less than the U.S. system.  Competition among private firms can reduce some costs, but not the costs of competition itself.  Our private market can't get to that level of efficiency by itself.  It'll take legislative action.

Thanks for participating! 



The views and opinions expressed in this page are strictly those of the page author. The contents of this page have not been reviewed or approved by the University of South Carolina.
http://hspm.sph.sc.edu/Courses/ECON/Risk/RiskA.html