ECON 4051: Financial Asset Pricing
Lecture 6: Risk, Risk Aversion, and Expected Utility
In the last three lectures (3, 4, and 5), we focused on Approach I. With the weakest assumptions on agents (rational and monotone), we studied two extreme economies: the (complete) Arrow-Debreu economy and the general incomplete security market economy. In the general incomplete security market economy, we derived a very important result:

Fundamental Asset Pricing Theorem: For a given price-payoff pair $(S, X)$, there are no arbitrage opportunities if and only if there is a positive state price vector $\{\psi_\omega\}_{\omega=1}^{\Omega}$ such that
$S_n = \sum_{\omega=1}^{\Omega} \psi_\omega x_{\omega n}, \quad n = 1, 2, \ldots, N.$
For this theorem, we still have some questions. For example,
Question 1: What does $\psi_\omega$ mean? What does it represent?
Question 2: Where does it come from?
Question 3: What role does the objective probability $\pi$ play in determining the prices?
No clear clue yet! That is, the no-free-lunch (no-arbitrage) principle does not tell us anything about these questions. Although the fundamental asset pricing theorem tells us a lot about asset pricing, it is still not enough. To get a concrete idea of what determines the prices of assets, we have to return to the equilibrium approach, since the fundamentals of the economy decide the prices. As we mentioned earlier, the equilibrium approach needs more assumptions than the no-arbitrage approach. That is, to handle the more general security economy (not necessarily the complete market economy), we have to impose more assumptions on agents' preferences than in Approach I.
Approach II: Imposing more assumptions on agents' preferences and trying to identify the factors determining the prices of assets.
More specifically, we make the following assumption on agents:
Assumption: Agents are not only utility maximizers, but also expected utility maximizers with respect to the objective probability $\pi$.
In other words, we require the agent not only to have a utility function representing his preferences, but also to have a particular utility function form: expected utility with respect to the objective probability $\pi$.
For Approach II, we are going to give three lectures:
1. Lecture I introduces risk, risk aversion, and expected utility;
2. Lecture II focuses on portfolio selection; and
3. Lecture III talks about asset pricing.
1 Risk
We first ask a fundamental question:
Question: What is risk?
Let us first look at a simple example.
Example 1 Consider two assets:
Asset a: pays 1 in state 1, 1 in state 2, and 1 in state 3.
Asset b: pays 1 in state 1, 2 in state 2, and 0 in state 3.
Which one is riskless (risk-free)? Asset a. Which asset is risky? Asset b. Do you know why?
And this leads us to define
Definition 1 Say an asset (or a security) x is risky if its payoffs differ in at least two different states; otherwise it is riskless or risk-free.
Given two risky assets $x_1$, $x_2$, could we say one is riskier than the other? In what sense? More generally, could we measure the degree of risk of an asset?
This is a tough question. To date, the answer is very incomplete, and there is no satisfying clue yet! Only in some special cases can we say one asset is riskier than another. However, we can always tell the difference between riskless and risky assets.
So, we move to another topic.
2 Risk Aversion
We begin this section with a question:
Question: Are you risk averse, neutral, or loving? In what sense?
More specifically, how do we know whether a person is risk averse, neutral, or loving?
The key idea is to let the agent compare a risky asset x with a riskless one.
Can you imagine why we let the agent compare with a riskless one rather than another risky one? Because we know for sure which asset (or situation) is risky or riskless, but we may not be able to tell, for two risky assets, which is riskier than the other!
For a given risky asset, which riskless one should be used for the comparison? The best choice is the average payoff $E(x)$ of x, since $E(x)$ is riskless and also relates to the risky asset x.
Next is our key definition.
Definition 2 Say that an agent with preference $\succeq$ is risk
1. averse if for any security x, $E(x) \succeq x$, or $U(E(x)) \ge U(x)$;
2. neutral if for any security x, $E(x) \sim x$, or $U(E(x)) = U(x)$; and
3. loving if for any security x, $E(x) \preceq x$, or $U(E(x)) \le U(x)$.
We have some questions about the definition.
Question 1 Must any person be one of the three possibilities? In other words, must a person be either risk averse, risk neutral, or risk loving?
Not necessarily. It is very possible that a person is risk nothing! Say a person is risk nothing if there exist at least two securities x, y such that
$E(x) \succ x$ but $E(y) \prec y$.
Question 2 To define risk aversion, do we have to assume agents have particular preferences or particular utility functions (for example, expected utility)?
No. We just assume the agents can compare an asset x with its average $E(x)$.
It is noted that when we say an agent is risk averse (neutral, loving, or nothing), we mean he is always risk averse (neutral, loving, or nothing), no matter how poor or rich he is now and will be in the future, and how old or young he is. In reality, an agent may be risk loving when he is poorer and risk averse when he becomes richer. This is why poor countries often have revolutions or wars and their political situations are unstable.
Also, an agent may be risk loving when he is younger and risk averse when he becomes older. This is why younger or poorer people's car insurance premium rates are generally higher than older or richer people's.
But we will ignore this in this course.
How do we say that one agent is more risk averse than another agent?
Definition 3 Say agent A is more risk averse than agent B if for any security x and any riskless asset $\bar{x}$,
$\bar{x} \succeq_B x$ implies $\bar{x} \succeq_A x$.
Please note that here $\bar{x}$ is not necessarily $E(x)$.
Can you imagine why the above definition makes sense? Why does agent B prefer asset $\bar{x}$ to asset x? Please note that here $\bar{x}$ is riskless and x is risky. Thus, one possibility (not the only one) is that agent B is risk averse. The definition then says that agent B's risk aversion implies agent A's risk aversion. That is, agent A is more risk averse than agent B.
We also have some questions about this definition.
Question 1. Suppose that agent A is more risk averse than agent B. Does this imply that agent A must be risk averse?
No, not necessarily. Agent A could be risk anything: risk averse, risk neutral, risk loving, or even risk nothing.
Question 2. Suppose that agent A is more risk averse than agent B. Does this imply that agent B must be risk averse?
No, not necessarily. Agent B could likewise be risk anything.
Question 3. Suppose that agent A is more risk averse than agent B. Is it possible that agent A is risk averse, but agent B is risk neutral or even risk loving?
Yes, it is very possible. Finally, we ask
Question 4. Suppose that agent A is more risk averse than agent B. Is it possible that agent A is risk neutral or even risk loving, but agent B is risk averse?
No, it is impossible.
Suppose an agent is risk averse. Though we assume he will always be risk averse no matter how rich or poor he will be in the future, the following question still makes sense:
Question: Suppose that an agent is risk averse. When he becomes richer and richer, could we say that he becomes more risk averse or less risk averse? Does this question make sense? If yes, how do we describe it?
Yes. The question makes sense, since before becoming richer he is agent A, and after becoming richer he is agent B; then we apply the definition comparing which of two agents is more risk averse.
Question: Now suppose that an agent is risk nothing. When he becomes richer and richer, could we say that he becomes more risk averse or less risk averse? Does this question make sense? If yes, how do we describe it?
Yes. The question still makes sense even though the agent is risk nothing, since before becoming richer he is agent A, and after becoming richer he is agent B; then we apply the same definition.
These questions led us to define:
Definition 4 Say that an agent is decreasing risk averse (DRA), that is, less risk averse, if for any security x and any riskless asset $\bar{x}$,
$\bar{x} + w_1 \succeq x + w_1$ implies $\bar{x} + w_0 \succeq x + w_0$
for any two wealth levels with $w_1 > w_0$.
That is, an agent is decreasing risk averse if he becomes less and less risk averse as he becomes richer and richer.
Similarly,
Definition 5 Say that an agent is increasing risk averse (IRA), that is, more risk averse, if for any security x and any riskless asset $\bar{x}$,
$\bar{x} + w_0 \succeq x + w_0$ implies $\bar{x} + w_1 \succeq x + w_1$
for any two wealth levels with $w_1 > w_0$.
That is, an agent is increasing risk averse if he becomes more and more risk averse as he becomes richer and richer.
Based on the above two questions, the following question seems interesting:
Question 3 After becoming richer and richer, must you become either DRA or IRA?
No, not necessarily.
Next, we will introduce the next important concept: the risk premium, or the price of risk.
Suppose now you own a risky asset. It means you face some risk in the future. How can you avoid taking this risk? There are two methods. Method 1: sell the risky asset and receive the constant price of the risky asset today. Method 2: buy full insurance to avoid any loss, but pay a certain amount of insurance premium today.
Method I: Selling the risky asset and receiving the constant price of the risky asset.
For selling the risky asset, the question is: at least how much are you going to ask?
Suppose the least you are going to ask is CE(x); CE(x) is riskless, a certain amount of the consumption good.
How do we determine CE(x)? Although you want to sell the risky asset, you don't have to. That is, you have two options. One is to sell it, receiving the (minimum) price CE(x); the other is not to sell it, keeping the asset x. Thus, CE(x) must be such that there is no difference between the two options, that is, no difference between selling and keeping the risky asset:
$CE(x) \sim x$.
More specifically,
$U(CE(x)) = U(x)$.
Such CE(x) is called the certainty equivalent of asset x.
But in some cases, you want to or have to hold the risky asset. For example, you may hold a car with which you may have an accident, or a house which may catch fire. You can buy full insurance to avoid taking the risk. Here full insurance means that there is no risk left after buying it; that is, your total payoff in the future is constant after buying the full insurance.
Method II: Buying full insurance to avoid any loss, but paying a certain amount of insurance premium.
For buying the full insurance, the question is: at most how much are you going to pay?
Suppose the most you are going to pay is $\pi(x)$. How do we determine $\pi(x)$? Although you want to buy the full insurance to avoid the risk, you don't have to. That is, you have two options. One is to buy the full insurance, paying the (maximum) price $\pi(x)$; the other is not to buy any insurance, keep the asset x, and take the risk yourself. Thus, $\pi(x)$ must be such that there is no difference between buying the full insurance and not buying it. The question is what constant payoff you should get after buying the full insurance.
Let us look at some examples first.
Suppose you own a house right now. Its value tomorrow has the following two possibilities:
$1 million with probability 0.98;
$0.1 million with probability 0.02.
That is, if there were a fire in the house, you would lose $0.9 million. If you bought full insurance, you would get the constant payoff of $1 million no matter what happened.
Now suppose your house's value tomorrow has the following three possibilities:
$1 million with probability 0.08;
$0.5 million with probability 0.5;
$0.1 million with probability 0.42.
What constant payoff should you get after buying the full insurance? Of course, it could still be $1 million = max{1, 0.5, 0.1}, but then you would have to pay a very high premium. In the insurance industry, the constant payoff is the average payoff:
E(Your house) = 1 x 0.08 + 0.5 x 0.5 + 0.1 x 0.42 = 0.372.
We use the average $E(x)$ as the constant payoff (note this differs from reality; can you imagine where?). That is,
$E(x) - \pi(x) \sim x$, or $U(E(x) - \pi(x)) = U(x)$.
Such $\pi(x)$ is called the risk premium of the asset x.
What is the relationship between the certainty equivalent CE(x) of asset x and the risk premium $\pi(x)$ of asset x?
Since
$CE(x) \sim x \sim E(x) - \pi(x)$,
under some conditions we have
$CE(x) = E(x) - \pi(x)$, or $\pi(x) = E(x) - CE(x)$.
Using the above results, we can prove
Claim 1 The following three statements are equivalent:
1. An agent is risk averse;
2. $\pi(x) \ge 0$; and
3. $E(x) \ge CE(x)$.
After defining CE(x) and $\pi(x)$, the next question is: how do we calculate them? That is, CE(x) = ? and $\pi(x)$ = ?
Generally speaking, it is too hard to calculate them. To get $\pi(x)$ and CE(x), we have to assume agents have particular utility functions, for example, the expected utility function. Next, we move to expected utility and will come back to $\pi(x)$ and CE(x) later.
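To make these definitions concrete before we formally introduce expected utility, here is a minimal numerical sketch (our own example, not from the lecture) that computes CE(x) and $\pi(x)$ for a two-state asset, assuming an expected-utility agent with an assumed utility index u(c) = sqrt(c):

```python
import math

# Two-state asset: payoffs with their probabilities (illustrative numbers).
payoffs, probs = [4.0, 16.0], [0.5, 0.5]

u, u_inv = math.sqrt, (lambda v: v ** 2)   # assumed utility index and its inverse

EU = sum(p * u(x) for x, p in zip(payoffs, probs))   # E[u(x)] = 3
CE = u_inv(EU)                                       # u(CE) = E[u(x)]  =>  CE = 9
Ex = sum(p * x for x, p in zip(payoffs, probs))      # E(x) = 10
pi = Ex - CE                                         # risk premium = 1
print(CE, pi)
```

Here CE(x) = 9 < E(x) = 10, so $\pi(x) = 1 > 0$: this agent is risk averse, consistent with Claim 1.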
3 Expected Utility
For a given consumption bundle $c = (c_0, c_1)$: at time 0 ($t = 0$) you consume $c_0$ for sure; at time 1 ($t = 1$) you consume $c_{11}$ with probability $\pi_1$, $c_{12}$ with probability $\pi_2$, ..., and $c_{1\Omega}$ with probability $\pi_\Omega$.
Please remember that one and only one consumption path $(c_0, c_{1\omega})$ will be realized at time 1.
If state 1 happens, you will consume consumption path $(c_0, c_{11})$ and get utility $u_1(c_0, c_{11})$ at time 1; if state 2 happens, you will consume $(c_0, c_{12})$ and get utility $u_2(c_0, c_{12})$ at time 1; ...; if state $\Omega$ happens, you will consume $(c_0, c_{1\Omega})$ and get utility $u_\Omega(c_0, c_{1\Omega})$ at time 1.
But since only one of the above consumption paths will be realized, what is your expected utility at time 0, instead of at time 1?
At time 0, you know only that:
consumption path $(c_0, c_{11})$ would happen with probability $\pi_1$;
consumption path $(c_0, c_{12})$ would happen with probability $\pi_2$;
...;
consumption path $(c_0, c_{1\Omega})$ would happen with probability $\pi_\Omega$.
One thing you can do is take the average, or expectation, with respect to the objective probability $\pi$:
$U(c_0, c_1) = u_1(c_0, c_{11})\pi_1 + u_2(c_0, c_{12})\pi_2 + \cdots + u_\Omega(c_0, c_{1\Omega})\pi_\Omega = \sum_{\omega=1}^{\Omega} u_\omega(c_0, c_{1\omega})\pi_\omega$, the expected utility.
Please remember that here we have a big U and a little $u_\omega$. The little $u_\omega(c_0, c_{1\omega})$ is the utility at time 1 derived from the particular consumption path $(c_0, c_{1\omega})$ only, and is called the utility index. The big U is the EXPECTED utility (why expected?) at time 0 derived from all the consumption paths
$\{(c_0, c_{11}), (c_0, c_{12}), \ldots, (c_0, c_{1\Omega})\}$
and is called the utility of the consumption bundle $(c_0, c_1)$. Thus, the big U and the little u are totally different, but they have the following relationship:
$U(c_0, c_1) = \sum_{\omega=1}^{\Omega} \pi_\omega u_\omega(c_0, c_{1\omega})$.
State Independent Expected Utility
If the utility index
$u_\omega = u$
does not depend on the state $\omega$, we call it state-independent expected utility. From now on, we assume the utility index u is state independent. In other words, we have the following expected utility function form:
$U(c_0, c_1) = \sum_{\omega=1}^{\Omega} \pi_\omega u(c_0, c_{1\omega})$.
Next are some special cases and extensions.
Type I: Time additive utility index (special case)
Say the utility index $u(c_0, c_{1\omega})$ is time additive, or time separable, if
$u(c_0, c_{1\omega}) = u_0(c_0) + u_1(c_{1\omega})$.
That is, the consumptions at time 0 and time 1 enter independently. In particular,
$u_0(c_0) = u(c_0)$ and $u_1(c_{1\omega}) = \beta u(c_{1\omega})$,
where $0 \le \beta \le 1$ is the time discount factor, or the time preference.
In this case, we have
$U(c_0, c_1) = Eu(c_0, c_1) = \sum_{\omega=1}^{\Omega} \pi_\omega u(c_0, c_{1\omega}) = \sum_{\omega=1}^{\Omega} \pi_\omega (u(c_0) + \beta u(c_{1\omega}))$
$= u(c_0)\sum_{\omega=1}^{\Omega}\pi_\omega + \beta\sum_{\omega=1}^{\Omega}\pi_\omega u(c_{1\omega}) = u(c_0) + \beta\sum_{\omega=1}^{\Omega}\pi_\omega u(c_{1\omega})$.
Example 2 The Lucas tree economy: at time 1 there are two states, state 1 with probability 1/2 and state 2 with probability 1/2.
Consider two consumption bundles as follows:
A: consume 1 at time 0, then 1 in state 1 and 1 in state 2;
B: consume 1 at time 0, then 3 in state 1 and 0 in state 2.
And suppose your utility index is
$u(c_0, c_{1\omega}) = u(c_0) + \beta u(c_{1\omega})$, where $\beta = 1$.
That is, your expected utility function is
$U(c_0, c_{11}, c_{12}) = u(c_0) + \frac{u(c_{11})}{2} + \frac{u(c_{12})}{2}$.
Further, we assume
$u(c) = \sqrt{c}$.
Thus,
$U(A) = u(1) + \frac{u(1)}{2} + \frac{u(1)}{2} = 2u(1) = 2$,
$U(B) = u(1) + \frac{u(3)}{2} + \frac{u(0)}{2} = 1 + \frac{\sqrt{3}}{2} + \frac{0}{2} = \frac{2 + \sqrt{3}}{2} < 2$.
Thus, $A \succ B$.
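Here is a minimal Python sketch of this Lucas-tree comparison (the function name and structure are our own, not from the lecture):

```python
import math

def expected_utility(c0, c1_states, probs, u, beta=1.0):
    """Time-additive expected utility: u(c0) + beta * sum_w pi_w * u(c1w)."""
    return u(c0) + beta * sum(p * u(c) for c, p in zip(c1_states, probs))

u, probs = math.sqrt, [0.5, 0.5]
U_A = expected_utility(1, [1, 1], probs, u)   # = 2
U_B = expected_utility(1, [3, 0], probs, u)   # = (2 + sqrt(3))/2, about 1.87
print(U_A, U_B, "A preferred" if U_A > U_B else "B preferred")
```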
Type II: Habit formation (special case)
Your utility at time 1 depends not only on time 1's consumption, but also on time 0's consumption. For example, your utility index has the following form:
$u(c_0, c_{1\omega}) = u(c_0) + \beta u(c_{1\omega} - kc_0)$, where $k > 0$.
Type III: Catching up with the Joneses (special case)
Your utility index depends not only on your own consumption, but also on other people's consumption levels. For example:
$u(c_0, c_{1\omega}) = u(c_0 - kC_0) + \beta u(c_{1\omega} - kC_{1\omega})$,
where $C_0$ and $C_{1\omega}$ are the public average consumption levels at time 0 and at time 1 in state $\omega$, respectively.
Type IV: Nonlinear in probability (Extension)
The Allais Paradox: Consider four securities as follows.
(Risk-free) Security A: pays $0 million with probability 0, $1 million with probability 1, and $5 million with probability 0.
Security B: pays $0 million with probability 0.01, $1 million with probability 0.89, and $5 million with probability 0.1.
Which one do you prefer?
$A \succ B$.
Why? You pick A rather than B presumably because A is risk-free and will give you $1M for sure. Though the chance of getting $1M in security B is 0.89, which is big, and B also offers a 0.1 chance of getting $5M, it is possible that B gives you nothing. Since you are risk averse, you would therefore pick A rather than B.
Security C: pays $0 million with probability 0.9, $1 million with probability 0, and $5 million with probability 0.1.
Security D: pays $0 million with probability 0.89, $1 million with probability 0.11, and $5 million with probability 0.
Which one do you prefer?
$C \succ D$.
Why? You pick C rather than D presumably because both C and D are risky, with similar chances of paying $0M (0.9 versus 0.89), but C has some chance of paying $5M while D has no chance at all of paying $5M. Therefore, you pick C rather than D.
From the above discussion, these behaviors look very rational and reasonable.
The question is: Are you an expected utility maximizer? In other words, are the above behaviors consistent with maximizing an expected utility?
Suppose yes. Suppose you have a utility index satisfying
$u(0) < u(1M) < u(5M)$.
Then
$U(A) = Eu(A) = u(1M)$,
$U(B) = Eu(B) = 0.01u(0) + 0.89u(1M) + 0.1u(5M)$.
From $A \succ B$, we have $Eu(A) > Eu(B)$. That is,
$Eu(A) > Eu(B) \iff u(1M) > 0.01u(0) + 0.89u(1M) + 0.1u(5M) \iff 0.11u(1M) > 0.01u(0) + 0.1u(5M)$.
From $C \succ D$,
$U(C) = Eu(C) = 0.9u(0) + 0.1u(5M)$,
$U(D) = Eu(D) = 0.89u(0) + 0.11u(1M)$,
and $Eu(C) > Eu(D)$, we have
$Eu(C) > Eu(D) \iff 0.9u(0) + 0.1u(5M) > 0.89u(0) + 0.11u(1M) \iff 0.11u(1M) < 0.01u(0) + 0.1u(5M)$.
That is, we have both
$0.11u(1M) > 0.01u(0) + 0.1u(5M)$ from $A \succ B$, and
$0.11u(1M) < 0.01u(0) + 0.1u(5M)$ from $C \succ D$.
But this is a contradiction! This means your utility is not linear in probability.
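The contradiction can also be checked mechanically. The sketch below (our own construction) searches over many candidate utility values u(0) < u(1M) < u(5M) and confirms that none of them rationalizes both choices under expected utility:

```python
import random

def allais_consistent(u0, u1, u5):
    """Can an expected-utility agent with u(0)=u0 < u(1M)=u1 < u(5M)=u5
    choose both A over B and C over D?"""
    prefers_A = u1 > 0.01 * u0 + 0.89 * u1 + 0.1 * u5         # A > B
    prefers_C = 0.9 * u0 + 0.1 * u5 > 0.89 * u0 + 0.11 * u1   # C > D
    return prefers_A and prefers_C

# Try many random utility indices: none rationalizes both choices.
print(any(allais_consistent(*sorted(random.random() for _ in range(3)))
          for _ in range(100_000)))   # -> False
```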
For the Allais paradox, we have two questions to ask:
Question 4 Can you imagine why we call it a Paradox?
On the one hand, you would think that the EU (expected utility) model makes sense, since it sounds reasonable; on the other hand, you would make the same choices as in the Allais paradox. But the two things are not consistent. So, this is why we call it a paradox!
Question 5 Why do we have the paradox? Where does it come from?
Of course, there are two answers:
Answer 1: The EU model is true, but the people are wrong.
But this explanation does not make sense. If it cannot explain our behavior, why do we need economic theory? We want to use it to explain and predict people's behavior. If an economic theory cannot explain our behavior, no matter how wonderful it looks, it is useless and we do not need such a "beautiful" theory.
Answer 2: The people's behavior in the Allais Paradox is correct, but the EU model is wrong.
Why is the EU model wrong? Where does it make mistakes?
Please remember the EU model has the following form:
$U(c_0, c_1) = Eu(c_0, c_1) = \sum_{\omega=1}^{\Omega} u(c_0, c_{1\omega})\pi_\omega$.
It is linear in the probability $\pi_\omega$. Why linear? No particular reason. The key to the EU model is that the agent's behavior should be based on his probability, since it represents his beliefs about the states of the world, but it need not be linear in it. For example,
$U(c_0, c_1) = \sum_{\omega=1}^{\Omega} u(c_0, c_{1\omega})\, g(\pi_\omega)$,
which uses a transformation of the probability $\pi_\omega$, not the probability itself.
Type V: Ambiguity and ambiguity aversion (Extension)
Next, we will see another famous paradox.
The Ellsberg Paradox: Consider a black box containing 90 balls (30 red; the remaining 60 may be black or yellow in unknown proportion). One ball will be drawn at random from the box. Each security's payoffs depend on the color of the ball drawn. Consider the following four securities:
A = ($100 if Red, $0 if Black, $0 if Yellow);
B = ($0 if Red, $100 if Black, $0 if Yellow).
Which one do you prefer?
$A \succ B$.
Why? You pick A rather than B presumably because the chance of getting $100 in security A is precisely known to be 1/3, but the chance of getting $100 in security B is ambiguous: any number between 0 and 2/3 is possible.
C = ($100 if Red, $0 if Black, $100 if Yellow);
D = ($0 if Red, $100 if Black, $100 if Yellow).
Which one do you prefer?
$D \succ C$.
Why? You prefer D to C because the chance of getting $100 in D is precisely known to be 2/3, whereas the chance of getting $100 in C is ambiguous: any number between 1/3 and 1 is possible.
From the above discussion, these behaviors look very rational and reasonable.
The question is: Are you an expected utility maximizer? Suppose yes. Here we assume
$u(100) > u(0) > 0$.
From
$Eu(A) = u(100)P(R) + u(0)P(B \cup Y) = \frac{1}{3}u(100) + \frac{2}{3}u(0)$,
$Eu(B) = u(100)P(B) + u(0)P(R \cup Y)$,
and $A \succ B$, we have
$Eu(A) > Eu(B) \iff \frac{1}{3}u(100) + \frac{2}{3}u(0) > u(100)P(B) + u(0)P(R \cup Y) \iff u(100)\left[\frac{1}{3} - P(B)\right] > u(0)\left[P(R \cup Y) - \frac{2}{3}\right]$.
From $D \succ C$, we have
$Eu(C) = u(100)P(R \cup Y) + u(0)P(B)$,
$Eu(D) = \frac{2}{3}u(100) + \frac{1}{3}u(0)$,
and
$\frac{2}{3}u(100) + \frac{1}{3}u(0) > u(100)P(R \cup Y) + u(0)P(B) \iff u(100)\left[\frac{2}{3} - P(R \cup Y)\right] > u(0)\left[P(B) - \frac{1}{3}\right]$.
Since
$\frac{2}{3} - P(R \cup Y) = \frac{2}{3} - (1 - P(B)) = -\left[\frac{1}{3} - P(B)\right]$ and
$P(B) - \frac{1}{3} = (1 - P(R \cup Y)) - \frac{1}{3} = -\left[P(R \cup Y) - \frac{2}{3}\right]$,
substituting these into the above gives
$u(100)\left[\frac{1}{3} - P(B)\right] < u(0)\left[P(R \cup Y) - \frac{2}{3}\right]$.
Thus, we have both
$u(100)\left[\frac{1}{3} - P(B)\right] > u(0)\left[P(R \cup Y) - \frac{2}{3}\right]$ from $A \succ B$, and
$u(100)\left[\frac{1}{3} - P(B)\right] < u(0)\left[P(R \cup Y) - \frac{2}{3}\right]$ from $D \succ C$.
But this is a contradiction!
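The same contradiction can be seen numerically: no single subjective probability P(B) in [0, 2/3] makes an expected-utility agent choose both A over B and D over C. A small sketch (our own, with illustrative utility values):

```python
u0, u100 = 0.1, 1.0   # illustrative values with u(100) > u(0) > 0

def rationalizes(pB):
    """Would a subjective P(Black) = pB justify both A > B and D > C?"""
    pRY = 1.0 - pB                    # P(Red or Yellow)
    EA = u100 / 3 + 2 * u0 / 3        # A pays $100 iff Red, P(Red) = 1/3
    EB = u100 * pB + u0 * pRY
    EC = u100 * pRY + u0 * pB         # C pays $100 iff Red or Yellow
    ED = 2 * u100 / 3 + u0 / 3        # D pays $100 iff Black or Yellow
    return EA > EB and ED > EC

# P(Black) can be anything in [0, 2/3]; no value works:
print(any(rationalizes(k / 1000 * 2 / 3) for k in range(1001)))   # -> False
```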
Similarly, for the Ellsberg paradox, we also have two questions to ask:
Question 6 Can you imagine why we call it a Paradox?
On the one hand, you would think that the EU (expected utility) model makes sense, since it sounds reasonable; on the other hand, you would make the same choices as in the Ellsberg paradox. But the two things are not consistent. So, this is why we call it a paradox!
Question 7 Why do we have the paradox? Where does it come from?
Of course, there are two answers:
Answer 1: The EU model is true, but the people are wrong.
But this explanation does not make sense. If it cannot explain our behavior, why do we need economic theory? We want to use it to explain and predict people's behavior. If an economic theory cannot explain our behavior, no matter how wonderful it looks, it is useless and we do not need such a "beautiful" theory.
Answer 2: The people's behavior in the Ellsberg Paradox is correct, but the EU model is wrong.
Why is the EU model wrong? Where does it make mistakes? In this example, there are 8 events in total:
$\{\{B\}, \{R\}, \{Y\}, \{B,R\}, \{B,Y\}, \{R,Y\}, \varnothing, \{B,R,Y\}\}$.
To which events can you attach probabilities? Only to
$\{\{R\}, \{B,Y\}, \varnothing, \{B,R,Y\}\}$,
since
$P(\{R\}) = \frac{1}{3}$, $P(\{B,Y\}) = \frac{2}{3}$, $P(\varnothing) = 0$, and $P(\{B,R,Y\}) = 1$.
Thus, you can attach probabilities only to some events, not to all the relevant events. Moreover, though we can attach a probability to the event $\{B,Y\}$, we cannot attach probabilities to the events $\{B\}$ and $\{Y\}$. However, the EU model requires us to assign probabilities to all the events, no matter what information we have. This is why we have the Ellsberg paradox: we cannot assign probabilities to some events if we have no detailed objective information!
But the existing probability theory is built on the assumption that we can assign a probability to every event. The Ellsberg paradox clearly tells us this is not true. We do need a new probability theory.
More importantly, the Ellsberg paradox clearly shows the difference between two terms we use very often in daily life:
Risk and Ambiguity!
Roughly, say that there is ambiguity in a model if the agent cannot attach probabilities to some events.
This example tells us that, due to ambiguity and ambiguity aversion, the agent may not be an expected utility maximizer.
Risk means that we don't know which state will happen, but we know that each state $\omega$ would happen with probability $\pi_\omega$. In other words, the probabilities are known (one unknown here).
Ambiguity means that not only do we not know which state will happen, but we also don't know the probability $\pi_\omega$ that state $\omega$ will happen, at least for some events. In other words, the probabilities are also unknown (two unknowns here).
Ambiguity and ambiguity aversion are hot research topics now.
For expected utility, we have to stop here and go back to risk and risk aversion.
Now we assume
Assumption: All agents are expected utility maximizers.
That is, the agent's utility function has the following form:
$U(x) = Eu(x) = \sum_{\omega=1}^{\Omega} u(x_\omega)\pi_\omega$.
With this extra assumption, how do we identify whether an agent is risk averse, neutral, or loving?
First, we recall some mathematics on concave and convex functions.
Definition 6 (Concave function) Say a function u(x) defined on an interval D is concave if for any $x_1, x_2 \in D$ and any $\alpha \in [0, 1]$,
$u(\alpha x_1 + (1 - \alpha)x_2) \ge \alpha u(x_1) + (1 - \alpha)u(x_2)$.
In other words, u is concave if, for any two points on the curve, the whole segment connecting them lies below the curve. (A picture illustrating a concave function appeared here.)
Definition 7 (Convex function) Say a function u(x) defined on an interval D is convex if for any $x_1, x_2 \in D$ and any $\alpha \in [0, 1]$,
$u(\alpha x_1 + (1 - \alpha)x_2) \le \alpha u(x_1) + (1 - \alpha)u(x_2)$.
That is, u is convex if, for any two points on the curve, the whole segment connecting them lies above the curve. (A picture illustrating a convex function appeared here.)
An interesting question is: must any function f be either convex or concave?
No, not necessarily. It could be one of the following four cases:
1. concave;
2. convex;
3. "conboth", in the sense of being both concave and convex; and
4. "connothing", in the sense of being neither concave nor convex.
How do we check whether a function u is concave or convex? There are two methods:
Method I: Use the definitions.
Please remember that we can always use the definition to check something, since it is the starting point. But sometimes using the definition is really not easy, so we look for something easier to use.
Method II: If the function is also twice differentiable, we can use its second-order derivative to check whether it is concave or convex.
For the second method, we have the following useful results:
Theorem 2 Suppose function u(x) is twice differentiable. Then:
1. u is concave if $u''(x) \le 0$ for all x in the domain;
2. u is not concave if there exists at least one point $x^*$ in the domain such that $u''(x^*) > 0$;
3. u is convex if $u''(x) \ge 0$ for all x in the domain;
4. u is not convex if there exists at least one point $x^*$ in the domain such that $u''(x^*) < 0$;
5. u is both concave and convex if $u''(x) = 0$ for all x in the domain; and
6. u is neither concave nor convex if there are two points $x_1$ and $x_2$ such that $u''(x_1) > 0$ and $u''(x_2) < 0$.
Next, we will use the above theorem to check some examples.
Example 3 $u(x) = c$ (a constant), $x \in \mathbb{R}^1$.
From $u'(x) = 0$ and $u''(x) = 0$, u is both concave and convex.
Example 4 $u(x) = ax + b$, $x \in \mathbb{R}^1$.
From $u'(x) = a$ and $u''(x) = 0$, u is both concave and convex.
Example 5 $u(x) = ax^2 + bx + c$, $x \in \mathbb{R}^1$.
From $u'(x) = 2ax + b$ and $u''(x) = 2a$, u is concave if $a \le 0$ and convex if $a \ge 0$.
Example 6 $u(x) = x^3$, $x \in \mathbb{R}^1$.
From $u'(x) = 3x^2$ and $u''(x) = 6x$, there exist two points $x_1$ and $x_2$ such that $u''(x_1) > 0$ and $u''(x_2) < 0$; for example, $x_1 = 1$ and $x_2 = -1$ give $u''(1) = 6 > 0$ and $u''(-1) = -6 < 0$. Thus, u is neither concave nor convex.
Example 7 Negative exponential function:
$u(x) = -e^{-ax}$ $(a > 0)$, $x \in \mathbb{R}^1$.
From $u'(x) = ae^{-ax}$ and $u''(x) = -a^2 e^{-ax} < 0$, u is concave.
Example 8 Power function:
$u(x) = \frac{1}{1-b}x^{1-b}$ $(x > 0$, $b \ge 0$ and $b \ne 1)$.
From $u'(x) = x^{-b}$ and $u''(x) = -bx^{-b-1} < 0$, u is concave.
Example 9 Logarithmic function:
$u(x) = \ln x$ $(x > 0)$.
From $u'(x) = \frac{1}{x}$ and $u''(x) = -\frac{1}{x^2} < 0$, u is concave.
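Where u is twice differentiable, Theorem 2 can also be checked numerically with a central-difference estimate of u''. A small sketch (our own construction, with illustrative grids):

```python
import math

def second_derivative(u, x, h=1e-5):
    """Central-difference estimate of u''(x)."""
    return (u(x + h) - 2 * u(x) + u(x - h)) / h ** 2

tests = [("-exp(-2x)", lambda x: -math.exp(-2 * x), [0.5, 1.0, 2.0, 5.0]),
         ("ln x",      math.log,                    [0.5, 1.0, 2.0, 5.0]),
         ("x^3",       lambda x: x ** 3,            [-2.0, -1.0, 1.0, 2.0])]

for name, u, grid in tests:
    signs = {1 if second_derivative(u, x) > 0 else -1 for x in grid}
    print(name, "concave" if signs == {-1} else
                "convex"  if signs == {1} else "neither")
```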
Why do we need to study concave and convex functions in financial economics? Here is the clue.
Theorem 3 Suppose that an agent's preference can be expressed by an expected utility
$U(x) = Eu(x) = \sum_{\omega=1}^{\Omega} u(x_\omega)\pi_\omega$.
Then, the agent is risk
1. averse iff the utility index u is concave;
2. loving iff the utility index u is convex;
3. neutral iff the utility index u is both concave and convex (linear); and
4. nothing iff the utility index u is neither concave nor convex.
Next, we are going to find the risk premium for an agent who is an expected utility maximizer, for two types of risk:
absolute risk and relative risk.
Definition 8 Say the risk contained in a risky asset x is
1. absolute (additive) if
$x = \text{constant} + \varepsilon$
with $E(\varepsilon) = 0$ and $var(\varepsilon) > 0$;
2. relative (percentage) if
$x = \text{constant} \cdot (1 + \varepsilon)$
with $E(\varepsilon) = 0$ and $var(\varepsilon) > 0$.
Next, we will talk about them one by one.
Absolute risk: Consider a risky asset
$x = E(x) + (x - E(x)) = \bar{x} + \varepsilon$,
where $\bar{x}$ is a constant and $\varepsilon$ is a random variable with
$E(\varepsilon) = 0$ and $var(\varepsilon) = \sigma^2 > 0$, so that $E(x) = \bar{x}$ and $var(x) = var(\varepsilon) = \sigma^2$.
Such risk is called absolute risk or additive risk. (Can you imagine why?)
The question is: What is the risk premium $\pi(x) = \pi(\bar{x} + \varepsilon)$ for an agent who has expected utility $Eu(\bar{x} + \varepsilon)$?
By the definition of the risk premium $\pi(x)$,
$E(x) - \pi(x) \sim x$, that is, $E(\bar{x} + \varepsilon) - \pi(\bar{x} + \varepsilon) \sim \bar{x} + \varepsilon$,
so
$Eu(\bar{x} + \varepsilon) = u(E(\bar{x} + \varepsilon) - \pi(\bar{x} + \varepsilon)) = u(\bar{x} - \pi)$.
By using Taylor's expansion around the point $\bar{x}$, we have
$u(\bar{x} - \pi) = u(\bar{x}) - \pi u'(\bar{x}) + o(\pi)$,
$u(\bar{x} + \varepsilon_\omega) = u(\bar{x}) + u'(\bar{x})\varepsilon_\omega + \frac{1}{2}u''(\bar{x})\varepsilon_\omega^2 + o(\varepsilon_\omega^2)$.
It is noted that here we assume $\varepsilon_\omega$ is small for all $\omega = 1, 2, \ldots, \Omega$; that is, the risk is small. Then
$Eu(\bar{x} + \varepsilon) = \sum_{\omega=1}^{\Omega} u(\bar{x} + \varepsilon_\omega)\pi_\omega = \sum_{\omega=1}^{\Omega}\left[u(\bar{x}) + u'(\bar{x})\varepsilon_\omega + \frac{1}{2}u''(\bar{x})\varepsilon_\omega^2\right]\pi_\omega$
$= u(\bar{x})\sum_{\omega=1}^{\Omega}\pi_\omega + u'(\bar{x})\sum_{\omega=1}^{\Omega}\varepsilon_\omega\pi_\omega + \frac{1}{2}u''(\bar{x})\sum_{\omega=1}^{\Omega}\varepsilon_\omega^2\pi_\omega$
$= u(\bar{x}) + u'(\bar{x})E\varepsilon + \frac{1}{2}u''(\bar{x})E\varepsilon^2 = u(\bar{x}) + \frac{1}{2}u''(\bar{x})\sigma^2$.
Substituting the above into
$Eu(\bar{x} + \varepsilon) = u(\bar{x} - \pi)$,
we have
$u(\bar{x}) + \frac{1}{2}u''(\bar{x})\sigma^2 = u(\bar{x}) - \pi u'(\bar{x})$,
so
$\pi(\bar{x} + \varepsilon) = -\frac{1}{2}\frac{u''(\bar{x})}{u'(\bar{x})}\sigma^2 = -\frac{1}{2}\frac{u''(\bar{x})}{u'(\bar{x})}\,var(\bar{x} + \varepsilon)$.
That is, the risk premium is proportional to the variance of the security $\bar{x} + \varepsilon$.
The risk premium has been decomposed into two parts:
$-\frac{u''(\bar{x})}{u'(\bar{x})}$ and $\frac{1}{2}var(\bar{x} + \varepsilon)$.
Denote by
$A(\bar{x}) = -\frac{u''(\bar{x})}{u'(\bar{x})}$
the Arrow-Pratt absolute risk aversion coefficient for an agent with utility index u.
It is noted that:
1. $A(\bar{x})$ is subjective, since it depends only on the agent's utility index (the little u);
2. $var(\bar{x} + \varepsilon)$ is objective, since it depends only on the given asset $\bar{x} + \varepsilon$; it roughly represents the risk of the security $\bar{x} + \varepsilon$.
We can use the above decomposition
$\pi(\bar{x} + \varepsilon) = -\frac{1}{2}\frac{u''(\bar{x})}{u'(\bar{x})}\,var(\bar{x} + \varepsilon)$
to explain how much you should pay for your car insurance premium $\pi(\bar{x} + \varepsilon)$:
1. Your attitude toward risk is described by the subjective part $-\frac{u''(\bar{x})}{u'(\bar{x})}$: risk loving, neutral, or averse; how old you are; your education background.
2. The objective risk is described by the variance $var(\bar{x} + \varepsilon)$: for example, where do you live, a big city or a small city? Is your community a rich region or a poor region? What car do you own, an expensive one or a cheaper one?
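To see how good the approximation $\pi \approx \frac{1}{2}A(\bar{x})\,var$ is, here is a small sketch (our own example, assuming log utility) that solves the exact equation $u(\bar{x} - \pi) = Eu(\bar{x} + \varepsilon)$ by bisection and compares it with the Arrow-Pratt formula:

```python
import math

def exact_premium(u, xbar, eps, probs):
    """Solve u(xbar - pi) = E[u(xbar + eps)] for pi by bisection."""
    target = sum(p * u(xbar + e) for e, p in zip(eps, probs))
    lo, hi = 0.0, xbar - 1e-9      # u increasing => u(xbar - pi) falls in pi
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if u(xbar - mid) < target else (mid, hi)
    return (lo + hi) / 2

u = math.log                        # log utility: A(x) = -u''/u' = 1/x
xbar, eps, probs = 10.0, [1.0, -1.0], [0.5, 0.5]   # small additive risk
var = sum(p * e ** 2 for e, p in zip(eps, probs))
print(exact_premium(u, xbar, eps, probs))   # exact premium, about 0.0501
print(0.5 * (1 / xbar) * var)               # Arrow-Pratt approximation, 0.05
```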
Relative risk: Consider an asset x with
$x = \bar{x}(1 + \varepsilon)$,
where $\bar{x}$ is constant and $\varepsilon$ is a random variable with
$E(\varepsilon) = 0$ and $var(\varepsilon) = \sigma^2 > 0$, so that $E(x) = \bar{x}$ and $var(x) = \bar{x}^2 var(\varepsilon) = \bar{x}^2\sigma^2$.
Such risk is called relative risk or proportional risk. (Can you imagine why?)
Similarly, we ask: What is the risk premium $\pi(\bar{x}(1 + \varepsilon))$ for an agent who has expected utility $Eu(\bar{x}(1 + \varepsilon))$?
For simplicity, for relative risk we measure the premium in proportional terms: writing $\pi = \pi(\bar{x}(1 + \varepsilon))$ for the proportional premium, the certainty equivalent is $E(x) - \bar{x}\pi = \bar{x}(1 - \pi)$, so that
$\bar{x}(1 + \varepsilon) \sim \bar{x}(1 - \pi)$.
By definition, we have
$Eu(\bar{x}(1 + \varepsilon)) = u(E[\bar{x}(1 + \varepsilon)] - \bar{x}\pi) = u(\bar{x}(1 - \pi))$.
By using Taylor's expansion around the point $\bar{x}$, we have
$u(\bar{x}(1 - \pi)) = u(\bar{x}) - u'(\bar{x})\bar{x}\pi + o(\bar{x}\pi)$,
$u(\bar{x} + \bar{x}\varepsilon_\omega) = u(\bar{x}) + u'(\bar{x})\bar{x}\varepsilon_\omega + \frac{1}{2}u''(\bar{x})(\bar{x}\varepsilon_\omega)^2 + o((\bar{x}\varepsilon_\omega)^2)$.
Moreover,
$Eu(\bar{x}(1 + \varepsilon)) = \sum_{\omega=1}^{\Omega} u(\bar{x} + \bar{x}\varepsilon_\omega)\pi_\omega = \sum_{\omega=1}^{\Omega}\left[u(\bar{x}) + u'(\bar{x})\bar{x}\varepsilon_\omega + \frac{1}{2}u''(\bar{x})(\bar{x}\varepsilon_\omega)^2\right]\pi_\omega$
$= u(\bar{x})\sum_{\omega=1}^{\Omega}\pi_\omega + u'(\bar{x})\bar{x}\sum_{\omega=1}^{\Omega}\varepsilon_\omega\pi_\omega + \frac{1}{2}\bar{x}^2 u''(\bar{x})\sum_{\omega=1}^{\Omega}\varepsilon_\omega^2\pi_\omega$
$= u(\bar{x}) + u'(\bar{x})\bar{x}E\varepsilon + \frac{1}{2}\bar{x}^2 u''(\bar{x})E\varepsilon^2 = u(\bar{x}) + \frac{1}{2}\bar{x}^2 u''(\bar{x})\sigma^2$.
Substituting the above into
$Eu(\bar{x}(1 + \varepsilon)) = u(\bar{x}(1 - \pi))$,
we have
$u(\bar{x}) + \frac{1}{2}\bar{x}^2 u''(\bar{x})\sigma^2 = u(\bar{x}) - u'(\bar{x})\bar{x}\pi$,
so
$\pi(\bar{x}(1 + \varepsilon)) = -\frac{1}{2}\frac{\bar{x}^2 u''(\bar{x})}{\bar{x}u'(\bar{x})}\sigma^2 = -\frac{1}{2}\frac{\bar{x}u''(\bar{x})}{u'(\bar{x})}\,var(\varepsilon)$.
Denote by
$R(\bar{x}) = -\frac{\bar{x}u''(\bar{x})}{u'(\bar{x})}$
the Arrow-Pratt relative risk aversion coefficient for an agent with utility index u.
Similarly:
1. $R(\bar{x})$ is subjective, since it depends only on the agent's utility index (the little u);
2. $var(\varepsilon)$ is objective, since it depends only on the given asset $\bar{x}(1 + \varepsilon)$; it roughly represents the risk of the security.
Next are some important and useful definitions.
Definition 9 Say an agent is
1. increasing absolute risk averse (IARA) if he is more and more risk averse toward absolute risk as he becomes richer and richer;
2. decreasing absolute risk averse (DARA) if he is less and less risk averse toward absolute risk as he becomes richer and richer;
3. constant absolute risk averse (CARA) if he is equally risk averse toward absolute risk as he becomes richer and richer;
4. increasing relative risk averse (IRRA) if he is more and more risk averse toward relative risk as he becomes richer and richer;
5. decreasing relative risk averse (DRRA) if he is less and less risk averse toward relative risk as he becomes richer and richer; and
6. constant relative risk averse (CRRA) if he is equally risk averse toward relative risk as he becomes richer and richer.
Moreover, if an agent is an expected utility maximizer, we have
Theorem 4 An agent is
1. IARA iff the Arrow-Pratt absolute risk aversion coefficient $A(x)$ is increasing;
2. DARA iff the Arrow-Pratt absolute risk aversion coefficient $A(x)$ is decreasing;
3. CARA iff the Arrow-Pratt absolute risk aversion coefficient $A(x)$ is constant;
4. IRRA iff the Arrow-Pratt relative risk aversion coefficient $R(x)$ is increasing;
5. DRRA iff the Arrow-Pratt relative risk aversion coefficient $R(x)$ is decreasing; and
6. CRRA iff the Arrow-Pratt relative risk aversion coefficient $R(x)$ is constant.
Some widely used examples:
1. Linear utility function: $u(x) = ax + b$.
From $u'(x) = a$ and $u''(x) = 0$:
$A(x) = R(x) = 0$ (CARA and CRRA),
where CARA (IARA, DARA) means constant (increasing, decreasing) absolute risk aversion and CRRA (IRRA, DRRA) means constant (increasing, decreasing) relative risk aversion.
2. Negative exponential utility function: $u(x) = -e^{-ax}$ $(a > 0)$.
From $u'(x) = ae^{-ax}$ and $u''(x) = -a^2 e^{-ax}$:
$A(x) = -\frac{u''(x)}{u'(x)} = a$ (CARA),
$R(x) = -\frac{xu''(x)}{u'(x)} = ax$ (IRRA).
3. Quadratic utility function: $u(x) = x - \frac{b}{2}x^2$.
From $u'(x) = 1 - bx$ and $u''(x) = -b$:
$A(x) = -\frac{u''(x)}{u'(x)} = \frac{b}{1 - bx}$ (IARA),
$R(x) = -\frac{xu''(x)}{u'(x)} = \frac{bx}{1 - bx}$ (IRRA).
4. Power utility function: $u(x) = \frac{1}{1-b}x^{1-b}$ $(x > 0$, $b \ge 0$ and $b \ne 1)$.
From $u'(x) = x^{-b}$ and $u''(x) = -bx^{-b-1}$:
$A(x) = \frac{bx^{-b-1}}{x^{-b}} = \frac{b}{x}$ (DARA),
$R(x) = \frac{x \cdot bx^{-b-1}}{x^{-b}} = b$ (CRRA).
5. Logarithmic utility function: $u(x) = \ln x$ $(x > 0)$.
From $u'(x) = \frac{1}{x}$ and $u''(x) = -\frac{1}{x^2}$:
$A(x) = -\frac{u''(x)}{u'(x)} = \frac{1}{x}$ (DARA),
$R(x) = -\frac{xu''(x)}{u'(x)} = 1$ (CRRA).
Based on common sense, most normal agents should be DARA and IRRA or CRRA; this is why the last two types of utility index, the power utility function and the logarithmic utility function, are commonly used in finance.
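These five computations of A(x) and R(x) are easy to reproduce symbolically. A small sketch assuming the sympy library is available (the helper name arrow_pratt is ours):

```python
import sympy as sp

x, a, b = sp.symbols('x a b', positive=True)

def arrow_pratt(u):
    """Return (A, R) = (-u''/u', -x*u''/u') for a utility expression u(x)."""
    A = sp.simplify(-sp.diff(u, x, 2) / sp.diff(u, x))
    return A, sp.simplify(x * A)

for name, u in [("linear",    a * x + b),
                ("neg. exp.", -sp.exp(-a * x)),
                ("quadratic", x - b / 2 * x ** 2),   # valid where 1 - b*x > 0
                ("power",     x ** (1 - b) / (1 - b)),
                ("log",       sp.log(x))]:
    print(name, arrow_pratt(u))
```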
4 First Order Risk Aversion
So far, we have assumed that the utility from $1000 is always
$u(\$1000)$,
no matter whether the 1000 dollars is something you have gained or something you have lost. Is that true? Of course not. Gaining $1000 more just makes your life a little better, but losing $1000 makes you very unhappy. That is, although it is $1000 in both cases, your feelings are much different. How do we describe the differences between gaining and losing?
Behavioral economics!
Mathematically speaking, in all the examples we have discussed, the utility functions are differentiable. Then, for small risk, we have derived
$\pi(\bar{x} + \varepsilon) = -\frac{1}{2}\frac{u''(\bar{x})}{u'(\bar{x})}\,var(\bar{x} + \varepsilon)$
for absolute risk and
$\pi(\bar{x}(1 + \varepsilon)) = -\frac{1}{2}\frac{\bar{x}u''(\bar{x})}{u'(\bar{x})}\,var(\varepsilon)$
for relative risk. That is, the risk premium is proportional to the variance of the risk.
A natural question is:
Question 8 Is the above still true if the utility index u is not differentiable?
Under the vNM expected utility theory, the utility function is defined over actual payoff outcomes. Kahneman and Tversky (1979) and Tversky and Kahneman (1992) propose formulations whereby preferences are defined not over actual payoffs, but rather over gains and losses relative to some benchmark, so that losses are given the greater utility weight. The benchmark can be thought of as either a minimally acceptable payment or, under the proper transformations, a cutoff rate of return; it can change through time, reflecting prior experience. Their development is called prospect theory.
A simple illustration of this sort of representation is as follows. Let w denote the benchmark payoff, and define the investor's utility function u(x) by
$u(x) = a(x - w)$ if $x \ge w$, and $u(x) = b(x - w)$ if $x < w$,
where
$b > 1 > a > 0$,
and with certain endowment $w > 0$.
It is noted that b captures the extent of the investor's aversion to "losses" relative to the benchmark w, and a captures the extent of the investor's attitude toward "gains" relative to the benchmark w. The utility index u is (weakly) concave for $x \ge w$ and (weakly) convex for $x \le w$. Here $b > 1 > a > 0$ represents loss aversion.
Obviously, u is not differentiable at the point w, since the slope jumps from b to a there (though the function itself is continuous at w). However, the two one-sided derivatives exist:
$u'_+(w) = \lim_{x \to w^+}\frac{u(x) - u(w)}{x - w} = \lim_{x \to w^+}\frac{a(x - w) - 0}{x - w} = a$
and
$u'_-(w) = \lim_{x \to w^-}\frac{u(x) - u(w)}{x - w} = \lim_{x \to w^-}\frac{b(x - w) - 0}{x - w} = b$.
Also suppose the agent faces the following small absolute risk:
$\varepsilon = \delta$ with prob. $\frac{1}{2}$, and $\varepsilon = -\delta$ with prob. $\frac{1}{2}$,
where $\delta > 0$ is a small number.
It is noted that
$E\varepsilon = 0$ and $var(\varepsilon) = E(\varepsilon - E\varepsilon)^2 = E(\varepsilon)^2 = \delta^2$.
Equivalently, the agent has a risky asset
$w + \varepsilon = w + \delta$ with prob. $\frac{1}{2}$, and $w + \varepsilon = w - \delta$ with prob. $\frac{1}{2}$.
If he wants to buy insurance to avoid taking the risk, what is the maximum risk premium he would be willing to pay? That is, $\pi(w + \varepsilon) = ?$
By the definition, $\pi(w + \varepsilon)$ is the solution to the following equation:
$E(w + \varepsilon) - \pi(w + \varepsilon) \sim w + \varepsilon$, i.e.,
$Eu(w + \varepsilon) = u(E(w + \varepsilon) - \pi(w + \varepsilon))$.
Next we calculate each side in turn:
$Eu(w + \varepsilon) = \frac{u(w + \delta)}{2} + \frac{u(w - \delta)}{2} = \frac{a(w + \delta - w)}{2} + \frac{b(w - \delta - w)}{2} = \frac{a\delta - b\delta}{2} = \frac{a - b}{2}\delta$,
and
$E(w + \varepsilon) = \frac{w + \delta}{2} + \frac{w - \delta}{2} = w$, so
$E(w + \varepsilon) - \pi(w + \varepsilon) = w - \pi(w + \varepsilon) < w$,
since $\pi(w + \varepsilon) > 0$.
Accordingly,
$u(E(w + \varepsilon) - \pi(w + \varepsilon)) = u(w - \pi(w + \varepsilon)) = b(w - \pi(w + \varepsilon) - w) = -b\,\pi(w + \varepsilon)$.
Finally, $Eu(w + \varepsilon) = u(E(w + \varepsilon) - \pi(w + \varepsilon))$ gives
$\frac{a - b}{2}\delta = -b\,\pi(w + \varepsilon)$.
Thus,
$\pi(w + \varepsilon) = \frac{1}{2}\frac{b - a}{b}\delta = \frac{1}{2}\frac{b - a}{b}\sqrt{var(w + \varepsilon)}$,
which is much different from
$\pi(w + \varepsilon) = -\frac{1}{2}\frac{u''(w)}{u'(w)}\,var(w + \varepsilon) = -\frac{1}{2}\frac{u''(w)}{u'(w)}\delta^2$
when the utility index u is differentiable at w.
In particular, when $\delta$ is a small number, $\delta^2$ is much smaller than $\delta$. Thus, we can say: the agent is second-order risk averse when his utility index u is twice differentiable, since the risk premium $\pi(w + \varepsilon)$ he would pay is proportional to the variance of the (absolute) risk; in this case, the risk premium is not big. However, the agent becomes first-order risk averse when his utility index u is not differentiable, since the risk premium $\pi(w + \varepsilon)$ he would pay is proportional not to the variance but to the square root of the variance of the (absolute) risk; in this case, the risk premium is much bigger than under second-order risk aversion.
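A small numerical sketch of this first-order versus second-order distinction, using the kinked utility above with illustrative values of a, b, and w (our own choices):

```python
a, b, w = 0.5, 2.0, 10.0   # illustrative loss-aversion parameters: b > 1 > a > 0

def u(x):
    """Kinked utility around the benchmark w."""
    return a * (x - w) if x >= w else b * (x - w)

def premium(delta):
    """Risk premium for the 50/50 +/-delta risk: solves Eu(w+eps) = u(w-pi)."""
    Eu = 0.5 * u(w + delta) + 0.5 * u(w - delta)   # = (a - b)/2 * delta
    return -Eu / b                                  # since u(w - pi) = -b*pi

for delta in [0.1, 0.01, 0.001]:
    # first order: the premium shrinks like delta, not like delta**2
    print(delta, premium(delta), 0.5 * (b - a) / b * delta)
```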
It turned out that this was just the tip of the iceberg: there are lots of new facts demonstrating that agents do not, or do not exactly, follow the classical models in reality. And this has led to a new branch of financial economics: Behavioral Finance! In 2002, Daniel Kahneman and Vernon L. Smith received the Nobel prize in economics for their important contributions to behavioral and experimental economics. However, there is still a long way to go before we get satisfying theoretical results.
Practice Problems
1. Consider a two-period (0 and 1), $\Omega$-states-of-the-world economy in which there is a risky asset x available at time 0, whose payoffs at time 1 are also denoted by x. Suppose an agent with utility function $U(c_0, c_1)$ has consumption good endowment $(w_0, w_1) \in \mathbb{R}^{1+\Omega}_{++}$ and y units of asset endowment, respectively.
(a) Let y = 1. Suppose that the agent is going to sell the asset at time 0. What is the minimum price CE(x) he is going to ask?
(b) Let y = 0. Suppose that the agent is going to buy one unit of the asset at time 0. What is the maximum price P(x) he is going to bid?
(c) Let y = 1. At time 0, suppose that the agent is going to buy insurance to protect against losses in some situations. What is the maximum risk premium $\pi(x)$ he is going to pay?
(Hint: For each of the questions, just specify the equation that the price should satisfy.)