Hawk/Dove game with risk attitudes
This is a variant of the Hawk/Dove Game: https://en.wikipedia.org/wiki/Chicken_(game)
| | H | D |
|---|---|---|
| H | 0, 0 | 3, 1 |
| D | 1, 3 | 2, 2 |
BACKGROUND: An unpublished paper by Simon Blessenohl shows that the equilibrium in this game is different for EU maximizers than for REU maximizers (all with the same risk-attitude), and that REU maximizers do better as a population (basically, they play DOVE more often).
We want to know what happens when different people have different risk-attitudes. (See also the variant simulation, Hawk/Dove game with multiple risk attitudes.)
GAME: Hawk-Dove with risk-attitudes
Players are arranged on a lattice [options for both 4 neighbors (A, Y, B, D) and 8 neighbors (X, Y, Z, A, B, C, D, E)]; a short Mesa sketch follows the diagram below.
| | | |
|---|---|---|
| X | Y | Z |
| A | I | B |
| C | D | E |
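Both neighborhood options correspond to Mesa's built-in grid neighborhoods. A minimal sketch, assuming an arbitrary 10x10 wrapping grid and position purely for illustration (not taken from the model's code):

```python
from mesa.space import SingleGrid

# Hypothetical 10x10 wrapping lattice, purely for illustration.
grid = SingleGrid(10, 10, torus=True)

# 4-neighbor option (von Neumann neighborhood): the cells marked A, Y, B, D above.
four = grid.get_neighborhood((5, 5), moore=False, include_center=False)

# 8-neighbor option (Moore neighborhood): X, Y, Z, A, B, C, D, E.
eight = grid.get_neighborhood((5, 5), moore=True, include_center=False)

print(len(four), len(eight))  # 4 8
```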
- Payoffs are determined as follows (a short sketch follows this list):
  - Look at what each neighbor did, then:
    - If I play HAWK and neighbor plays DOVE: 3
    - If I play DOVE and neighbor plays DOVE: 2
    - If I play DOVE and neighbor plays HAWK: 1
    - If I play HAWK and neighbor plays HAWK: 0
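A minimal sketch of this per-neighbor payoff rule, assuming the round payoff is simply the sum over all neighbors (the function name and choice encoding are illustrative, not the model's actual code):

```python
HAWK, DOVE = "H", "D"

# Payoff to "me" for a single pairwise interaction, per the rules above.
PAYOFFS = {
    (HAWK, DOVE): 3,
    (DOVE, DOVE): 2,
    (DOVE, HAWK): 1,
    (HAWK, HAWK): 0,
}

def round_payoff(my_choice, neighbor_choices):
    """Total payoff for one round: sum of pairwise payoffs against each neighbor."""
    return sum(PAYOFFS[(my_choice, n)] for n in neighbor_choices)

# Example: playing HAWK against neighbors D, D, H, D yields 3 + 3 + 0 + 3 = 9.
print(round_payoff(HAWK, [DOVE, DOVE, HAWK, DOVE]))
```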
Each player on a lattice (grid in Mesa):
- Has parameter $r$ [from 0 to 9]
- Let $d$ be the number of neighbors who played DOVE during the previous round. If $d \geq r$, then play HAWK. Otherwise play DOVE. (Agents who are risk-avoidant only play HAWK if there are a lot of doves around them. More risk-avoidance requires a higher number of doves to get an agent to play HAWK.) See the sketch after this list.
- The proportion of neighbors who play DOVE corresponds to your probability of encountering a DOVE when playing a randomly-selected neighbor. The intended interpretation is that you maximize REU for this probability of your opponent playing DOVE. Thus, $r$ corresponds to the probability above which playing HAWK maximizes REU.
- Choice of play for the first round:
  - Who is a HAWK and who is a DOVE is randomly determined; the proportion is set at the beginning of each simulation. E.g. if 30% are HAWKS and we have 100 players, then each player has a 30% chance of being HAWK.
  - This initial parameter is called HAWK-ODDS; the default is 50/50.
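A rough Mesa-style sketch of the rules above, combining the first-round HAWK-ODDS draw with the $d \geq r$ decision. The class and attribute names are illustrative (a Mesa 2.x-style agent is assumed), and a real model would also need the scheduler, grid setup, and payoff bookkeeping:

```python
from mesa import Agent

class RiskAgent(Agent):
    """Illustrative agent: plays HAWK when at least r neighbors played DOVE
    in the previous round."""

    def __init__(self, unique_id, model, r, hawk_odds=0.5):
        super().__init__(unique_id, model)
        self.r = r  # risk attitude: 0 = always HAWK ... 9 = never HAWK
        # First round: HAWK with probability HAWK-ODDS, otherwise DOVE.
        self.choice = "H" if self.random.random() < hawk_odds else "D"
        self.next_choice = self.choice

    def step(self):
        # Count neighbors (Moore neighborhood here) who played DOVE last round.
        neighbors = self.model.grid.get_neighbors(self.pos, moore=True, include_center=False)
        doves = sum(1 for n in neighbors if n.choice == "D")
        # Rule from the list above: play HAWK iff d >= r.
        self.next_choice = "H" if doves >= self.r else "D"

    def advance(self):
        # With a simultaneous-activation scheduler, all agents switch at once.
        self.choice = self.next_choice
```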
This game has a discrete set of options rather than a continuous probability, so instead of defining $r$ as a value between 0.0 and 1.0, we use discrete values based on the choices. For the game that includes diagonal neighbors (i.e., agents play all 8 neighbors):
| r | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|---|
| Plays H when: | always | $d \geq 1$ | $d \geq 2$ | $d \geq 3$ | $d \geq 4$ | $d \geq 5$ | $d \geq 6$ | $d \geq 7$ | $d \geq 8$ | never |
| Risk attitude | risk seeking | risk seeking | risk seeking | risk seeking | EU maximizer (risk neutral) | EU maximizer (risk neutral) | risk avoidant | risk avoidant | risk avoidant | risk avoidant |
An REU maximizer will play HAWK when the risk-weighted expected utility of playing HAWK exceeds the risk-weighted expected utility of playing DOVE. In other words, when the probability of the opponent playing DOVE exceeds the threshold encoded by $r$, i.e. when $d \geq r$.
Payoffs were chosen to avoid the case in which two choices had equal expected utility for some number of neighbors. For example, if the payoff of
Another way to visualize the risk attitudes and choices in this game is this table, which shows when agents will play Hawk or Dove based on their risk attitudes (going down on the left side) and the number of neighbors playing Dove (across the top).
Columns show the number of neighbors playing DOVE:

| risk attitude | r | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|---|
| risk seeking | 0 | H | H | H | H | H | H | H | H | H |
| | 1 | D | H | H | H | H | H | H | H | H |
| | 2 | D | D | H | H | H | H | H | H | H |
| | 3 | D | D | D | H | H | H | H | H | H |
| neutral | 4 | D | D | D | D | H | H | H | H | H |
| | 5 | D | D | D | D | D | H | H | H | H |
| risk avoidant | 6 | D | D | D | D | D | D | H | H | H |
| | 7 | D | D | D | D | D | D | D | H | H |
| | 8 | D | D | D | D | D | D | D | D | H |
| | 9 | D | D | D | D | D | D | D | D | D |
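Since the table above follows mechanically from the $d \geq r$ rule, it can be regenerated with a few lines (illustrative only):

```python
# Rebuild the decision table from the d >= r rule: rows are r, columns are
# the number of neighbors who played DOVE (0-8 for the 8-neighbor game).
for r in range(10):
    row = ["H" if d >= r else "D" for d in range(9)]
    print(f"r={r}: " + " ".join(row))
```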
The model is configured to stop automatically when it has stabilized. Convergence is based on a stable rolling average of the percent of agents in the simulation playing HAWK.
A rolling average of the percent of agents playing HAWK is calculated every round, based on the percentages from the last 30 rounds. The rolling average is not calculated until after at least 15 rounds.
When we have collected the rolling average for at least 15 rounds and the last 30 rolling averages are the same when rounded to 2 percentage points, we consider the simulation converged.
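A sketch of this stopping rule under the stated parameters (a 30-round window for the rolling average, no average before round 15, and a run of matching rolling averages). The helper names are illustrative, and the rounding used here for "the same when rounded to 2 percentage points" is only one reading of the text:

```python
from collections import deque

WINDOW = 30        # rounds included in each rolling average
MIN_ROUNDS = 15    # no rolling average until this many rounds have been played
STABLE_RUN = 30    # how many consecutive rolling averages must agree to stop

hawk_pct_history = deque(maxlen=WINDOW)   # percent of agents playing HAWK, per round
rolling_averages = []

def record_round(hawk_percent):
    """Call once per round with the percent of agents currently playing HAWK."""
    hawk_pct_history.append(hawk_percent)
    if len(hawk_pct_history) >= MIN_ROUNDS:
        rolling_averages.append(sum(hawk_pct_history) / len(hawk_pct_history))

def converged():
    """True when the last STABLE_RUN rolling averages agree after rounding."""
    if len(rolling_averages) < STABLE_RUN:
        return False
    recent = [round(avg, 2) for avg in rolling_averages[-STABLE_RUN:]]
    return len(set(recent)) == 1
```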