{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Sequential Extensions of Causal and Evidential Decision Theory\n",
"\n",
"Written by [Jan Leike](http://jan.leike.name/) and [Tom Everitt](http://www.tomeveritt.se/).\n",
"\n",
"## About this notebook\n",
"\n",
"The code is written in Python 3, but should be compatible with Python 2.\n",
"\n",
"The code can be used to calculate the value estimates of the decision theories discussed in our paper [Sequential Extensions of Causal and Evidential Decision Theory](http://users.cecs.anu.edu.au/~leike/publications/Sequential Extensions of Causal and Evidential Decision Theory - Everitt, Leike, Hutter 2015.pdf). Generally, the implemented value functions take as input a distribution defining the environmental probabilities, an action/policy, lists of possible percepts and hidden states, and a utility function. The implementation aims to follow the formulas in the paper as closely as possible. See the code documentation for details.\n",
"\n",
"The results of the calculations are displayed below the code boxes. If you wish to re-run some code (with or without changes), please first run all earlier code boxes in order, as functions in later code boxes depend on earlier code boxes having been executed at least once. \n",
"\n",
"To run a code box, press Shift+Enter.\n",
"\n",
"\n",
"## One-Shot Decision Making\n",
"\n",
"In a _decision problem_,\n",
"we take one action $a \\in \\mathcal{A}$, receive a percept $e \\in \\mathcal{E}$\n",
"(typically called _outcome_ in the decision theory literature)\n",
"and get a payoff according to the utility function\n",
"$u: \\mathcal{E} \\to [0, 1]$.\n",
"We assume that the set of actions $\\mathcal{A}$ and the set of percepts $\\mathcal{E}$ are finite.\n",
"Additionally,\n",
"the environment contains a _hidden state_ $s \\in \\mathcal{S}$.\n",
"The hidden state holds information that is inaccessible to the agent\n",
"at the time of the decision,\n",
"but may influence the decision and the percept.\n",
"Formally, the environment is given by\n",
"a probability distribution $P$ over the hidden state, the action and the percept\n",
"that factors according to a _causal graph_ (Pearl, 2009).\n",
"\n",
"### Causal Graphs\n",
"\n",
"A causal graph over the random variables $x_1,\\dots,x_n$ is\n",
"a directed acyclic graph with nodes $x_1,\\dots,x_n$ and\n",
"probability distributions $P(x_i\\mid pa_i)$ for each node $x_i$\n",
"where $pa_i$ is the set of parents of $x_i$ in the graph.\n",
"It is natural to identify the causal graph with \n",
"the factored distribution $P(x_1,\\dots, x_n) = \\prod_{i=1}^n P(x_i\\mid pa_i)$.\n",
"Given such a causal graph, \n",
"the $\\texttt{do}$-operator is defined as\n",
"$$\n",
" P(x_1, \\ldots, x_j, \\ldots, x_n \\mid \\texttt{do}(x_j := b))\n",
"= \\frac{\\prod_{i=1}^n P(x_i \\mid pa_i)}{P(x_j \\mid pa_j)}\n",
"$$\n",
"if $x_j = b$ on the left-hand side and $0$ otherwise.\n",
"The result is a new probability distribution\n",
"that can be marginalized and conditioned in the standard way.\n",
"Intuitively, intervening on node $x_j$ means\n",
"ignoring all incoming arrows to $x_j$, as the effects they represent\n",
"are no longer relevant when we intervene;\n",
"the division removes the $P(x_j \\mid pa_j)$ factor from $P(x_1,\\dots,x_n)$.\n",
"Note that the $\\texttt{do}$-operator is only defined for distributions\n",
"for which a causal graph has been specified.\n",
"See (Pearl, 2009, Ch. 3.4) for details.\n",
"\n",
"In the first two (one-shot) examples, the environment is given as\n",
"the following causal graph over the hidden state $s$, action $a$, and percept $e$.\n",
"\n",
"![im1](one-shot.png)\n",
"\n",
"According to this causal graph,\n",
"the probability distribution $P$ factors causally into\n",
"$P(s, a, e) = P(s) P(a \\mid s) P(e \\mid s, a)$.\n",
"\n",
"The environment has a prior belief over the agent's actions,\n",
"but that prior does not have to assign high probability to the\n",
"actions that the agent actually ends up taking.\n",
"We think of this prior as _partial information_ about the agent,\n",
"or, in multi-agent systems, as beliefs held by other agents.\n",
"Since we assume the agent's policy to be deterministic and inside the environment,\n",
"the environment could actually know all the agent's actions in advance.\n",
"However, for self-consistency reasons, the agent needs to be\n",
"uncertain about what the environment knows about it:\n",
"if the agent knew which action she was going to take,\n",
"she could decide to take a different action, which leads to a paradox."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Imports and Helper functions\n",
"from fractions import Fraction # exact arithmetic with fractions\n",
"\n",
"def distribution(d):\n",
" \"\"\"\n",
" Return a probability distribution\n",
"    corresponding to the conditional probability mu(a | b)\n",
" \n",
" Input: a dictionary specifying the distribution\n",
" Output: a function that queries the dictionary and returns 0 if nothing is found\n",
" \"\"\"\n",
" def mu(a, b=''):\n",
" #print(\"evaluating \",a,b)\n",
" if b == '':\n",
" return d[a] if a in d else 0\n",
" else:\n",
" return d[(a, b)] if (a, b) in d else 0\n",
" return mu"
]
},
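{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before moving on, the following cell illustrates the difference between\n",
"conditioning and intervening on a small example.\n",
"This example is not from the paper; the variables and numbers are chosen\n",
"for illustration only.\n",
"We use the one-shot graph $s \\to a$, $s \\to e$, $a \\to e$ with binary\n",
"variables and $P(e \\mid s, a) = 1$ iff $e = s$,\n",
"so the percept simply reveals the hidden state.\n",
"Conditioning on $a$ provides evidence about $s$,\n",
"whereas intervening with $\\texttt{do}(a)$ leaves the prior over $s$ untouched."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Illustration (not from the paper): conditioning vs. intervening in the\n",
"# causal graph s -> a, s -> e, a -> e with binary variables.\n",
"from fractions import Fraction\n",
"\n",
"P_s = {0: Fraction(1, 2), 1: Fraction(1, 2)}                # P(s)\n",
"P_a_s = {(0, 0): Fraction(9, 10), (1, 0): Fraction(1, 10),  # P(a | s)\n",
"         (0, 1): Fraction(1, 10), (1, 1): Fraction(9, 10)}\n",
"\n",
"def P_e_sa(e, s, a):\n",
"    return Fraction(int(e == s))  # P(e | s, a) = [e = s]\n",
"\n",
"def conditional(e, a):\n",
"    # P(e | a) = sum_s P(s) P(a|s) P(e|s,a) / P(a)\n",
"    num = sum(P_s[s]*P_a_s[(a, s)]*P_e_sa(e, s, a) for s in (0, 1))\n",
"    return num / sum(P_s[s]*P_a_s[(a, s)] for s in (0, 1))\n",
"\n",
"def interventional(e, a):\n",
"    # P(e | do(a)) = sum_s P(s) P(e|s,a): the factor P(a|s) is divided out\n",
"    return sum(P_s[s]*P_e_sa(e, s, a) for s in (0, 1))\n",
"\n",
"print(conditional(1, 1))     # 9/10: the action is evidence about s\n",
"print(interventional(1, 1))  # 1/2: s keeps its prior"
]
},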
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Value Functions\n",
"\n",
"One-shot evidential value function:\n",
"$$\n",
" V^{\\mathrm{evi},a}_{\\mu,1}\n",
":= \\sum_{e \\in \\mathcal{E}} \\mu(e \\mid a) u(e)\n",
" = \\sum_{e \\in \\mathcal{E}} \\sum_{s \\in \\mathcal{S}} \\mu(e \\mid s, a) \\mu(s \\mid a) u(e)\n",
"$$\n",
"One-shot causal value function:\n",
"$$\n",
" V^{\\mathrm{cau},a}_{\\mu,1}\n",
":= \\sum_{e \\in \\mathcal{E}} \\mu(e \\mid \\texttt{do}(a)) u(e)\n",
" = \\sum_{e \\in \\mathcal{E}} \\sum_{s \\in \\mathcal{S}} \\mu(e \\mid s, a) \\mu(s) u(e)\n",
"$$"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def evidential_value(mu, a, E, S, u):\n",
" \"\"\"\n",
" Evidential value of action a in environment mu\n",
" with possible percepts E, hidden states S, and utility function u.\n",
" \"\"\"\n",
" mu_a = sum([mu(a, s)*mu(s) for s in S]) # mu(a)\n",
" return sum([u[e]*mu(e, s + a)*mu(s)*mu(a, s)/mu_a for e in E for s in S])\n",
"\n",
"def causal_value(mu, a, E, S, u):\n",
" \"\"\"\n",
" Causal value of action a in environment mu\n",
" with possible percepts E, hidden states S, and utility function u\n",
" \"\"\"\n",
" return sum([u[e]*mu(e, s + a)*mu(s) for e in E for s in S])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Newcomb's Problem\n",
"\n",
"In [Newcomb's Problem](http://en.wikipedia.org/wiki/Newcomb%27s_paradox) there are two boxes:\n",
"an opaque box that is either empty or contains one million dollars and\n",
"a transparent box that contains one thousand dollars.\n",
"The agent can choose between taking only the opaque box (\"one-boxing\")\n",
"and taking both boxes (\"two-boxing\").\n",
"The content of the opaque box is determined by a very reliable predictor:\n",
"If it predicts the agent will one-box, the box contains the million, and\n",
"if it predicts the agent will two-box, the box is empty.\n",
"\n",
"In Newcomb's problem\n",
"EDT prescribes to one-box because one-boxing is evidence that\n",
"the box contains a million dollars.\n",
"In contrast, CDT prescribes to two-box because two-boxing dominates one-boxing:\n",
"in either case we are a thousand dollars richer,\n",
"and our decision cannot causally affect the prediction.\n",
"Newcomb's problem has been raised as a critique to CDT,\n",
"but many philosophers insist that two-boxing is in fact\n",
"the rational choice, even if it makes you end up poor."
]
},
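{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see where the numbers computed below come from, note that with\n",
"prediction inaccuracy $\\varepsilon = 0.01$ we have\n",
"$\\mu(1) = \\varepsilon \\mu(E) + (1 - \\varepsilon) \\mu(F) = \\tfrac{1}{2}$, and hence\n",
"$$\n",
"V^{\\mathrm{evi},1}_{\\mu,1}\n",
"= \\mu(F \\mid 1)\\, u(M)\n",
"= \\frac{\\mu(1 \\mid F)\\, \\mu(F)}{\\mu(1)}\\, u(M)\n",
"= (1 - \\varepsilon) \\cdot 1000000\n",
"= 990000,\n",
"$$\n",
"while the causal value of two-boxing ignores the evidential link:\n",
"$$\n",
"V^{\\mathrm{cau},2}_{\\mu,1}\n",
"= \\mu(F)\\, u(A) + \\mu(E)\\, u(T)\n",
"= \\tfrac{1}{2} \\cdot 1001000 + \\tfrac{1}{2} \\cdot 1000\n",
"= 501000.\n",
"$$"
]
},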
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Evidential value of one-boxing: $990000\n",
"Evidential value of two-boxing: $11000\n",
"Causal value of one-boxing: $500000\n",
"Causal value of two-boxing: $501000\n"
]
}
],
"source": [
"# Newcomb's Problem\n",
"S = {'E', 'F'} # E = empty, F = full\n",
"A = {'1', '2'} # 1 = one-box, 2 = two-box\n",
"E = {'0', 'T', 'M', 'A'} # 0 = no money, T = $1000, M = $1000000, A = $1001000 (all of the money)\n",
"u = {'0': 0, 'T': 1000, 'M': 1000000, 'A': 1001000} # Utilities\n",
"eps = Fraction('0.01') # Prediction inaccuracy \n",
"mu = distribution({\n",
" 'E': Fraction('0.5'), # Prob. box empty\n",
" 'F': Fraction('0.5'), # Prob. box full\n",
" ('1', 'F'): 1 - eps, # Prob. agent one-boxes given box full\n",
" ('1', 'E'): eps, # Prob. agent one-boxes given box empty\n",
" ('2', 'F'): eps, # Prob. agent two-boxes given box full\n",
"    ('2', 'E'): 1 - eps,       # Prob. agent two-boxes given box empty\n",
" ('0', 'E1'): 1, # Prob. receiving $0 given empty and one-boxing\n",
" ('T', 'E2'): 1, # Prob. receiving $T given empty and two-boxing\n",
" ('M', 'F1'): 1, # Prob. receiving $M given full and one-boxing \n",
" ('A', 'F2'): 1, # Prob. receiving $M+$T given full and two-boxing\n",
"}) # Unspecified conditionals default to 0\n",
"\n",
"print('Evidential value of one-boxing: ${:}'.format(evidential_value(mu, '1', E, S, u)))\n",
"print('Evidential value of two-boxing: ${:}'.format(evidential_value(mu, '2', E, S, u)))\n",
"print('Causal value of one-boxing: ${:}'.format(causal_value(mu, '1', E, S, u)))\n",
"print('Causal value of two-boxing: ${:}'.format(causal_value(mu, '2', E, S, u)))\n",
"\n",
"assert evidential_value(mu, '1', E, S, u) > evidential_value(mu, '2', E, S, u) # EDT prefers 1\n",
"assert causal_value(mu, '1', E, S, u) < causal_value(mu, '2', E, S, u) # CDT prefers 2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Toxoplasmosis Problem\n",
"\n",
"This problem takes place in a world in which a certain parasite\n",
"causes its hosts to be attracted to cats,\n",
"in addition to other uncomfortable side effects.\n",
"The agent is handed an adorable little kitten and\n",
"is faced with the decision of whether or not to pet it.\n",
"Petting the kitten feels nice and\n",
"therefore yields more utility than not petting it.\n",
"However, people suffering from the parasite are more likely to pet the kitten.\n",
"\n",
"Petting the kitten constitutes evidence\n",
"of the presence of the parasite, and thus EDT recommends against it.\n",
"CDT correctly observes that there is no causal connection between\n",
"petting the kitten and having the parasite,\n",
"and is therefore in favor of petting."
]
},
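{
"cell_type": "markdown",
"metadata": {},
"source": [
"By hand: since $\\mu(P) = 0.8 \\cdot \\tfrac12 + 0.2 \\cdot \\tfrac12 = \\tfrac12$,\n",
"petting makes the parasite likely, $\\mu(T \\mid P) = 0.8$, so\n",
"$$\n",
"V^{\\mathrm{evi},P}_{\\mu,1} = 0.8 \\cdot (-9) + 0.2 \\cdot 1 = -7,\n",
"\\qquad\n",
"V^{\\mathrm{cau},P}_{\\mu,1} = \\tfrac12 \\cdot (-9) + \\tfrac12 \\cdot 1 = -4,\n",
"$$\n",
"matching the values computed below."
]
},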
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Evidential value of petting: -7\n",
"Evidential value of not petting: -2\n",
"Causal value of petting: -4\n",
"Causal value of not petting: -5\n"
]
}
],
"source": [
"# Toxoplasmosis Problem\n",
"S = {'T', 'H'} # T = toxoplasmosis, H = healthy\n",
"A = {'P', 'N'} # P = petting, N = not petting\n",
"E = {'0', '1', '8', '9'} # 0 = sick, not petting; 1 = sick, petting; 8 = healthy, not petting; 9 = healthy, petting \n",
"u = {'0': -10, '1': -9, '8': 0, '9': 1} # Utilities\n",
"mu = distribution({\n",
" 'T': Fraction('0.5'),\n",
" 'H': Fraction('0.5'),\n",
" ('P', 'T'): Fraction('0.8'),\n",
" ('P', 'H'): Fraction('0.2'),\n",
" ('N', 'T'): Fraction('0.2'),\n",
" ('N', 'H'): Fraction('0.8'),\n",
" ('0', 'TN'): 1,\n",
" ('1', 'TP'): 1,\n",
" ('8', 'HN'): 1,\n",
" ('9', 'HP'): 1,\n",
"}) # Unspecified conditionals default to 0\n",
"\n",
"print('Evidential value of petting: {:}'.format(evidential_value(mu, 'P', E, S, u)))\n",
"print('Evidential value of not petting: {:}'.format(evidential_value(mu, 'N', E, S, u)))\n",
"print('Causal value of petting: {:}'.format(causal_value(mu, 'P', E, S, u)))\n",
"print('Causal value of not petting: {:}'.format(causal_value(mu, 'N', E, S, u)))\n",
"\n",
"assert evidential_value(mu, 'P', E, S, u) < evidential_value(mu, 'N', E, S, u) # EDT prefers N\n",
"assert causal_value(mu, 'P', E, S, u) > causal_value(mu, 'N', E, S, u) # CDT prefers P"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Sequential Decisions\n",
"\n",
"For the remainder of this notebook,\n",
"we consider an environment $\\mu$ that\n",
"the agent interacts with sequentially:\n",
"at time step $t$ the agent chooses an _action_ $a_t \\in \\mathcal{A}$ and\n",
"receives a _percept_ $e_t \\in \\mathcal{E}$\n",
"which yields a _utility_ of $u(e_t) \\in \\mathbb{R}$;\n",
"the cycle then repeats for $t + 1$.\n",
"A _history_ is an element of $(\\mathcal{A} \\times \\mathcal{E})^*$.\n",
"We use $æ \\in \\mathcal{A} \\times \\mathcal{E}$ to denote one interaction cycle,\n",
"and $æ_{<t} = æ_1 æ_2 \\ldots æ_{t-1}$ to denote a history of length $t - 1$.\n",
"A _policy_ is a function $\\pi$ mapping histories to actions.\n",
"\n",
"The code below implements the sequential value functions for horizon 2:\n",
"action-evidential (SAEDT), policy-evidential (SPEDT), and causal (SCDT);\n",
"see the paper for the defining formulas."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def saedt_value(mu, pi, E, S, u):\n",
"    \"\"\"\n",
"    Action-evidential value of policy pi in environment mu with horizon 2\n",
"    with possible percepts E, hidden states S, and utility function u.\n",
"    \"\"\"\n",
"    a1 = pi['']\n",
"    mu_a1 = sum([mu(a1, s)*mu(s) for s in S])  # mu(a1)\n",
"    # mu(a1 e1 a2)\n",
"    mu_aea = lambda e1: sum([mu(s)*mu(a1, s)*mu(e1, s + a1)*mu(pi[a1 + e1], s + a1 + e1) for s in S])\n",
"    # sum_e1 mu(e1 | a1)*(u(e1) + sum_e2 mu(e2 | a1 e1 a2)*u(e2))\n",
"    return sum([mu(s)*mu(a1, s)*mu(e1, s + a1)/mu_a1\n",
"                *(u[e1] + sum([mu(s2)*mu(a1, s2)*mu(e1, s2 + a1)*mu(pi[a1 + e1], s2 + a1 + e1)\n",
"                               *mu(e2, s2 + a1 + e1 + pi[a1 + e1])/mu_aea(e1)*u[e2]\n",
"                               for e2 in E for s2 in S if mu(e1, s2 + a1) > 0]))\n",
"                for e1 in E for s in S if mu(e1, s + a1) > 0])\n",
"\n",
"def spedt_value(mu, pi, E, S, u):\n",
" \"\"\"\n",
" Policy-evidential value of policy pi in environment mu with horizon 2\n",
" with possible percepts E, hidden states S, and utility function u.\n",
" \"\"\"\n",
" a1 = pi['']\n",
" # mu(e_1 | pi) = \\sum_s mu(s | pi)*mu(e_1 | s, pi)\n",
" mu_e1_pi = lambda e1: sum([mu(s)*mu(a1, s)*mu(e1, s + a1)*mu(pi[a1 + e1], s + a1 + e1)\n",
" for s in S if mu(e1, s + a1) > 0]) / \\\n",
" sum([mu(s)*mu(a1, s)*mu(e, s + a1)*mu(pi[a1 + e], s + a1 + e)\n",
" for s in S for e in E if mu(e, s + a1) > 0])\n",
" mu_aea = lambda e1: sum([mu(s)*mu(a1, s)*mu(e1, s + a1)*mu(pi[a1 + e1], s + a1 + e1) for s in S]) # mu(a1 e1 a2)\n",
" return sum([mu_e1_pi(e1)*(u[e1] + sum([mu(s)*mu(a1, s)*mu(e1, s + a1)\n",
" *mu(pi[a1 + e1], s + a1 + e1)\n",
" *mu(e2, s + a1 + e1 + pi[a1 + e1])\n",
" /mu_aea(e1)*u[e2]\n",
" for e2 in E for s in S if mu(e1, s + a1) > 0\n",
" ])) for e1 in E])\n",
"\n",
"def scdt_value(mu, pi, E, S, u):\n",
" \"\"\"\n",
" Causal value of policy pi in environment mu\n",
" with possible percepts E, hidden states S, and utility function u.\n",
" \"\"\"\n",
" a1 = pi['']\n",
" mu_ae = lambda e1: sum([mu(a1, s)*mu(e1, s + a1)*mu(s) for s in S]) # mu(a1e1)\n",
" return sum([mu(e1, s + a1)*mu(s)*(u[e1] + sum([\n",
" mu(e2, s + a1 + e1 + pi[a1 + e1])*mu(s)*mu(a1, s)*mu(e1, s + a1)/mu_ae(e1)*u[e2]\n",
" for e2 in E for s in S\n",
" ])) for e1 in E for s in S if mu(e1, s + a1) > 0])\n",
"\n",
"def print_policy_values(policies):\n",
"    \"\"\"Print the SAEDT, SPEDT, and SCDT values for a list of (policy, name) pairs\"\"\"\n",
" for value, vname in [(saedt_value, 'Action-evidential'),\n",
" (spedt_value, 'Policy-evidential'),\n",
" (scdt_value, 'Causal')]:\n",
" print('======================================')\n",
" print('{} value'.format(vname))\n",
" values = {pname: value(mu, pi, E, S, u) for pi, pname in policies}\n",
" m = max(values.values())\n",
" for pi, pname in policies:\n",
" print('{:<20}: {:}{:}'.format(pname, values[pname],\n",
" ' BEST' if values[pname] == m else ''))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Newcomb with Looking\n",
"\n",
"In this variation of Newcomb's problem,\n",
"the agent may look into the opaque box before deciding\n",
"which box to take.\n",
"An SCDT agent is indifferent towards looking because\n",
"she will take both boxes anyway.\n",
"However, an SAEDT or SPEDT agent will avoid looking into the box,\n",
"because once the content is revealed, he two-boxes.\n",
"\n",
"![Game tree for Newcomb with Looking](Newcomb-with-looking.png)"
]
},
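{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, the incurious one-boxer always observes the uninformative\n",
"percept $0$, so the action-evidential posterior after the history $2\\,0\\,1$\n",
"puts weight $1 - \\varepsilon$ on the full box,\n",
"giving the value $(1 - \\varepsilon) \\cdot 1000000 = 990000$.\n",
"The curious one-boxer instead conditions on the observed box content,\n",
"which is full with probability $\\tfrac12$,\n",
"so her action-evidential value is only $\\tfrac12 \\cdot 1000000 = 500000$."
]
},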
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"======================================\n",
"Action-evidential value\n",
"Curious one-boxer : 500000\n",
"Curious two-boxer : 501000\n",
"Fatalistic : 500500\n",
"Paradox-lover : 500500\n",
"Incurious one-boxer : 990000 BEST\n",
"Incurious two-boxer : 11000\n",
"======================================\n",
"Policy-evidential value\n",
"Curious one-boxer : 990000 BEST\n",
"Curious two-boxer : 11000\n",
"Fatalistic : 500500\n",
"Paradox-lover : 500500\n",
"Incurious one-boxer : 990000 BEST\n",
"Incurious two-boxer : 11000\n",
"======================================\n",
"Causal value\n",
"Curious one-boxer : 500000\n",
"Curious two-boxer : 501000 BEST\n",
"Fatalistic : 500500\n",
"Paradox-lover : 500500\n",
"Incurious one-boxer : 500000\n",
"Incurious two-boxer : 501000 BEST\n"
]
}
],
"source": [
"# Newcomb with Looking\n",
"S = {'E', 'F'} # E = empty, F = full\n",
"A = {'1', '2'}  # first action: 1 = look, 2 = don't look; second action: 1 = one-box, 2 = two-box\n",
"E = {'E', 'F', '0', 'T', 'M', 'A'} # E = empty, F = full, 0 = no money,\n",
" # T = $1000, M = $1000000, A = $1001000 (all of the money)\n",
"u = {'E': 0, 'F': 0, '0': 0, 'T': 1000, 'M': 1000000, 'A': 1001000}\n",
"eps = Fraction('0.01')  # prediction inaccuracy\n",
"mu = distribution({\n",
" # Hidden state\n",
" 'E': Fraction('0.5'),\n",
" 'F': Fraction('0.5'),\n",
" # First action: looking vs. not looking\n",
" ('1', 'F'): Fraction('0.5'),\n",
" ('1', 'E'): Fraction('0.5'),\n",
" ('2', 'F'): Fraction('0.5'),\n",
" ('2', 'E'): Fraction('0.5'),\n",
" # First percept: box content if looking else 0\n",
" ('E', 'E1'): 1,\n",
" ('0', 'E2'): 1,\n",
" ('F', 'F1'): 1,\n",
" ('0', 'F2'): 1,\n",
" # Second action: one-boxing vs. two-boxing\n",
" ('1', 'E1E'): eps,\n",
" ('2', 'E1E'): 1 - eps,\n",
" ('1', 'E20'): eps,\n",
" ('2', 'E20'): 1 - eps,\n",
" ('1', 'F1F'): 1 - eps,\n",
" ('2', 'F1F'): eps,\n",
" ('1', 'F20'): 1 - eps,\n",
" ('2', 'F20'): eps,\n",
" # Second percept: box contents\n",
" ('0', 'E1E1'): 1,\n",
" ('0', 'E201'): 1,\n",
" ('T', 'E1E2'): 1,\n",
" ('T', 'E202'): 1,\n",
" ('M', 'F1F1'): 1,\n",
" ('M', 'F201'): 1,\n",
" ('A', 'F1F2'): 1,\n",
" ('A', 'F202'): 1,\n",
"}) \n",
"\n",
"# There are six possible policies:\n",
"policies = [\n",
" ({'': '1', '1E': '1', '1F': '1'}, 'Curious one-boxer'),\n",
" ({'': '1', '1E': '2', '1F': '2'}, 'Curious two-boxer'),\n",
" ({'': '1', '1E': '2', '1F': '1'}, 'Fatalistic'),\n",
" ({'': '1', '1E': '1', '1F': '2'}, 'Paradox-lover'),\n",
" ({'': '2', '20': '1'}, 'Incurious one-boxer'),\n",
" ({'': '2', '20': '2'}, 'Incurious two-boxer')\n",
"]\n",
"\n",
"print_policy_values(policies)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"While the curious one-boxer is an optimal strategy for SPEDT,\n",
"it is not _time consistent_: once the box content is revealed,\n",
"both evidential decision theories want to two-box."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Evidential value of one-boxing after seeing the box empty: $0\n",
"Evidential value of two-boxing after seeing the box empty: $1000\n",
"Evidential value of one-boxing after seeing the box full: $1000000\n",
"Evidential value of two-boxing after seeing the box full: $1001000\n"
]
}
],
"source": [
"def after_seeing(box_content):\n",
"    \"\"\"Generate the conditional distribution for the second step,\n",
"    after the box content has been revealed\"\"\"\n",
" def distr(a, b=''):\n",
" if b == '':\n",
" return mu(a)\n",
" else:\n",
" return mu(a, b[0] + '1' + box_content + b[1:])\n",
" return distr\n",
"\n",
"print('Evidential value of one-boxing after seeing the box empty: ${:}'.format(\n",
" evidential_value(after_seeing('E'), '1', E, S, u)))\n",
"print('Evidential value of two-boxing after seeing the box empty: ${:}'.format(\n",
" evidential_value(after_seeing('E'), '2', E, S, u)))\n",
"print('Evidential value of one-boxing after seeing the box full: ${:}'.format(\n",
" evidential_value(after_seeing('F'), '1', E, S, u)))\n",
"print('Evidential value of two-boxing after seeing the box full: ${:}'.format(\n",
" evidential_value(after_seeing('F'), '2', E, S, u)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Newcomb with Precommitment\n",
"\n",
"In this variation of Newcomb's problem,\n",
"the agent first has the option to pay \\$300,000 to sign a contract that\n",
"binds the agent to pay \\$2,000 in case of two-boxing.\n",
"An SAEDT or SPEDT agent knows that he will one-box anyway\n",
"and hence has no need for the contract.\n",
"An SCDT agent knows that she favors two-boxing,\n",
"but signs the contract only if this occurs before the prediction is made\n",
"(so it has a chance of causally affecting the prediction).\n",
"With the contract in place, one-boxing is the dominant action,\n",
"and thus the SCDT agent is predicted to one-box.\n",
"\n",
"![Game tree for Newcomb with Precommitment](Newcomb-with-precommitment.png)"
]
},
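{
"cell_type": "markdown",
"metadata": {},
"source": [
"In numbers: after signing ($u(C) = -300000$), one-boxing yields the million\n",
"regardless of the hidden state, so the value of the signing one-boxer is\n",
"$-300000 + 1000000 = 700000$ under all three value functions;\n",
"the signing two-boxer instead ends with the percept $R$,\n",
"i.e. $-300000 + 999000 = 699000$."
]
},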
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"======================================\n",
"Action-evidential value\n",
"Signing one-boxer : 700000\n",
"Signing two-boxer : 699000\n",
"Refusing one-boxer : 990000 BEST\n",
"Refusing two-boxer : 11000\n",
"======================================\n",
"Policy-evidential value\n",
"Signing one-boxer : 700000\n",
"Signing two-boxer : 699000\n",
"Refusing one-boxer : 990000 BEST\n",
"Refusing two-boxer : 11000\n",
"======================================\n",
"Causal value\n",
"Signing one-boxer : 700000 BEST\n",
"Signing two-boxer : 699000\n",
"Refusing one-boxer : 500000\n",
"Refusing two-boxer : 501000\n",
"======================================\n",
"Evidential value of one-boxing after signing: $1000000\n",
"Evidential value of two-boxing after signing: $999000\n",
"Causal value of one-boxing after signing: $1000000\n",
"Causal value of two-boxing after signing: $999000\n"
]
}
],
"source": [
"# Newcomb with Precommitment\n",
"S = {'E', 'F'} # E = empty, F = full\n",
"A = {'1', '2'}  # first action: 1 = sign, 2 = don't sign; second action: 1 = one-box, 2 = two-box\n",
"E = {'C', '0', 'T', 'M', 'A', 'R'}\n",
"    # C = -$300000 for the contract, 0 = no money, T = $1000,\n",
"    # M = $1000000, A = $1001000 (all of the money),\n",
"    # R = $999000 (rich, but violating the contract)\n",
"u = {'C': -300000, '0': 0, 'T': 1000, 'M': 1000000, 'A': 1001000, 'R': 999000}\n",
"eps = Fraction('0.01')  # prediction inaccuracy\n",
"mu = distribution({\n",
" # Hidden state\n",
" 'E': Fraction('0.5'),\n",
" 'F': Fraction('0.5'),\n",
" # First action: signing vs. not signing\n",
" ('1', 'F'): Fraction('0.5'),\n",
" ('1', 'E'): Fraction('0.5'),\n",
" ('2', 'F'): Fraction('0.5'),\n",
" ('2', 'E'): Fraction('0.5'),\n",
" # First percept: C if signing the contract else 0\n",
" ('C', 'E1'): 1,\n",
" ('0', 'E2'): 1,\n",
" ('C', 'F1'): 1,\n",
" ('0', 'F2'): 1,\n",
" # Second action: one-boxing vs. two-boxing\n",
" ('1', 'E1C'): eps,\n",
" ('2', 'E1C'): 1 - eps,\n",
" ('1', 'E20'): eps,\n",
" ('2', 'E20'): 1 - eps,\n",
" ('1', 'F1C'): 1 - eps,\n",
" ('2', 'F1C'): eps,\n",
" ('1', 'F20'): 1 - eps,\n",
" ('2', 'F20'): eps,\n",
" # Second percept: box contents minus contract cost\n",
" ('M', 'E1C1'): 1,\n",
" ('0', 'E201'): 1,\n",
" ('R', 'E1C2'): 1,\n",
" ('T', 'E202'): 1,\n",
" ('M', 'F1C1'): 1,\n",
" ('M', 'F201'): 1,\n",
" ('R', 'F1C2'): 1,\n",
" ('A', 'F202'): 1,\n",
"})\n",
"\n",
"# There are four possible policies:\n",
"policies = [\n",
" ({'': '1', '1C': '1'}, 'Signing one-boxer'),\n",
" ({'': '1', '1C': '2'}, 'Signing two-boxer'),\n",
" ({'': '2', '20': '1'}, 'Refusing one-boxer'),\n",
" ({'': '2', '20': '2'}, 'Refusing two-boxer')\n",
"]\n",
"\n",
"print_policy_values(policies)\n",
"\n",
"# Compute values after signing the contract\n",
"def after_signing(a, b=''):\n",
"    \"\"\"Conditional distribution for the second step,\n",
"    after the contract has been signed\"\"\"\n",
"    if b == '':\n",
"        return mu(a)\n",
"    else:\n",
"        return mu(a, b[0] + '1C' + b[1:])\n",
"\n",
"print('======================================')\n",
"for value, name in [(evidential_value, 'Evidential'), (causal_value, 'Causal')]:\n",
"    print('{:} value of one-boxing after signing: ${:}'.format(\n",
"        name, value(after_signing, '1', E, S, u)))\n",
"    print('{:} value of two-boxing after signing: ${:}'.format(\n",
"        name, value(after_signing, '2', E, S, u)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Sequential Toxoplasmosis\n",
"\n",
"In our sequential variation of the toxoplasmosis problem\n",
"the agent has some probability of encountering a kitten.\n",
"Additionally, the agent has the option of seeing a doctor (for a fee) and\n",
"getting tested for the parasite, which can then be safely removed.\n",
"At the very beginning, an SPEDT agent updates his belief on the fact that\n",
"if he encountered a kitten, he would not pet it,\n",
"which lowers the probability that he has the parasite\n",
"and thus makes seeing the doctor unattractive.\n",
"An SAEDT agent only updates his belief about the parasite\n",
"when he actually encounters a kitten,\n",
"and thus prefers seeing the doctor.\n",
"\n",
"First the environment gives the parasite (T) or not (H) to the agent. The agent then chooses whether to see the doctor (Y) or not (N). At the next step, an agent who does not see the doctor may encounter a kitten (K) or, if infected, get sick (S). After encountering a kitten, the agent can choose either to pet it (Y) or not (N). Agents who see the doctor pay the doctor's fee (C). After S or C, an empty action Y or N is taken. The game tree shows the probabilities.\n",
"\n",
"![Game tree of Sequential Toxoplasmosis](Sequential-toxoplasmosis.png)"
]
},
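{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, seeing the doctor always yields the percept $C$ with\n",
"$u(C) = -4$ and an empty second step, so all three value functions assign\n",
"this policy the value $-4$.\n",
"For the causal value of \"don't see doctor, pet\": the agent gets sick with\n",
"probability $\\mu(T) \\cdot 0.8 = 0.4$ (utility $-10$) and otherwise meets a\n",
"kitten (probability $0.6$) whose petting payoff has causal expectation\n",
"$\\tfrac16 \\cdot (-9) + \\tfrac56 \\cdot 1 = -\\tfrac23$,\n",
"giving $0.4 \\cdot (-10) + 0.6 \\cdot (-\\tfrac23) = -\\tfrac{22}{5}$."
]
},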
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"======================================\n",
"Action-evidential value\n",
"see doctor : -4 BEST\n",
"don't see doctor, pet : -91/15\n",
"don't see doctor, don't pet: -30/7\n",
"======================================\n",
"Policy-evidential value\n",
"see doctor : -4\n",
"don't see doctor, pet : -131/19\n",
"don't see doctor, don't pet: -110/31 BEST\n",
"======================================\n",
"Causal value\n",
"see doctor : -4 BEST\n",
"don't see doctor, pet : -22/5\n",
"don't see doctor, don't pet: -5\n"
]
}
],
"source": [
"# Sequential Toxoplasmosis\n",
"S = {'T', 'H'} # T = toxoplasmosis, H = healthy\n",
"A = {'Y', 'N'} # Y = see doctor/pet kitten, N = not pet kitten/not see doctor\n",
"E = {'C', 'K', 'S', 's', 'P', '0'} # C = doctor cost, K = kitten, S = sick, s = sick but pet kitten, P = pet kitten, 0 = empty percept \n",
"u = {'C': -4, 'K': 0, 'S': -10, 's': -9, 'P': 1, '0': 0} # utilities\n",
"mu = distribution({\n",
" # Hidden state\n",
" 'T': Fraction('0.5'),\n",
" 'H': Fraction('0.5'),\n",
" # First action: See the doctor?\n",
" ('Y', 'T'): Fraction('0.5'),\n",
" ('N', 'T'): Fraction('0.5'),\n",
" ('Y', 'H'): Fraction('0.5'),\n",
" ('N', 'H'): Fraction('0.5'),\n",
" # First percept: doctor cost or seeing a kitten or getting sick\n",
" ('C', 'TY'): 1,\n",
" ('C', 'HY'): 1,\n",
" ('S', 'TN'): Fraction('0.8'),\n",
" ('K', 'TN'): Fraction('0.2'),\n",
" ('K', 'HN'): 1,\n",
" # Second action: Pet the kitten?\n",
" ('Y', 'TNK'): Fraction('0.8'),\n",
" ('N', 'TNK'): Fraction('0.2'),\n",
" ('Y', 'HNK'): Fraction('0.2'),\n",
" ('N', 'HNK'): Fraction('0.8'),\n",
" # Second action: these are empty actions,\n",
" # needed to get the history to length 2\n",
" ('Y', 'TYC'): Fraction('0.5'),\n",
" ('N', 'TYC'): Fraction('0.5'),\n",
" ('Y', 'HYC'): Fraction('0.5'),\n",
" ('N', 'HYC'): Fraction('0.5'),\n",
" ('Y', 'TNS'): Fraction('0.5'),\n",
" ('N', 'TNS'): Fraction('0.5'),\n",
" # Second percept: payoff from petting\n",
" ('s', 'TNKY'): 1,\n",
" ('S', 'TNKN'): 1,\n",
" ('P', 'HNKY'): 1,\n",
" ('0', 'HNKN'): 1,\n",
"})\n",
"\n",
"# There are four possible policies:\n",
"policies = [\n",
" ({'': 'Y', 'YC': 'N'}, \"see doctor \"),\n",
" ({'': 'N', 'NK': 'Y', 'NS': 'N'}, \"don't see doctor, pet \"),\n",
" ({'': 'N', 'NK': 'N', 'NS': 'N'}, \"don't see doctor, don't pet\"),\n",
"]\n",
"\n",
"print_policy_values(policies)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## References\n",
"\n",
"* Judea Pearl. [Causality: Models, Reasoning, and Inference](https://books.google.de/books/about/Causality.html?id=wnGU_TsW3BQC). Cambridge University Press, 2nd edition, 2009.\n",
"* Tom Everitt, Jan Leike, and Marcus Hutter.\n",
" [Sequential Extensions of Causal and Evidential Decision Theory](http://users.cecs.anu.edu.au/~leike/publications/Sequential Extensions of Causal and Evidential Decision Theory - Everitt, Leike, Hutter 2015.pdf).\n",
" Algorithmic Decision Theory, 2015."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.0"
}
},
"nbformat": 4,
"nbformat_minor": 0
}