Intelligent Agent Foundations Forum

From my perspective, I don’t think it’s been adequately established that we should prefer updateless CDT to updateless EDT

I agree with this.

It would be nice to have an example which doesn’t arise from an obviously bad agent design, but I don’t have one.

I’d also be interested in finding such a problem.

I am not sure whether your smoking lesion steelman actually makes a decisive case against evidential decision theory. If an agent knows about their utility function on some level, but not on the epistemic level, then this can just as well be made into a counter-example to causal decision theory. For example, consider a decision problem with the following payoff matrix:

Smoke-lover:

  • Smokes:
    • Killed: 10
    • Not killed: -90
  • Doesn’t smoke:
    • Killed: 0
    • Not killed: 0

Non-smoke-lover:

  • Smokes:
    • Killed: -100
    • Not killed: -100
  • Doesn’t smoke:
    • Killed: 0
    • Not killed: 0

For some reason, the agent doesn’t care whether they live or die. Also, let’s say that smoking makes a smoke-lover happy, but afterwards, they get terribly sick and lose 100 utilons. So they would only smoke if they knew they were going to be killed afterwards. The non-smoke-lover doesn’t want to smoke in any case.

Now, smoke-loving evidential decision theorists rightly choose smoking: they know that robots with a non-smoke-loving utility function would never have any reason to smoke, no matter which probabilities they assign. So if they end up smoking, then this means they are certainly smoke-lovers. It follows that they will be killed, and conditional on that state, smoking gives 10 more utility than not smoking.

Causal decision theory, on the other hand, seems to recommend a suboptimal action. Let \(a_1\) be smoking, \(a_2\) not smoking, \(S_1\) being a smoke-lover, and \(S_2\) being a non-smoke-lover. Moreover, say the prior probability \(P(S_1)\) is \(0.5\). Then, for a smoke-loving CDT bot, the expected utility of smoking is just

\(\mathbb{E}[U|a_1]=P(S_1)\cdot U(S_1\wedge a_1)+P(S_2)\cdot U(S_2\wedge a_1)=0.5\cdot 10 + 0.5\cdot (-90) = -40\),

which is less than the certain \(0\) utilons for \(a_2\). Assigning a credence of around \(1\) to \(P(S_1|a_1)\), a smoke-loving EDT bot calculates

\(\mathbb{E}[U|a_1]=P(S_1|a_1)\cdot U(S_1\wedge a_1)+P(S_2|a_1)\cdot U(S_2\wedge a_1)\approx 1 \cdot 10 + 0\cdot (-90) = 10\),

which is higher than the expected utility of \(a_2\).
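
For concreteness, both calculations can be written as a small code sketch (the state labels and the \(0.999\) stand-in for the “around 1” credence are just illustrative):

    # Both calculations above, for the smoke-loving agent.
    # Being killed happens exactly when the agent is a smoke-lover (S1),
    # so the smoke-lover's utility for each (type, action) pair is:
    U1 = {
        ("S1", "smoke"): 10,      # killed afterwards, but enjoyed smoking
        ("S2", "smoke"): -90,     # not killed, gets terribly sick
        ("S1", "no_smoke"): 0,
        ("S2", "no_smoke"): 0,
    }

    prior = {"S1": 0.5, "S2": 0.5}                 # P(S_i)
    posterior_smoke = {"S1": 0.999, "S2": 0.001}   # P(S_i | a_1), roughly 1

    def expected_utility(probs, action):
        return sum(p * U1[(s, action)] for s, p in probs.items())

    # CDT treats the action as carrying no news about the type, so it uses the prior:
    print(expected_utility(prior, "smoke"))        # 0.5*10 + 0.5*(-90) = -40.0
    print(expected_utility(prior, "no_smoke"))     # 0.0

    # EDT conditions on the action taken; conditioning on smoking makes S1 near-certain:
    print(expected_utility(posterior_smoke, "smoke"))   # ~ 1*10 + 0*(-90) = 10
    # Not smoking gives 0 in every state, so E[U | a_2] = 0 whatever P(S_i | a_2) is.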

The reason CDT fails here doesn’t seem to lie in a mistaken causal structure. Also, I’m not sure whether the problem for EDT in the smoking lesion steelman is really that it can’t condition on all its inputs. If EDT can’t condition on something, then EDT doesn’t account for this information, but this doesn’t seem to be a problem per se.

In my opinion, the problem lies in an inconsistency in the expected utility equations. Smoke-loving EDT bots calculate the probability of being a non-smoke-lover, but then the utility they get is actually the one from being a smoke-lover. For this reason, they can get some “back-handed” information about their own utility function from their actions. The agents basically fail to condition two factors of the same product on the same knowledge.
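
Written out with explicit subscripts, the smoke-loving bot's calculation from above is

\(\mathbb{E}[U_1|a_1]=P(S_1|a_1)\cdot U_1(S_1\wedge a_1)+P(S_2|a_1)\cdot U_1(S_2\wedge a_1)\),

where the probabilities concern which utility function the agent has, while both utility terms are evaluated with the smoke-lover's \(U_1\).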

Say we don’t know our own utility function on an epistemic level. Ordinarily, we would calculate the expected utility of an action, both as smoke-lovers and as non-smoke-lovers, as follows:

\(\mathbb{E}[U|a]=P(S_1|a)\cdot \mathbb{E}[U|S_1, a]+P(S_2|a)\cdot \mathbb{E}[U|S_2, a]\),

where, if \(U_{1}\) (\(U_{2}\)) is the utility function of a smoke-lover (non-smoke-lover), \(\mathbb{E}[U|S_i, a]\) is equal to \(\mathbb{E}[U_{i}|a]\). In this case, we don’t get any information about our utility function from our own action, and hence, no Newcomb-like problem arises.
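
As a quick sketch of this calculation (assuming, per the above, that the action now carries no information about the type, so \(P(S_i|a)\) is just the \(0.5\) prior):

    # The consistent calculation: E[U | S_i, a] uses type i's own utility
    # function U_i, and (since the action is uninformative about the type)
    # P(S_i | a) is just the 0.5 prior.
    P = {"S1": 0.5, "S2": 0.5}
    EU_given_type = {
        ("S1", "smoke"): 10,      # smoke-lover: killed, enjoyed smoking
        ("S2", "smoke"): -100,    # non-smoke-lover: not killed, hates smoking
        ("S1", "no_smoke"): 0,
        ("S2", "no_smoke"): 0,
    }

    for a in ("smoke", "no_smoke"):
        print(a, sum(P[s] * EU_given_type[(s, a)] for s in P))   # -45.0 and 0.0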

I’m unsure whether there is any causal decision theory derivative that gets my case (or all other possible cases in this setting) right. It seems like as long as the agent isn’t certain to be a smoke-lover from the start, there are still payoffs for which CDT would (wrongly) choose not to smoke.
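
For instance, with prior \(p=P(S_1)\) and the \(-90\) entry generalized to a sickness penalty of \(-c\) (my parameterization, not part of the original setup), the smoke-loving CDT bot smokes only if

\(p\cdot 10+(1-p)\cdot(-c)>0\), i.e. \(c<\frac{10\,p}{1-p}\),

so for any prior \(p<1\) one can pick \(c\) large enough that CDT refuses to smoke, while the EDT calculation above stays at approximately \(10\) and is unaffected by \(c\).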



by Abram Demski

Excellent example.

It seems to me, intuitively, that we should be able to get both the CDT feature of not thinking we can control our utility function through our actions and the EDT feature of taking the information into account.

Here’s a somewhat contrived decision theory which I think captures both effects. It only makes sense for binary decisions.

First, you compute the posterior probability of the causal parents conditional on each action. So, depending on the details of the setup, smoking tells you that you’re likely to be a smoke-lover, and refusing to smoke tells you that you’re more likely to be a non-smoke-lover.

Then you take the action with the better “gain”: the amount by which you do better than the other action, keeping the parent probabilities the same:

\[\texttt{Gain}(a) = \mathbb{E}(U|a) - \mathbb{E}(U|a, \texttt{do}(\bar a))\]

(\(\mathbb{E}(U|a, \texttt{do}(\bar a))\) stands for the expected utility you get by first Bayes-conditioning on \(a\), then causally conditioning on its opposite.)
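
Here is a minimal code sketch of that rule (the helper names are mine, and the posterior over the causal parent for each action is taken as given rather than derived):

    def expected_utility(posterior, utility, action):
        # E[U | evidence, do(action)]: average over the causal parent (the
        # utility-function type) with the posterior held fixed and the
        # action set by intervention.
        return sum(p * utility[(parent, action)] for parent, p in posterior.items())

    def gain(action, alternative, posterior_given, utility):
        # Gain(a) = E[U | a] - E[U | a, do(not-a)]: both terms use the
        # posterior obtained by conditioning on a.
        post = posterior_given[action]
        return (expected_utility(post, utility, action)
                - expected_utility(post, utility, alternative))

    # With the smoke-lover's payoffs from the comment above and smoking taken
    # as near-certain evidence of being a smoke-lover (S1), the gain of
    # smoking is 10 - 0 = +10:
    U1 = {("S1", "smoke"): 10, ("S2", "smoke"): -90,
          ("S1", "no_smoke"): 0, ("S2", "no_smoke"): 0}
    print(gain("smoke", "no_smoke", {"smoke": {"S1": 1.0, "S2": 0.0}}, U1))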

The idea is that you only want to compare each action to the relevant alternative. If you were to smoke, it means that you’re probably a smoke-lover; you will likely be killed, but the relevant alternative is one where you’re also killed. In my scenario, the gain of smoking is +10. On the other hand, if you decide not to smoke, you’re probably not a smoke-lover. That means the relevant alternative is smoking without being killed. In my scenario, the smoke-lover computes the gain of this action as -10. Therefore, the smoke-lover smokes.

(This only really shows the consistency of an equilibrium where the smoke-lover smokes: my argument contains the unjustified assumption that smoking is good evidence for being a smoke-lover and that refusing to smoke is good evidence for not being one, which is itself only justified circularly by the conclusion.)

In your scenario, the smoke-lover computes the gain of smoking at +10, and the gain of not smoking at 0. So, again, the smoke-lover smokes.

The solution seems too ad hoc to really be right, but it does appear to capture something about the kind of reasoning required to do well on both problems.



