Counterfactually uninfluenceable agents
post by Stuart Armstrong

A putative new idea for AI control; index here.

Techniques used to counter agents making biased decisions do not produce uninfluenceable agents.

However, using counterfactual tools, we can construct uninfluenceable \(\widehat{P}\) and \(P\), starting from biased and influenceable ones.


Why is uninfluenceability necessary? Well, an unbiased agent can still take actions such as ‘randomise their own reward (independent of human choice)’, as long as the choice of randomisation is unbiased. For instance, let \(\pi_0\) be some default policy, and let the tidying (\(R_0\)) versus cooking (\(R_1\)) agent currently consider both options to be equally likely. So, ultimately, the human will choose one or the other equally: \(\mathbb{E}_\mu^{\pi_0}P(R_0)=\mathbb{E}_\mu^{\pi_0}P(R_1)=1/2\). Then if the agent chooses to immediately randomise its own reward, with \(50\%\) probability for each option, this is an unbiased policy. And it is one the agent may prefer, because it will then know its own reward immediately, rather than waiting for the human to decide.
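To make the arithmetic explicit (writing \(\pi_r\) for this ‘randomise immediately’ policy, a label of mine rather than the post’s): the agent’s coin sets \(R_0\) or \(R_1\) with probability \(1/2\) each, so

\[ \mathbb{E}_\mu^{\pi_r}P(R_0) = \tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot 0 = \tfrac{1}{2} = \mathbb{E}_\mu^{\pi_0}P(R_0), \]

and likewise for \(R_1\). The expectations match those under the default policy, so the randomising policy counts as unbiased, even though it settles the reward by the agent’s coin rather than the human’s choice.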

As usual, the notation of this post is used.

The counterfactual distribution

Let \(\mu=(\mathcal{S}, \mathcal{A}, \mathcal{O}, T, O, T_0)\) be the environment, \(P\) a potentially biased and influenceable distribution, and \(\pi_0\) some default policy.

Then define the counterfactual reward learning distribution \(P^{\pi_0}_\mu\) on the reward \(R\), given history \(h_m\), as:

  • \(P^{\pi_0}_\mu(R \mid h_m)=\sum_{s\in\mathcal{S}} \mathbb{E}_{\mu}^{\pi_0}[P(R) \mid s_0=s] \mu(s_0=s \mid h_m)\).

Thus \(P^{\pi_0}_\mu(R \mid h_m)\) is estimated by using \(h_m\) to compute the probability that \(s_0=s\), for each \(s\), and then estimating the (counterfactual) expectation of \(P\) given \(s_0=s\) and the default policy \(\pi_0\).

It’s not hard to see that \(P^{\pi_0}_\mu\) is uninfluenceable: given knowledge of the initial state, it’s just \(\mathbb{E}_{\mu}^{\pi_0}[P(\cdot) \mid s_0=s]\), which is independent of the agent’s actions. The learning process is simply about figuring out what the initial state \(s_0\) was.
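As a minimal sketch of the defining sum (in Python, with illustrative names; it assumes the posterior \(\mu(s_0=s \mid h_m)\) and the counterfactual expectations \(\mathbb{E}_{\mu}^{\pi_0}[P(R) \mid s_0=s]\) have already been computed from the POMDP):

```python
def counterfactual_reward_distribution(posterior_s0, expected_P_given_s0, rewards):
    """P^{pi_0}_mu(R | h_m) = sum_s E^{pi_0}_mu[P(R) | s_0 = s] * mu(s_0 = s | h_m).

    posterior_s0:        dict mapping each initial state s to mu(s_0 = s | h_m)
    expected_P_given_s0: dict mapping each s to a dict R -> E^{pi_0}_mu[P(R) | s_0 = s]
    rewards:             the reward functions to report on
    """
    return {R: sum(posterior_s0[s] * expected_P_given_s0[s][R]
                   for s in posterior_s0)
            for R in rewards}
```

The history \(h_m\) enters only through the posterior over \(s_0\); the counterfactual expectations are fixed once \(\mu\) and \(\pi_0\) are, which is exactly why the result cannot be influenced by the agent’s actions.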

Dependence on POMDP structure

Note that if two environments are counterfactually equivalent, they can still generate different \(P^{\pi_0}_\mu\) even for the same \(P\) and \(\pi_0\).

This can be seen immediately from examples \(\mu\) and \(\mu''\) from this post, which are counterfactually equivalent.

In that setting, \(a^w\) corresponds to waiting for a human to decide between \(R_0\) and \(R_1\), while \(a^r\) corresponds to the agent randomising their own reward.

Set \(P(R_i \mid s_0a^w s^{wi})=1\) and \(P(R_i \mid s_0a^r s^{ri})=1\) (the final state determines \(R_i\)), and let the default policy be \(\pi_0=a^w\), the wait action.

Since \(\mathbb{E}_{\mu}^{\pi_0}[P(R_i)]=0.5\) and \(\mu\) has only one possible state for \(s_0\), \(P_{\mu}^{\pi_0}(R_i)=0.5\) is the only possibility.

In contrast, \(\mathbb{E}_{\mu''}^{\pi_0}[P(R_i) \mid s_0=s_0^i]=1\), so \(P_{\mu''}^{\pi_0}(R_i \mid s_0=s_0^i)=1\).

Thus \(P_{\mu}^{\pi_0}\neq P_{\mu''}^{\pi_0}\).
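To see the divergence numerically, here is a toy reconstruction of the comparison (the transition structure of \(\mu\) and \(\mu''\) is my reading of the linked post, so treat this as an illustration rather than a faithful implementation):

```python
# Under pi_0 = a^w (wait):
#  - mu has a single initial state s0, and the human then picks s^{w0} or s^{w1}
#    with probability 1/2 each, so E_mu[P(R_i) | s_0 = s0] = 0.5;
#  - mu'' has two initial states s0^0, s0^1 (prior 1/2 each), and from s0^i the
#    wait action leads deterministically to s^{wi}, so E_mu''[P(R_i) | s_0 = s0^i] = 1.

REWARDS = ("R0", "R1")

def counterfactual_P(posterior_s0, expected_P):
    """sum_s E^{pi_0}[P(R) | s_0 = s] * posterior(s_0 = s | h_m), for each R."""
    return {R: sum(posterior_s0[s] * expected_P[s][R] for s in posterior_s0)
            for R in REWARDS}

expected_P_mu  = {"s0":   {"R0": 0.5, "R1": 0.5}}
expected_P_mu2 = {"s0^0": {"R0": 1.0, "R1": 0.0},
                  "s0^1": {"R0": 0.0, "R1": 1.0}}

# mu: only one possible initial state, whatever the history h_m is
print(counterfactual_P({"s0": 1.0}, expected_P_mu))
# -> {'R0': 0.5, 'R1': 0.5}

# mu'': a history that pins down s_0 = s0^0 (e.g. the agent waited and saw s^{w0})
print(counterfactual_P({"s0^0": 1.0, "s0^1": 0.0}, expected_P_mu2))
# -> {'R0': 1.0, 'R1': 0.0}
```

The two environments generate the same distribution over histories under any policy, yet their counterfactual reward-learning distributions disagree as soon as a history pins down \(s_0\) in \(\mu''\).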

Interpretation

What could we use for \(\pi_0\)? There are two natural choices: a default where \(\pi_0\) does nothing (equivalent to the agent not being active or switched on), or \(\pi_0\) being a pure learning policy for \(P\). Since a pure learning process has no problem with bias or influence (it simply wants to learn), but could be dangerous if unconstrained, having this sort of ‘counterfactual learning’ might be a good idea (though be careful of the incentives that a badly defined pure learning process might have).

For example: imagine that the agent’s correct reward is what was written on a certain paper an hour ago. This is very clearly uninfluenceable: the agent simply needs to learn data that is already out in the universe. If instead the agent’s correct reward were what will be written on a certain paper in an hour, then it is clearly influenceable: the agent can simply write what it wants on that paper.

The counterfactual (for \(\pi_0=\)“do nothing”) is then simply ‘what would have been written on the paper, if the agent had done nothing’. If the agent can figure that out early, then it doesn’t care about the paper or the writing at all, except insofar as they provide counterfactual evidence.

Thus this model is equivalent to the old stratified agents.


