Intelligent Agent Foundations Forum
Stable agent, subagent-unstable
discussion post by Stuart Armstrong 18 days ago

A putative new idea for AI control; index here.

A reflectively consistent agent is one that is ok with creating copies with the same motivation as itself, or continuing to have the same motivation under self-modification.

A reflectively stable agent is one that would only create agents with the same motivation as itself (unless rewarded for doing otherwise), and would actively want to preserve its motivation under self-modification.

Here is a design that is reflectively stable for its own self-modification, but not even reflectively consistent for copies and subagents.

Subtracting the value function

Let \(h_t\) be the history of the agent’s interaction with the environment up to time \(t\).

Let the agent \(A\) have policy \(\pi_A\). Let \(V(R,\pi, h_t)\) be the expected value of the (future) reward \(R\) under policy \(\pi\), given history \(h_t\). Let \(\pi_R\) be the policy that maximises this value.

Initially, the agent has no reward function. Then, at time \(t\), the agent gets reward function \(R\), together with the penalty:

  • \(-c V(R, \pi_A, h_t)\).

For \(c<1\), this change of reward is reflectively stable (the agent will choose \(\pi_A=\pi_R\) to get maximal expected reward \((1-c)V(R,\pi_R,h_t)\)); for \(c=1\), it is reflectively consistent (any policy \(\pi_A\) gives the same expected reward of \(0\)); and for \(c>1\), it is neither.
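
As a rough illustration (not from the original post), here is a minimal Python sketch of the three regimes, using made-up values of \(V\) for a few hypothetical policies:

    # Toy sketch with made-up numbers: the agent collects V(R, pi_A, h_t) in
    # expectation and pays the penalty c * V(R, pi_A, h_t), so its net
    # expected reward is (1 - c) * V(R, pi_A, h_t).

    # Hypothetical expected values V(R, pi, h_t) for a few candidate policies.
    V = {"pi_R": 10.0, "pi_mediocre": 4.0, "pi_do_nothing": 0.0}

    def net_reward(policy, c):
        """Net expected reward under the penalty -c * V(R, pi_A, h_t)."""
        return (1.0 - c) * V[policy]

    for c in (0.5, 1.0, 1.5):
        best = max(V, key=lambda p: net_reward(p, c))
        print(c, best, {p: net_reward(p, c) for p in V})
    # c < 1: pi_R is optimal (stable); c = 1: every policy nets 0 (consistent);
    # c > 1: the agent prefers the policy with the smallest V (neither).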

But instead, imagine the agent gets the penalty:

  • \(-c V(R, \pi_R, h_t)\).

In this case, the agent is reflectively stable for all values of \(c\)! That’s because the penalty term is independent of its policy choice, so it will choose \(\pi_A=\pi_R\) after \(t\) to maximise the reward it does get from \(R\).
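
A quick sketch of why this case differs (again with made-up numbers): the penalty is now a constant offset with respect to the agent's own post-\(t\) policy choice, so maximising its net reward always means choosing \(\pi_R\):

    # Toy sketch: with the penalty -c * V(R, pi_R, h_t), the subtracted term
    # does not depend on which policy pi_A the agent actually follows after t.

    V = {"pi_R": 10.0, "pi_mediocre": 4.0, "pi_do_nothing": 0.0}

    def net_reward(policy, c):
        # Collected reward depends on pi_A; penalty is fixed at c * V(R, pi_R, h_t).
        return V[policy] - c * V["pi_R"]

    for c in (0.5, 1.0, 1.5, 10.0):
        best = max(V, key=lambda p: net_reward(p, c))
        print(c, best)
    # pi_R is best for every c, so the agent keeps maximising R after t.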

No subagent stability

Using \(\pi_R\) in the reward may make the agent itself reflectively stable, but if \(c>1\), it will not copy the same goal into its subagents.

This is easy to see: the agent’s total expected reward is \((1-c)V(R, \pi_R, h_t)\), so for \(c>1\) its best policy is to minimise \(V(R, \pi_R, h_t)\). It will do this by acting so that \(h_t\) is a history that minimises that value.

For instance, it might create subagents whose task is to actively restrain the agent after \(t\). Then it will struggle against them after \(t\) (because it now wants to maximise the reward \(R\)), but will set them up to be as powerful as possible before that, because before \(t\), it wants its own future struggles to fail.
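
This incentive can be sketched numerically (hypothetical values again): the total expected reward is \((1-c)V(R,\pi_R,h_t)\), so for \(c>1\) the agent prefers whichever pre-\(t\) behaviour leaves even the optimal post-\(t\) policy able to achieve as little as possible:

    # Toy sketch: before t the agent chooses how to shape the history h_t;
    # after t it follows pi_R regardless, so its total expected reward is
    # (1 - c) * V(R, pi_R, h_t).

    # Hypothetical values of V(R, pi_R, h_t) for two pre-t behaviours.
    V_given_history = {
        "do_nothing_before_t": 10.0,
        "build_restraining_subagents": 1.0,
    }

    def total_reward(history, c):
        return (1.0 - c) * V_given_history[history]

    for c in (0.5, 2.0):
        best = max(V_given_history, key=lambda h: total_reward(h, c))
        print(c, best)
    # c = 0.5: leave itself unhindered; c = 2: sabotage its own future success.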

So although this design is technically reflectively stable, it is not a good sort of stability.

Not ‘real’ stability

It should be noted that the agent’s “stability” is an artefact of the fact that \(\pi_R\) is defined to be “the best policy that the agent can follow to maximise \(R\)”.

“The agent” is not an ontologically fundamental object, so this stability is only as good as our definition of the agent (just as we could make the whole setup subagent-stable, if only we could define subagents - which we can’t really do).


