Intelligent Agent Foundations Forum
Logical Updatelessness as a Subagent Alignment Problem
post by Scott Garrabrant 28 days ago

(Cross-posted on Less Wrong)

There is a cluster of problems in understanding naturalized agency which I call Subagent Alignment. It refers to the way in which a limited agent might direct a more powerful agent to do its bidding. The limited agent starts out in control because it gets to design the more powerful agent, but it is not easy to retain that power. In most applications the limited agent is a human and the powerful agent is an AI.

Examples of problems in this subfield include corrigibility, value learning, and informed oversight.

Logical Updatelessness is one of the central open problems in decision theory. An updateless agent is one that does not update on its observations; instead, it chooses in advance what action it wants itself to output upon receiving each possible observation. It can use this to get counterfactually mugged (which is a good thing). This is basically the only way we know how to make a reflectively stable agent.
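
As a concrete illustration (not from the post), here is a minimal sketch of counterfactual mugging with the usual illustrative payoffs: Omega flips a fair coin, asks the agent for $100 on tails, and on heads pays $10,000 only if it predicts the agent would have paid on tails. An updateless agent scores whole policies by expected utility before seeing the coin, so it commits to paying; an agent that first updates on seeing tails would refuse, since paying only helps in the branch it no longer believes it is in.

```python
# Minimal sketch (payoffs and the 50/50 coin are illustrative, not from the post)
# of why an updateless agent pays up in counterfactual mugging.

def expected_utility(pays_on_tails: bool) -> float:
    heads_payoff = 10_000 if pays_on_tails else 0   # Omega rewards predicted payers
    tails_payoff = -100 if pays_on_tails else 0     # paying costs $100 on tails
    return 0.5 * heads_payoff + 0.5 * tails_payoff

# The updateless agent picks its policy *before* observing the coin,
# so it evaluates both branches and commits to paying.
best_policy = max([True, False], key=expected_utility)
print(best_policy)              # True: pay on tails
print(expected_utility(True))   # 4950.0
print(expected_utility(False))  # 0.0
```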

The problem is that we do not know how to translate updatelessness into a logical setting. In the toy models in which we can define updatelessness, the agent is logically omniscient and just makes empirical observations. Thus, it starts out just as powerful as it will ever be. In fact, we can view it as though the only agent is the agent existing at the beginning of time. That agent chooses a policy, a function from inputs to outputs, and all the future agents can just blindly follow that policy without thinking.
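
Here is a hedged sketch of that toy picture, with invented observations, actions, and utilities: the agent at the beginning of time enumerates every policy (a lookup table from observations to actions), scores each one, and later agents just read the table without doing any further reasoning.

```python
# Hedged sketch of "empirical" updatelessness: everything named here
# (observations, actions, the utility function) is an invented placeholder.
from itertools import product

OBSERVATIONS = ["rain", "sun"]
ACTIONS = ["umbrella", "no_umbrella"]

def utility(policy: dict) -> float:
    # Placeholder utility: being prepared for rain matters more than
    # travelling light in the sun.
    score = 2.0 if policy["rain"] == "umbrella" else 0.0
    score += 1.0 if policy["sun"] == "no_umbrella" else 0.0
    return score

# The logically omniscient early agent enumerates all |ACTIONS|^|OBSERVATIONS|
# policies and picks the best one, once and for all.
policies = [dict(zip(OBSERVATIONS, acts))
            for acts in product(ACTIONS, repeat=len(OBSERVATIONS))]
chosen = max(policies, key=utility)

# A future agent needs no further thinking: it just follows the table.
def act(observation: str) -> str:
    return chosen[observation]

print(act("rain"))  # "umbrella"
```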

In the logical setting, the future agents are smarter than the past agents. They learn more logic in addition to making empirical observations. The agent at the beginning of time does not have the power to contain a policy mapping all the logical observations to actions. This agent does not choose a simple policy to follow in the future, but instead chooses a complicated program to run in the future.
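
Continuing the toy sketch above, the contrast in the logical setting is roughly this: instead of handing its future selves a finished lookup table, the early agent can only hand over a program whose outputs depend on logical facts the early agent never had the power to check. Everything named below is an illustrative placeholder, not machinery from the post.

```python
# Illustrative only: the early agent commits to a *program*, not a table.
# Its outputs depend on facts ("lemmas") the early agent could not prove,
# so the early agent cannot pre-verify what the program will do.

def committed_program(proven_lemmas: set, observation: str) -> str:
    # A smarter future agent evaluates this with lemmas the early agent
    # lacked; the early agent must trust that it optimizes on its behalf.
    if "storm_theorem" in proven_lemmas:
        return "umbrella"
    return "umbrella" if observation == "rain" else "no_umbrella"

# The same observation can lead to different actions depending on what
# the future agent has proven by then.
print(committed_program(set(), "sun"))              # "no_umbrella"
print(committed_program({"storm_theorem"}, "sun"))  # "umbrella"
```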

I claim that logical updatelessness is hard while empirical updatelessness is easy because an empirically updateless agent is deferring control to a simple policy that it understands, while a logically updateless agent has to defer control to a more powerful agent that it does not understand. Thus, logical updatelessness can be viewed as a subagent alignment problem.

If we think about this just in terms of reflective stability, a reflectively stable agent must never have a conflict between what its future self wants and what its past self wants. You can only achieve this by having a single agent at the center of control across all time. This center of control has to be simple enough that the agent can follow it in the early time steps, but the agent has to keep following it at the late time steps too. Thus, the agent at the late time steps has to be a powerful agent optimizing on behalf of this simple center of control.


