Intelligent Agent Foundations Forum
ALBA requires incremental design of good long-term memory systems
discussion post by Jessica Taylor 749 days ago

Summary: ALBA is an approach to training aligned AGI. One problem with implementing ALBA is that some actions that must be overseen involve storing information that will be used in future episodes. To oversee these actions, it is necessary for the operators to know which information is worth storing.


(thanks to Ryan for helping me work through some of these ideas)

Recommended before reading this post: Not just learning

ALBA requires the operator to oversee a learning system’s performance in each episode (say, by assigning a score). Usually, the operator would like information to be stored between episodes; for example, they might want to store a photo from the robot’s sensors, or they might want to run a large computation that takes more than one episode to perform. Under ALBA, the operator must be able to provide good feedback about how useful a certain piece of information is to store.

In the case of physical observations (e.g. photos), it usually seems fine to just store everything. But in the case of “logical” information such as the results of computations, it isn’t possible to store everything, since that would require running all computations. So the operators will need some idea of which logical information is most useful to store (i.e. which computations are most useful to run now and cache the result of).
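The caching tradeoff above can be sketched as a budgeted selection problem. This is a hypothetical illustration, not part of ALBA: the candidate computations, their costs, and the usefulness scores are made-up placeholders standing in for the operators' judgments about which results are worth caching.

```python
# Hypothetical sketch: with a limited compute budget we cannot cache every
# "logical" result, so we greedily run and cache the computations with the
# best estimated usefulness per unit cost. All names and numbers here are
# illustrative placeholders, not from the post.

def select_computations(candidates, budget):
    """candidates: list of (name, cost, estimated_usefulness) tuples.
    Greedily pick results to cache by usefulness-per-cost ratio."""
    ranked = sorted(candidates, key=lambda c: c[2] / c[1], reverse=True)
    cached, spent = [], 0
    for name, cost, usefulness in ranked:
        if spent + cost <= budget:
            cached.append(name)
            spent += cost
    return cached

candidates = [
    ("physics_sim", 10, 8.0),   # expensive but very useful
    ("lookup_table", 1, 0.5),   # cheap, mildly useful
    ("full_search", 100, 9.0),  # useful, but exceeds the budget
]
print(select_computations(candidates, budget=12))
# → ['physics_sim', 'lookup_table']
```

The greedy ratio rule is only a stand-in; the hard part the post identifies is producing the usefulness estimates in the first place.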

How much is long-term memory tied to cognitive architecture?

One would hope that this is possible for the operators to do without essentially already understanding how to program an aligned AGI. Perhaps humans can make pretty good guesses about which information is useful to store without knowing much about the underlying learning system.

In the worst case, though, the optimal logical information for an AGI system to store in the long term depends strongly on its cognitive architecture. For example, if two humans study for the same test, they will probably read different material and do different cognitive work along the way; if one somehow had access to the other’s cognitive work, it probably wouldn’t be very useful, since it would consist of memories in a “different mental language”. At the same time, humans do manage to build pretty good collective memories over the long term (e.g. in the form of books), although books are substantially less efficient than personal notes because they must be understood by more than one human.

Under uncertainty about the right cognitive architecture for the AGI system to use over the long term, we could store, for each candidate architecture, the information that architecture thinks is useful. If we are uncertain between \(k\) different architectures, this multiplies the cost of long-term thinking and memory by at most a factor of \(k\). Hopefully, if our uncertainty about the right architecture has structure, we can do much less work while still mostly satisfying each architecture.
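The factor-of-\(k\) bound can be made concrete with a small sketch. The architecture names and "wishlists" below are invented placeholders; the point is just that the union of the wishlists is never more than \(k\) times the largest single one, and overlap (structure in our uncertainty) makes it smaller.

```python
# Sketch of the union-over-architectures bound: if we are uncertain between
# k architectures and store everything any of them wants, storage cost is at
# most k times the cost of serving a single architecture. Wishlists here are
# illustrative placeholders.

wishlists = {
    "arch_a": {"world_model", "proof_cache"},
    "arch_b": {"world_model", "episodic_log"},
    "arch_c": {"proof_cache", "episodic_log", "world_model"},
}

union = set().union(*wishlists.values())
max_single = max(len(s) for s in wishlists.values())
k = len(wishlists)

# The union is bounded by k times the largest single wishlist; shared items
# make it much smaller in practice.
assert len(union) <= k * max_single
print(len(union), k * max_single)  # → 3 9
```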

This problem can be solved incrementally

At the start, the operators must decide which information fairly weak learning systems should store. Later in the bootstrapping process, the operators (assisted by the weak learning system they just trained) must decide which information stronger learning systems should store. Thus, the operators don’t need to know from the start which information is good to store for strong learning systems. This is definitely a good reason for optimism: optimism about bootstrapping in general should translate into optimism about having the bootstrapping process decide which information is good to store.

Still, this seems to me like one of the “most magic” parts of ALBA. If ALBA is hiding most of the alignment problem somewhere, “figuring out which information to store between episodes” is a pretty good guess.

Solving this problem in full generality is hard

To solve this problem in full generality, we would have to have a good guess ahead of time (before we can get much feedback) about what kind of computations we will want to run in the future. Intuitively, this seems at least as hard as “having a good physical/logical prior”. If we’re using an indirect method like ALBA, then we’re probably already pessimistic about specifying good physical/logical priors in any way other than deferring to assisted humans. In this case we should not expect to solve the full problem before actually having access to AGI (although of course we could still make progress in understanding aspects of it ahead of time).

What about “learning to learn”?

In some cases, it is possible to get feedback about which information should be kept around. For example, in one paper, researchers trained a deep learning system to propose updates to a neural network’s parameters that improve its performance. This is a special case of “figuring out which information should be stored between episodes”.

I don’t think “learning to learn” changes the nature of the problem. When training the parameter-updater, it is only possible to use information from a limited horizon (say, \(k\)) while preserving good statistical guarantees. So we might as well group episodes into blocks of size \(k\), and then consider a modified learning problem where each episode is a block of the original problem. At this point, the humans must oversee \(1/k\) as many episodes as they did originally, but \(k\) times as much work is being done in each episode.
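The blocking construction above is mechanically simple; here is a minimal sketch of it, with episodes represented abstractly as list items. Grouping \(T\) episodes into blocks of size \(k\) yields \(T/k\) blocked episodes, each containing \(k\) times as much work.

```python
# Minimal sketch of the blocking construction: group the original episodes
# into consecutive blocks of size k, and treat each block as one episode of
# a modified learning problem.

def block_episodes(episodes, k):
    """Group a list of episodes into consecutive blocks of size k."""
    return [episodes[i:i + k] for i in range(0, len(episodes), k)]

episodes = list(range(12))        # 12 original episodes
blocks = block_episodes(episodes, k=4)
print(len(blocks))                # → 3 blocked episodes
print(blocks[0])                  # → [0, 1, 2, 3]
```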

Reducing the number of episodes seems to make the problem somewhat easier (since there are fewer episodes to store information between), and “learning to learn” provides an intuition about why it might be possible to get good performance in these larger episodes.

I expect it will be slightly harder to oversee a few large episodes than lots of small ones. As a general rule, to get good performance on \(T\) episodes, the operator must oversee \(O(T^{2/3})\) of them (see regret bounds for partial monitoring; I’m setting the amount of oversight so that oversight work and other forms of regret are of similar magnitudes). Let’s assume that oversight time per episode is roughly proportional to the length of the episode. So if we block episodes into groups of size \(k\), the oversight time is now \(O(k \cdot (T/k)^{2/3}) = O(k^{1/3} T^{2/3})\). Therefore, this blocking multiplies the amount of oversight work required by a factor of \(k^{1/3}\).
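The \(k^{1/3}\) factor can be checked numerically. The values of \(T\) and \(k\) below are arbitrary; the identity \(k \cdot (T/k)^{2/3} = k^{1/3} T^{2/3}\) holds for any positive choices.

```python
# Numerical check of the oversight scaling: with T episodes the operator
# oversees O(T^(2/3)) of them; after blocking into groups of size k, each
# blocked episode takes ~k times as long to oversee, giving
# k * (T/k)^(2/3) = k^(1/3) * T^(2/3) total oversight time.

T, k = 1_000_000, 8

unblocked = T ** (2 / 3)             # oversight time before blocking
blocked = k * (T / k) ** (2 / 3)     # oversight time after blocking

# The ratio is k^(1/3): here 8^(1/3) = 2, so blocking doubles oversight work.
assert abs(blocked / unblocked - k ** (1 / 3)) < 1e-9
print(blocked / unblocked)  # ≈ 2.0
```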

(note that when introducing a blocking structure, it’s important not to unintentionally introduce incentives to manipulate future episodes in the same block. For example, if each original episode consists of answering a single question, and the answer to one episode can affect which question is asked in the next one, then the learning system might give manipulative answers that cause easier questions to be asked in the future. To avert this incentive, it is necessary to fix the questions ahead of time, or to use a different training procedure to explicitly minimize regret compared with alternative policies running on the same list of questions).



by Paul Christiano 547 days ago | link

Note that we don’t need to compete with any kind of “best possible” memory system. We need to:

  1. Compete with the benchmark AI.
  2. Remain corrigible.

For [1] the plan is to co-opt whatever mechanism the benchmark AI uses to decide what information to store. This involves some subtleties, since we need to construct a benign analog, but it doesn’t require anything like knowing what computations we’d want to run in the future or which cognitive architecture we will adopt in the future.

For [2] the demands on memory seem quite weak. Probably we could get by with no memory at all.



