Intelligent Agent Foundations Forum
by Wei Dai 152 days ago

Thank you for writing this. I’m trying to better understand Paul’s ideas, and it really helps to see an explanation from a different perspective. Also, I was thinking of publicly complaining that I know at least four people who have objections to Paul’s approach that they haven’t published anywhere. Now that’s down to three. :)

I wonder if you can help answer some questions for me. (I’m directing these at Paul too, but since he’s very busy I can’t always expect an answer.)

Why does Paul think that learning needs to be “aligned”, as opposed to just well-understood and well-behaved, so that it can be safely used as part of a larger aligned AI design that includes search, logic, etc.? He seems to be trying to design an entire aligned AI out of “learning”, which makes it seem like his approach is an alternative to MIRI’s (Daniel Dewey said this recently on the EA Forum, for example), while at the same time he says, “But we can and should try to do the same for other AI components; I understand MIRI’s agent foundations agenda as (mostly) addressing the alignment of these other elements.” If he actually thinks that his approach and MIRI’s are complements, why didn’t he correct Daniel? I’m pretty confused here.

ETA: I found a partial answer to the above here. To express my understanding of it: Paul is trying to build an aligned AI out of only learning because that seems easier than building a realistic aligned AI, and may give him insights into how to do the latter. If he interprets MIRI as doing the analogous thing starting with other AI components (as he seems to, according to the quote in the above paragraph), then he surely ought to view the two approaches as complementary, which makes it a bigger puzzle why he didn’t contradict Daniel when Daniel said “if an approach along these lines is successful, it doesn’t seem to me that much room would be left for HRAD to help on the margin”. (Maybe he didn’t read that part, or his interpretation of what MIRI is doing has changed?)

If Paul does not think ALBA is a realistic design of an entire aligned AI (since it doesn’t include search/logic/etc.), what might a realistic design look like, roughly?

Why does Paul think learning “poses much harder safety problems than other AI techniques under discussion”?

Paul is beginning to do empirical work on capability amplification (as he told me recently via email). Do you think that’s a good alternative to trying to make further theoretical progress?



by Paul Christiano 151 days ago

> Why does Paul think that learning needs to be “aligned”, as opposed to just well-understood and well-behaved, so that it can be safely used as part of a larger aligned AI design that includes search, logic, etc.?

I mostly think it should be benign / corrigible / something like that. I think you’d need something like that whether you want to apply learning directly or to apply it as part of a larger system.

> If Paul does not think ALBA is a realistic design of an entire aligned AI (since it doesn’t include search/logic/etc.), what might a realistic design look like, roughly?

You can definitely make an entire AI out of learning alone (evolution / model-free RL), and I think that’s currently the single most likely possibility, even though it’s not particularly likely in absolute terms.

The alternative design would integrate whatever other useful techniques are turned up by the community, which will depend on what those techniques are. One possibility is search/planning. This can be integrated in a straightforward way into ALBA; I think the main obstacle is security amplification, which needs to work for ALBA anyway and is closely related to empirical work on capability amplification. On the logic side it’s harder to say what a useful technique would look like other than “run your agent for a while,” which you can also do with ALBA (though it requires something like these ideas).
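
To make the shape of this concrete, here is a minimal sketch (a toy illustration, not a specification of ALBA) of an amplify-distill loop in which the amplified overseer can invoke a planning subroutine alongside the learned agent. The names (`Agent`, `amplify`, `plan`, `human_overseer`) are hypothetical placeholders, and both the learning step and the planner are stubbed out.

```python
# Toy illustration of an amplify-distill loop where the amplified overseer
# can call a planning subroutine. Not a specification of ALBA; all names
# and behaviours here are hypothetical stand-ins.

class Agent:
    """Stand-in for a learned policy; 'learning' is rote memorization."""

    def __init__(self):
        self.table = {}

    def act(self, query):
        # Unseen queries get a default answer.
        return self.table.get(query, "unknown")

    def distill(self, demonstrations):
        # In practice this would be a training step (imitation, RL, ...);
        # here we just record the overseer's demonstrated answers.
        self.table.update(demonstrations)


def plan(query, helper):
    """Toy stand-in for search/planning: enumerate a few candidate plans and
    keep one the helper agent already knows something about, if any."""
    candidates = [f"{query}::plan{i}" for i in range(3)]
    known = [c for c in candidates if helper.act(c) != "unknown"]
    return known[0] if known else candidates[0]


def human_overseer(query, subanswers, chosen_plan):
    """Stand-in for the trusted overseer's judgment."""
    return f"answer({query}) via {chosen_plan} given {subanswers}"


def amplify(overseer, agent, query):
    """Amplified overseer: the overseer answers with help from several calls
    to the current agent plus an explicit planning step."""
    subanswers = [agent.act(f"{query}/part{i}") for i in range(2)]
    return overseer(query, subanswers, plan(query, agent))


# Each round: amplify the current agent, then distill the amplified
# behaviour back into a fast agent that is used in the next round.
agent = Agent()
for round_idx in range(3):
    queries = [f"task{round_idx}-{j}" for j in range(4)]
    demonstrations = {q: amplify(human_overseer, agent, q) for q in queries}
    agent.distill(demonstrations)

print(agent.act("task2-0"))
```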

> which makes it seem like his approach is an alternative to MIRI’s

My hope is to have safe and safely composable versions of each important AI ingredient. I would caricature the implicit MIRI view as “learning will lead to doom, so we need to develop an alternative approach that isn’t doomed,” which is a substitute in the sense that it’s also trying to route around the apparent doomedness of learning but in a quite different way.


by Wei Dai 151 days ago

Thanks. So, to paraphrase your current position: you think that once we have aligned learning, it doesn’t seem as hard to integrate other AI components into the design, so aligning learning seems to be the hardest part. MIRI’s work might help with aligning other AI components and integrating them into something like ALBA, but you don’t see that as very hard anyway, so it perhaps has more value as a substitute than a complement. Is that about right?

> One possibility is search/planning. This can be integrated in a straightforward way into ALBA

I don’t understand ALBA well enough to easily see extensions to the idea that are obvious to you, and I’m guessing others may be in a similar situation. (I’m guessing Jessica didn’t see it, for example, or she wouldn’t have said “ALBA competes with adversaries who use only learning” without noting that there’s a straightforward extension that does more.) Can you write a post about this? (Or someone else, please jump in if you do see what the “straightforward way” is.)
