Intelligent Agent Foundations Forum
by David Krueger 347 days ago | link | parent

I don’t see this as being the case. As Vadim pointed out, we don’t even know what we mean by “aligned versions” of algos, ATM. So we wouldn’t know if we’re succeeding or failing (until it’s too late and we have a treacherous turn).

It looks to me like Wei Dai shares my views on “safety-performance trade-offs” (grep it here: http://graphitepublications.com/the-beginning-of-the-end-or-the-end-of-beginning-what-happens-when-ai-takes-over/).

I’d paraphrase what he’s said as:

“Orthogonality implies that alignment shouldn’t cost performance, but says nothing about the costs of ‘value loading’ (i.e. teaching an AI human values and verifying its value learning procedure and/or the values it has learned). Furthermore, value loading will probably be costly, because we don’t know how to do it, competitive dynamics make the opportunity cost of working on it large, and we don’t even have clear criteria for success.”

Which I emphatically agree with.



by Jessica Taylor 347 days ago | link

As Vadim pointed out, we don’t even know what we mean by “aligned versions” of algos, ATM. So we wouldn’t know if we’re succeeding or failing

If we fail to make the intuition about aligned versions of algorithms more crisp than it currently is, then it’ll be pretty clear that we failed. It seems reasonable to be skeptical that we can make our intuitions about “aligned versions of algorithms” crisp and then go on to design competitive and provably aligned versions of all AI algorithms in common use. But it does seem like we will know if we succeed at this task, and even before then we’ll have indications of progress such as success/failure at formalizing and solving scalable AI control in successively more complex toy environments. (It seems like I have intuitions about what would constitute progress that are hard to convey over text, so I would not be surprised if you aren’t convinced that it’s possible to measure progress.)

“Orthogonality implies that alignment shouldn’t cost performance, but says nothing about the costs of ‘value loading’ (i.e. teaching an AI human values and verifying its value learning procedure and/or the values it has learned). Furthermore, value loading will probably be costly, because we don’t know how to do it, competitive dynamics make the opportunity cost of working on it large, and we don’t even have clear criteria for success.”

It seems like “value loading is very hard/costly” has to imply that the proposal in this comment thread is going to be very hard/costly, e.g. because one of Wei Dai’s objections to it proves fatal. But it seems like arguments of the form “human values are complex and hard to formalize” or “humans don’t know what we value” are insufficient to establish this; Wei Dai’s objections in the thread are mostly not about value learning. (sorry if you aren’t arguing “value loading is hard because human values are complex and hard to formalize” and I’m misinterpreting you)


by Paul Christiano 346 days ago | link

As Vadim pointed out, we don’t even know what we mean by “aligned versions” of algos, ATM. So we wouldn’t know if we’re succeeding or failing (until it’s too late and we have a treacherous turn).

Even beyond Jessica’s point (that failure to improve our understanding would constitute an observable failure), I don’t completely buy this.

We are talking about AI safety because there are reasons to think that AI systems will cause a historically unprecedented kind of problem. If we could design systems for which we had no reason to expect them to cause such problems, then we could rest easy.

I don’t think there is some kind of magical and unassailable reason to be suspicious of powerful AI systems, there are just a bunch of particular reasons to be concerned.

Similarly, there is no magical reason to expect a treacherous turn—this is one of the kinds of unusual failures which we have reason to be concerned about. If we built a system for which we had no reason to be concerned, then we shouldn’t be concerned.


by Vadim Kosoy 346 days ago | link

Paul, I’m not sure I understand what you’re saying here. Can you imagine a system “for which we had no reason to expect it to cause such problems” without an underlying mathematical theory that shows why this system is safe?

The reason AI systems will cause a historically unprecedented kind of problem is that AI systems can outsmart humans and thus create situations that are outside our control, even when we don’t a priori see the precise mechanism by which we will lose control. In order for such a system to be safe, we need to know that it will not attempt anything detrimental to us, and we need to know this as an abstraction, i.e., without knowing in detail what the system will do (because the system is superintelligent, so by definition we cannot guess its actions).

Doesn’t it seem improbable to you that we will have a way of having such knowledge by some other means than the accuracy of mathematical thought?

That is, we can have a situation like “AI running in homomorphic encryption with a quantum-generated key that is somewhere far from the AI’s computer,” where it is reasonable to claim that the AI is safe as long as it stays encrypted (even though there is still some risk from being wrong about cryptographic conjectures, or from the AI exploiting some surprising sort of unknown physics), without using a theory of intelligence at all (beyond the fact that intelligence is a special case of computation). But it seems unlikely that we can have something like this while simultaneously having the AI powerful enough to protect us against other AIs that are malicious.
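The property Vadim is leaning on here can be illustrated with a toy additively homomorphic cryptosystem (Paillier): whoever runs the computation can combine encrypted values without ever holding the decryption key. This is a minimal sketch with deliberately tiny, insecure parameters, purely to show the “computation proceeds without the key” idea; it is not anyone’s proposed design, and the parameter choices are invented for illustration.

```python
# Toy Paillier cryptosystem: additions on plaintexts can be performed by
# anyone holding only ciphertexts, without the decryption key. The primes
# below are far too small for real security; this is an illustration only.
import math
import random

p, q = 10007, 10009                  # tiny demo primes (insecure)
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
g = n + 1                            # standard simplified generator
mu = pow(lam, -1, n)                 # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    """Encrypt message m (0 <= m < n) with fresh randomness r."""
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Recover the plaintext using the secret values lam and mu."""
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

# The "box" can add plaintexts by multiplying ciphertexts mod n^2,
# without ever learning 20, 22, or their sum:
c = (encrypt(20) * encrypt(22)) % n2
assert decrypt(c) == 42
```

The point of the sketch is that safety of the boxed computation rests on a cryptographic assumption rather than on any theory of what the computation is doing, which is exactly why Vadim notes it breaks down once the AI must act powerfully in the world.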


by Paul Christiano 346 days ago | link

Can you imagine a system “for which we had no reason to expect it to cause such problems” without an underlying mathematical theory that shows why this system is safe?

Yes. For example, suppose we built a system whose behavior was only expected to be intelligent to the extent that it imitated intelligent human behavior—for which there is no other reason to believe that it is intelligent. Depending on the human being imitated, such a system could end up seeming unproblematic even without any new theoretical understanding.

We don’t yet see any way to build such a system, much less to do so in a way that could be competitive with the best RL system that could be designed at a given level of technology. But I can certainly imagine it.
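Paul’s imitation example can be caricatured in a few lines: a policy that only ever replays the action of the most similar recorded human demonstration has no channel for competence (or treachery) beyond what the demonstrations contain. The state encoding, demonstration data, and nearest-neighbor rule below are all invented for illustration; they stand in for whatever learning method actually produces the imitator.

```python
# Toy purely-imitative policy: every action is copied from the closest
# recorded human (state, action) pair, so the system's behavior is only
# as intelligent as its demonstrations. All data here is hypothetical.

def imitation_policy(demos, state):
    """Return the action taken in the most similar human demonstration.

    demos: list of (state_vector, action) pairs recorded from a human.
    state: the current state vector.
    """
    def sq_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    _, best_action = min(demos, key=lambda d: sq_distance(d[0], state))
    return best_action

# Hypothetical demonstrations: the human brakes when an obstacle is near.
demos = [((0.9, 0.1), "brake"), ((0.1, 0.9), "accelerate")]
assert imitation_policy(demos, (0.8, 0.2)) == "brake"
```

Vadim’s reply below turns on exactly this limitation: such a system is at most as capable as the human it copies, so it is not the superintelligent case his argument is about.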

(Obviously I think there is a much larger class of systems that might be non-problematic, though it may depend on what we mean by “underlying mathematical theory.”)

AI systems can outsmart humans and thus create situations that are outside our control, even when we don’t a priori see the precise mechanism by which we will lose control

This doesn’t seem sufficient for trouble. Trouble only occurs when those systems are effectively optimizing for some inhuman goals, including e.g. acquiring and protecting resources.

That is a very special thing for a system to do, above and beyond being able to accomplish tasks that apparently require intelligence. Currently we don’t have any way to accomplish the goals of AI that don’t risk this failure mode, but it’s not obvious that it is necessary.


by Vadim Kosoy 346 days ago | David Krueger likes this | link

Can you imagine a system “for which we had no reason to expect it to cause such problems” without an underlying mathematical theory that shows why this system is safe?

…suppose we built a system whose behavior was only expected to be intelligent to the extent that it imitated intelligent human behavior—for which there is no other reason to believe that it is intelligent.

This doesn’t seem to be a valid example: your system is not superintelligent, it is “merely” human. That is, I can imagine solving AI risk by building whole brain emulations with enormous speed-up and using them to acquire absolute power. However:

  • To the extent this relies on “classical” brain emulation methods, I think this is not what is usually meant by “solving AI alignment.”

  • To the extent this relies on heuristic learning algorithms, I would be worried your algorithm does something subtly wrong in a way that distorts values, although heuristic learning would also invalidate the condition that “there is no other reason to believe that it is intelligent.” (in particular it raises additional concerns such as attacks by malicious superintelligences across the multiverse)

  • As an aside, there is a high-risk zone here where someone untrustworthy can gain this technology and use it to unwittingly create unfriendly AI.

AI systems can outsmart humans and thus create situations that are outside our control, even when we don’t a priori see the precise mechanism by which we will lose control

This doesn’t seem sufficient for trouble. Trouble only occurs when those systems are effectively optimizing for some inhuman goals, including e.g. acquiring and protecting resources.

Well, any AI is effectively optimizing for some goal by definition. How do you know this goal is “human”? In particular, if your AI is supposed to defend us from other AIs, it is very much in the business of acquiring and protecting resources.


by David Krueger 345 days ago | link

I think the core of our differences is that I see minimally constrained, opaque, utility-maximizing agents with good models of the world and access to rich interfaces (sensors and actuators) as extremely likely to be substantially more powerful than what we will be able to build if we start degrading any of these properties.

These properties also seem sufficient for a treacherous turn (in an unaligned AI).


by Paul Christiano 339 days ago | link

I see minimally constrained, opaque, utility-maximizing agents with good models of the world and access to rich interfaces (sensors and actuators) as extremely likely to be substantially more powerful than what we will be able to build if we start degrading any of these properties.

The only point on which there is plausible disagreement is “utility-maximizing agents.” On a narrow reading of “utility-maximizing agents,” it is not clear why that property would be important for achieving more powerful performance.

On a broad reading of “utility-maximizing agents” I agree that powerful systems are utility-maximizing. But if we take a broad reading of this property, I don’t agree with the claim that we will be unable to reliably tell that such agents aren’t dangerous without theoretical progress.

In particular, there is an argument of the form “the prospect of a treacherous turn makes any informal analysis unreliable.” I agree that the prospect of a treacherous turn makes some kinds of informal analysis unreliable. But I think it is completely wrong that it makes all informal analysis unreliable; appropriate informal analysis can be sufficient to rule out the prospect of a treacherous turn. (Most likely an analysis that keeps track of what is being optimized, and rules out the prospect that an indicator was competently optimized to manipulate our understanding of the current situation.)



