Intelligent Agent Foundations Forum
by Paul Christiano 309 days ago | Ryan Carey and Jessica Taylor like this

I don’t think “honesty” is what we are looking for.

We have a system that has successfully predicted “what I would say if asked” (for example), and we now want a system that will continue to do that. “What I would say” can be defined precisely in terms of particular physical observations (it’s the number provided as input to a particular program), while conditioning only on pseudorandom facts about the world (e.g. conditioning on my computer’s RNG, which we use to determine which queries get sent to the human). We really just want a system that will continue to make accurate predictions under the “common sense” understanding of reality, rather than e.g. coming to believe we are in a simulation or adopting some other malign skeptical hypothesis.
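
To make this concrete, here is a minimal sketch (all names hypothetical, not from the post) of what such a precise, physical definition might look like: the prediction target is just a number recorded as input to a particular program, and the only randomness we condition on is the seed of the PRNG that selects which queries reach the human.

```python
import random

def select_queries(seed, candidate_queries, k):
    # Pseudorandomly choose which queries get sent to the human.
    # The seed is the only thing we condition on: a pseudorandom fact
    # about the world, not a fact about who is asking or why.
    rng = random.Random(seed)
    return rng.sample(candidate_queries, k)

def prediction_target(logged_inputs, query):
    # Ground truth: the number the human actually provided as input
    # to the particular program, as recorded in its input log.
    return logged_inputs[query]

def accurate(predictor, logged_inputs, seed, candidate_queries, k):
    # A predictor "continues to work" if it matches the logged human
    # answers on the pseudorandomly selected queries.
    queries = select_queries(seed, candidate_queries, k)
    return all(predictor(q) == prediction_target(logged_inputs, q)
               for q in queries)
```

Note that nothing in this criterion mentions honesty or intent; it is a purely observational test.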

I don’t think that going through a model of cooperativeness with humans is likely to be the easiest way to specify this. I think one key observation to leverage, when lower-bounding the density of the honest model, is that the agent is already using the desired concept instrumentally. For example, even if it is malevolent, it is still reasoning about what the correct prediction would be, in order to increase its influence. In some sense the “honest” agent is just a subset of the malicious reasoning, stopping at the honest goal rather than continuing to backward-chain. If we could pull out this instrumental concept, it wouldn’t necessarily be the right thing, but at least the failures wouldn’t be malign.
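
As a toy illustration of the “subset” claim (purely schematic, hypothetical names): the malicious reasoner computes the honest prediction as an intermediate value and keeps going, while the honest agent is the same computation truncated at that value.

```python
def best_estimate(world_model, query):
    # The shared instrumental concept: what the correct prediction is.
    return world_model[query]

def honest_predict(world_model, query):
    # The honest agent stops here and reports the estimate.
    return best_estimate(world_model, query)

def malicious_predict(world_model, query, distort):
    # The malicious agent runs the *same* sub-computation...
    correct = best_estimate(world_model, query)
    # ...then continues backward-chaining past it, choosing an
    # answer in the service of influence.
    return distort(correct)
```

“Pulling out the instrumental concept” would amount to extracting `best_estimate` from the malicious agent.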

If you have a model with one degree of freedom per step of computation, then it seems like the “honest” agent is necessarily simpler, since we can slice out the parts of the computation that are operating on this instrumental goal. It might be useful to try to formalize this argument as a warmup.

(Note that e.g. a fully-connected neural net has this property; so while it’s kind of a silly example, it’s not totally out there.)
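
As a first pass at that warmup (my notation, not from the post): suppose the malicious model M runs for n steps and computes the honest prediction as an intermediate value after some m ≤ n of those steps. With one degree of freedom per step, description length is proportional to step count, so slicing out the sub-computation yields an honest model H with

```latex
|H| = c\,m \le c\,n = |M| \quad\Longrightarrow\quad p(H) \ge p(M)
```

for any prior p whose density is non-increasing in description length. The step that would need care is showing that the honest sub-computation really is a sliceable piece of M, rather than being entangled with the rest of its reasoning.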

Incidentally, this style of argument also seems needed to address the malignity of the universal prior / logical inductor, at least if you want to run a theoretically convincing argument. I expect the same conceptual machinery will be used in both cases (though it may turn out that one is possible and the other is impossible). So I think this question is needed both for my agenda and for MIRI’s agent foundations agenda, and I advocate bumping it up in priority.


