Intelligent Agent Foundations Forum
by Abram Demski 250 days ago | link | parent | on: Musings on Exploration

I’m not convinced exploration doesn’t tile. Exploration steps would not be self-modified away if they’re actually useful/important, and if the agent can recognize this.

In the case of the chicken rule, it’s unclear how to make a decision at all without it. Plus, the exploration never needs to occur – though, subjectively, the agent can’t know that, so that doesn’t really affect the question of tiling. But the agent could see that removing the chicken step removes its ability to reason about the consequences of alternate actions.

I think it’s somewhat plausible that you can get something which needs the chicken rule as a foundation, but which can decide not to use it in cases like Troll Bridge, because it is deciding whether to use the chicken rule for sensible reasons (where “sensible” includes the chicken rule itself).

Deciding to use the chicken rule is a transparent-Newcomb-like problem: your behavior in the case that you do see a proof of your own action affects your ability to reason in the case that you don’t.

The same things seem to apply to exploration.

Given our current level of understanding, the chicken rule and true exploration seem closely analogous. However, it’s quite plausible that this will stop being the case with a better understanding. In particular, Sam recently pointed out to me that Löb’s theorem doesn’t go through for \(\Box \phi := \mathbb{P}(\phi) \geq 0.95\). We have a decent picture of what tiling looks like in pure logical settings, but that picture is shaped heavily by Löb’s theorem. So, tiling considerations for exploration could look very different from those for the chicken rule.
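To illustrate one place where the usual proof visibly breaks (this is my gloss, not necessarily the failure Sam had in mind): the distribution step \(\Box(\phi \to \psi) \to (\Box \phi \to \Box \psi)\) fails for this modality, since thresholded probabilities only compose lossily:

\[
\mathbb{P}(\psi) \;\geq\; \mathbb{P}\big(\phi \wedge (\phi \to \psi)\big) \;\geq\; 1 - \mathbb{P}(\neg \phi) - \mathbb{P}(\neg(\phi \to \psi)) \;\geq\; 1 - 0.05 - 0.05 \;=\; 0.9,
\]

so from \(\mathbb{P}(\phi) \geq 0.95\) and \(\mathbb{P}(\phi \to \psi) \geq 0.95\) we only get \(\mathbb{P}(\psi) \geq 0.9\), not \(\geq 0.95\), and each such step degrades the threshold further.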

reply

by Abram Demski 266 days ago | link | parent | on: Distributed Cooperation

Cool! I’m happy to see this written up finally. It’s been a big source of intuitions for me, so it’s good to see that the proof checks out.

A possible next step to all this is to try to specify proof-based DT agents which could play this game (or something similar) based on Löbian handshakes. (In fact, part of the original motivation was to try to bring the cooperative-oracle model closer to the Löb-based cooperation you can get in prisoner’s dilemma with visible source code.)

It’s unfortunate that you had to add the Pareto-improvement condition to the cell rank. That part seems especially unlikely to drop out of a more general decision theory.

I think I see another serious complication:

Yes, not all points in \(C_{\vec{i}}\) attain the same utility, but for a sufficiently small \(\epsilon\), the cell is really small, and for any player, the utility over the cell is well-approximated by the utility attained at the middle point in the cell.

So, for any desired utility-approximation accuracy \(\delta\), you can choose \(\epsilon\) sufficiently small to achieve it. But a Pareto optimum of the set of midpoints can be arbitrarily worse for some player than any Pareto optimum of the full game; i.e., taking the midpoints can hide arbitrarily large Pareto improvements.

For example, suppose \(\delta = 0.001\). A Pareto optimum of the midpoints might give the utility vector (2, 2, 3) for the three players. There could be another midpoint (100, 100, 2.9999999), very near a point (100, 100, 3) contained in the true game.

So, it seems the Pareto optimum of the game on midpoints which is found by the process in the post can be arbitrarily sub-optimal for all but one player, with no guarantee that this gets better as \(\epsilon\) shrinks.
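A tiny check of this, using the hypothetical utility vectors from the example above (a sketch, not anything from the post):

```python
def pareto_dominates(a, b):
    # a is at least as good as b for everyone, and strictly better for someone
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_optima(points):
    return [p for p in points if not any(pareto_dominates(q, p) for q in points)]

midpoints = [(2, 2, 3), (100, 100, 2.9999999)]  # utilities at the cell midpoints
hidden_point = (100, 100, 3)                    # a point of the true game near the second midpoint

print(pareto_optima(midpoints))                   # both midpoints survive; (2, 2, 3) is Pareto-optimal among them
print(pareto_dominates(hidden_point, (2, 2, 3)))  # True: the full game hides a huge improvement over (2, 2, 3)
```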

reply

by Alex Appel 266 days ago | Abram Demski likes this | link

If you drop the Pareto-improvement condition from the cell rank, and just have “everyone sorts things by their own utility”, then you won’t necessarily get a Pareto-optimal outcome (within the set of cell center-points), but you will at least get a point where there are no strict Pareto improvements (no points that leave everyone better off).

The difference between the two is… let’s say we’ve got a 2-player, 2-move game whose feasible set in utility-space is some sort of quadrilateral. If the top and right edges join at 90 degrees, the Pareto frontier is just the corner point, while the set of “no strict Pareto improvements” is the whole top edge plus the whole right edge.

If that corner is obtuse, then both “Pareto frontier” and “no strict Pareto improvements” agree that both edges are within the set; and if the corner is acute, then both agree that only the corner is within the set. It actually isn’t much of a difference: it only manifests when some player’s utilities are exactly equal, and it is easily changed by a little bit of noise.
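A small numeric check of the right-angle case (my own example points, not from the post):

```python
def weakly_dominates(a, b):
    # Pareto improvement: at least as good for everyone, strictly better for someone
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def strictly_dominates(a, b):
    # strict Pareto improvement: strictly better for everyone
    return all(x > y for x, y in zip(a, b))

# top edge, right edge, and a 90-degree corner at (1, 1)
points = [(0.0, 1.0), (0.5, 1.0), (1.0, 1.0), (1.0, 0.5), (1.0, 0.0)]

pareto_frontier = [p for p in points if not any(weakly_dominates(q, p) for q in points)]
no_strict_improvement = [p for p in points if not any(strictly_dominates(q, p) for q in points)]

print(pareto_frontier)        # only the corner (1.0, 1.0)
print(no_strict_improvement)  # all five sample points on the top and right edges
```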

The utility-approximation issue you pointed out seems to be pointing towards the impossibility of guaranteeing convergence to a point on the Pareto frontier (when you make the cell size smaller and smaller), precisely because of that “this set is unstable under arbitrarily small noise” issue.

But the “set of all points that have no strict Pareto improvement by more than \(\delta\) for all players”, i.e., the \(\delta\)-fuzzed version of the “set of points with no strict Pareto improvement”, does seem to be robust against a little bit of noise, and doesn’t require the Pareto-improvement condition on everyone’s ranking of cells.

So I’m thinking that if that’s all we can attain (because of the complication you pointed out), then it lets us drop that inelegant Pareto-improvement condition.

I’ll work on the proof that, for sufficiently small cell size \(\epsilon\), you can get an outcome within \(\delta\) of the set of “no strict Pareto improvements available”.

Nice job spotting that flaw.

reply


This uses logical inductors of distinctly different strengths. I wonder if there’s some kind of convexity result for logical inductors which can see each other? Suppose traders in \(\mathbb{P}_n\) have access to \(\mathbb{P}'_n\) and vice versa. Or perhaps just assume that the markets cannot be arbitrarily exploited by such traders. Then, are linear combinations also logical inductors?

reply

by Vadim Kosoy 331 days ago | link

This is somewhat related to what I wrote about here. If you consider only what I call convex gamblers/traders and fix some weighting (“prior”) over the gamblers, then there is a natural convex set of dominant forecasters (for each history, it is the set of minima of some convex function on \(\Delta\mathcal{O}^\omega\)).

reply


The differences between this and UDT2:

  1. This is something we can define precisely, whereas UDT2 isn’t.
  2. Rather than being totally updateless, this is just mostly updateless, with the parameter \(f\) determining how updateless it is.

I don’t think there’s a problem this gets right which we’d expect UDT2 to get wrong.

If we’re using the version of logical induction where the belief jumps to 100% as soon as something gets proved, then a weighty trader who believes crossing the bridge is good will just get knocked out immediately if the theorem prover starts proving that crossing is bad (which helps that step inside the Löbian proof go through). (I’d be surprised if the analysis turns out much different for the kind of LI which merely rapidly comes to believe things which get proved, but I can see how that distinction might block the proof.) But certainly it would be good to check this more thoroughly.

reply


Looking “at the very beginning” won’t work – the beliefs of the initial state of the logical inductor won’t be good enough to sensibly detect these things and decide what to do about them.

While ignoring the coin is OK as special-case reasoning, I don’t think everything falls nicely into the buckets of “information you want to ignore” vs “information you want to update on”. The more general concept which captures both is to ask “how do I want to react to this information, in terms of my action?” – which is of course the idea of policy selection.
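As a toy illustration of the distinction (my own sketch, using counterfactual mugging as the example and made-up payoff numbers): scoring whole reactions-to-the-coin by their prior expected value recommends paying, whereas updating on the coin first does not.

```python
# Toy counterfactual mugging: fair coin; on heads, Omega asks you for $100;
# on tails, Omega pays you $10,000 iff it predicts you would have paid on heads.
# (Payoff numbers are made up for illustration.)

def prior_expected_value(pays_on_heads: bool) -> float:
    heads_payoff = -100 if pays_on_heads else 0
    tails_payoff = 10_000 if pays_on_heads else 0  # Omega rewards the predicted policy
    return 0.5 * heads_payoff + 0.5 * tails_payoff

# Policy selection: evaluate each way of reacting to the coin from the prior.
print(prior_expected_value(True))   # "pay":    4950.0
print(prior_expected_value(False))  # "refuse":    0.0

# An agent that first updates on seeing heads just compares -100 to 0 and refuses,
# even though the "pay" policy is better from the prior point of view.
```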

reply


At present, I think the main problem of logical updatelessness is something like: how can we make a principled trade-off between thinking longer to make a better decision, vs thinking for less time so that we exert more logical control over the environment?

For example, in Agent Simulates Predictor, an agent who thinks for a short amount of time and then decides on a policy for how to respond to any conclusions which it comes to after thinking longer can decide “If I think longer, and see a proof that the predictor thinks I two-box, I can invalidate that proof by one-boxing. Adopting this policy makes the predictor less likely to find such a proof.” (I’m speculating; I haven’t actually written up a thing which does this, yet, but I think it would work.) An agent who thinks longer before making a decision can’t see this possibility because it has already proved that the predictor predicts two-boxing, so from the perspective of having thought longer, there doesn’t appear to be a way to invalidate the prediction – being predicted to two-box is just a fact, not a thing the agent has control over.

Similarly, in Prisoner’s Dilemma, an agent who hasn’t thought too long can adopt a strategy of first thinking longer and then doing whatever it predicts the other agent to do. This is a pretty good strategy, because it makes it so that the other agent’s best strategy is to cooperate. However, you have to think for long enough to find this particular strategy, but short enough that the hypotheticals which show that the strategy is a good idea aren’t closed off yet.
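A quick payoff check of that claim, using standard Prisoner’s Dilemma numbers (my choice of payoffs, just for illustration): against a “think longer, then copy whatever I predict you to do” policy, the other agent’s best response is to cooperate.

```python
# Standard Prisoner's Dilemma payoffs (me, them); the numbers are the usual illustrative ones.
payoff = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

# My policy: predict the other agent's move and copy it (assume the prediction is accurate).
# The other agent is then effectively choosing between the (C, C) and (D, D) outcomes.
for their_move in ("C", "D"):
    my_move = their_move
    print(their_move, payoff[(my_move, their_move)][1])  # C -> 3, D -> 1, so they prefer to cooperate
```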

So, I think there is less conflict between UDT and bounded reasoning than you are implying. However, it’s far from clear how to negotiate the trade-offs sanely.

(However, in both cases, you still want to spend as long a time thinking as you can afford; it’s just that you want to make the policy decision, about how to use the conclusions of that thinking, as early as they can be made while remaining sensible.)

reply

by Abram Demski 406 days ago | link | parent | on: Predictable Exploration

I think the point I was making here was a bit less clear than I wanted it to be. I was saying that, if you use predictable exploration on actions rather than policies, then you only get to see what happens when you predictably take a certain action. This is good for learning pure equilibria in games, but doesn’t give information which would help the agent reach the right mixed equilibria when randomized actions should be preferred; and indeed, it doesn’t seem like such an agent would reach the right mixed equilibria.
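As a concrete instance of the problem (my own toy example, not from the post): in matching pennies, the only equilibrium is mixed, and predictably-taken pure actions all look equally bad, so the data from predictable action-exploration never points toward randomizing.

```python
# Matching pennies, payoffs to the exploring player (standard numbers).
def payoff(mine: str, theirs: str) -> int:
    return 1 if mine == theirs else -1  # the explorer wins on a match

# If exploration makes my action predictable, the opponent best-responds to it:
for my_action in ("heads", "tails"):
    their_best_response = "tails" if my_action == "heads" else "heads"
    print(my_action, payoff(my_action, their_best_response))  # both lines print -1

# The 50/50 mixed strategy guarantees value 0 against anything the opponent does,
# but nothing in the observations above ever suggests trying it.
mixed_value = 0.5 * payoff("heads", "heads") + 0.5 * payoff("tails", "heads")
print(mixed_value)  # 0.0
```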

I believe the “predictable exploration on policies” approach solves agent-simulates-predictor just fine, along with other problems (including counterfactual mugging) which require “some degree of updatelessness” without requiring the full reflective stability which we want from updatelessness.

reply

by Abram Demski 409 days ago | link | parent | on: Predictable Exploration

Thinking about this more, I think there’s an important disanalogy between trying to make policy decisions with earlier market states vs smaller proof-searches.

In Agent Simulates Predictor, we can use an earlier market state to decide our policy, because the earlier market state can trust the predictor to make the right predictions, even if the predictor is using a more powerful logic (since logical inductors can learn to boundedly trust more powerful logics).

However, with proof-based DTs, no analogous move is possible.

Consider a version of Agent Simulates Predictor in which Omega searches in PA+Con(PA) for a proof that you one-box; if one is found, Omega fills the $1M box. Otherwise, not. Omega has \(T_1\) time to think. The agent has \(T_2\) time to think, \(T_2 \gg T_1\). The agent reasons in PA.

If the agent refused to use all its time, and only ran for time \(T_0 \ll T_1\), but still had enough time to find interesting proofs, then it could reason as follows: “If I one-box, then there is a short proof that I one-box which Omega can find. So I get $1M.” It may not know whether PA+Con(PA) is sound, but that doesn’t matter; the agent just has to ensure that there is a proof which Omega will find. It wouldn’t find any proofs leading to higher utility than this, so it would one-box and get $1M.
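Roughly, in symbols (just a paraphrase of the reasoning above, writing \(\Box^{S}_{k}\,\phi\) for “\(\phi\) has a proof in the system \(S\) of length at most \(k\)”):

\[
A = \text{one-box} \;\Rightarrow\; \Box^{\mathrm{PA}}_{T_0}\,(A = \text{one-box}) \;\Rightarrow\; \Box^{\mathrm{PA+Con(PA)}}_{T_1}\,(A = \text{one-box}) \;\Rightarrow\; \text{Omega fills the box},
\]

where the first implication is the agent’s claim that its own short deliberation can be turned into a short proof of its action, and the second uses \(T_0 \ll T_1\) together with the fact that PA+Con(PA) extends PA.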

Unfortunately, I don’t see any way to harness the shorter proof-search to choose a policy which would get the $1M in this case but choose to think longer in other cases where that’s beneficial.

We might want the agent to reason: “If I stop and one-box right now, Omega will be able to prove that I one-box, and I’ll get $1M. If I wait longer, Omega won’t be able to prove what I do, so I’ll at most be able to get $100. So, I’ll stop now and one-box.” However, this reasoning would have to take place at a proof length at which several things hold at once:

  • The agent can prove that it’s still “early” enough that its action would be provable to Omega if it acted now.
  • It’s “late” enough that the agent can see that Omega’s predictions are sound (i.e., it can check that Omega doesn’t reach false results in the limited time it has). This allows the agent to see that it’ll never get money from both boxes.

It seems very unlikely that there is a proof length where these can both be true, due to bounded Löb.

For logical induction, on the other hand, there’s quite likely to be a window with analogous properties.

reply

by Abram Demski 410 days ago | link | parent | on: Predictable Exploration

So I wound up with “predictable policy selection that forms links to stuff that would be useful to correlate with yourself, and cuts links to stuff that would be detrimental to have correlated with yourself”.

Agreed!

I’m reading this as “You want to make decisions as early as you can, because when you decide early, one of the things you can do is put the decision off until later; but when you make a decision later, you can’t move it earlier.”

And “logical time” here determines whether others can see your move when they decide to make theirs. You place yourself upstream of more things if you think less before deciding.

This runs directly into problem 1 of “how do you make sure you have good counterfactuals of what would happen if you had a certain pattern of logical links, if you aren’t acting unpredictably”, and maybe some other problems as well, but it feels philosophically appealing.

Here’s where I’m saying “just use the chicken rule again, in this stepped-back reasoning”. It likely re-introduces versions of the same problems at the higher level, but perhaps iterating this process as many times as we can afford is in some sense the best we can do.

reply


I agree; my intuition is that LLC asserts that the troll, and even Con(PA), is downstream. And it seems to get into trouble precisely because it treats them as downstream.

I also suspect that Troll Bridge will end up formally outside the realm where LLC can be justified by the desire to make ratifiability imply CDT=EDT. (I’m working on another post which will go into that more.)

reply
