Intelligent Agent Foundations Forum
by Alex Appel 6 days ago | Abram Demski likes this | link | parent | on: Distributed Cooperation

If you drop the Pareto-improvement condition from the cell rank, and just have “everyone sorts things by their own utility”, then you won’t necessarily get a Pareto-optimal outcome (within the set of cell center-points), but you will at least get a point where there are no strict Pareto improvements (no points that leave everyone strictly better off).

The difference between the two is… let’s say we’ve got a 2-player, 2-move game whose outcomes, in utility-space, make some sort of quadrilateral. If the top and right edges join at 90 degrees, the Pareto frontier would be the corner point, and the set with “no strict Pareto improvements” would be the top and right edges.

If that corner is obtuse, then both “Pareto frontier” and “no strict Pareto improvements” agree that both edges are within the set, and if the corner is acute, then both agree that only the corner is within the set. It actually isn’t much of a difference; it only manifests when the utilities for a player are exactly equal, and it is easily changed by a little bit of noise.
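
To make the distinction concrete, here’s a quick sketch with a made-up point set (the exact numbers are just for illustration); at a right-angle corner the two sets differ, and a tiny bit of noise makes them coincide again:

```python
# Minimal sketch (hypothetical point set): Pareto-optimal points vs. points with
# no *strict* Pareto improvement, for a 2-player game in utility-space.

def pareto_optimal(points):
    """No other point is >= in both coordinates and > in at least one."""
    return [p for p in points
            if not any(q != p and q[0] >= p[0] and q[1] >= p[1]
                       and (q[0] > p[0] or q[1] > p[1]) for q in points)]

def no_strict_improvement(points):
    """No other point is strictly better for *every* player."""
    return [p for p in points
            if not any(q[0] > p[0] and q[1] > p[1] for q in points)]

# A right-angle corner: (1, 1) dominates the edge points weakly, but not strictly.
corner = [(1.0, 1.0), (1.0, 0.5), (0.5, 1.0), (0.0, 0.0)]
print(pareto_optimal(corner))         # [(1.0, 1.0)]
print(no_strict_improvement(corner))  # [(1.0, 1.0), (1.0, 0.5), (0.5, 1.0)]

# A little noise breaks the exact ties and the two sets coincide again.
noisy = [(1.0, 1.0), (1.001, 0.5), (0.5, 1.001), (0.0, 0.0)]
print(pareto_optimal(noisy) == no_strict_improvement(noisy))  # True
```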

The utility-approximation issue you pointed out seems to be pointing towards the impossibility of guaranteeing limiting to a point on the Pareto frontier (when you make the cell size smaller and smaller), precisely because of that “this set is unstable under arbitrarily small noise” issue.

But the “set of all points that have no strict Pareto improvements by more than \(\delta\) for all players”, i.e., the \(\delta\)-fuzzed version of the “set of points with no strict Pareto improvement”, does seem to be robust against a little bit of noise, and it doesn’t require the Pareto-improvement condition on everyone’s ranking of cells.
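
To be explicit about which set I mean, here’s one way to write the \(\delta\)-fuzzed set down (with \(S\) the set of candidate points, e.g. the cell center-points, and \(u_i\) player \(i\)’s utility):

\[
\mathrm{NSI}_\delta \;=\; \left\{\, x \in S \;:\; \text{there is no } y \in S \text{ with } u_i(y) > u_i(x) + \delta \text{ for all players } i \,\right\}.
\]

Taking \(\delta = 0\) recovers the exact “no strict Pareto improvements” set from above.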

So I’m thinking that if that’s all we can attain (because of the complication you pointed out), then it lets us drop that inelegant Pareto-improvement condition.

I’ll work on the proof that, for sufficiently small cell size \(\epsilon\), you can get an outcome within \(\delta\) of the set where no strict Pareto improvements are available.

Nice job spotting that flaw.


Intermediate update:

The handwavy argument about how you’d get propositional inconsistency in the limit of imposing the constraint “the string cannot contain \(a\wedge b\wedge c\wedge\dots\to\neg\phi\) together with \(a\), \(b\), …, and \(\phi\)” is less clear than I thought. The problem is that, while the prior may learn that the constraint applies as it updates on more sentences, that particular constraint can get you into situations where adding either \(\phi\) or \(\neg\phi\) leads to a violation of the constraint.

So, running the prior far enough forward leads to the probability distribution being nearly certain that, while that particular constraint applied in the past, it will stop applying at some point in the future by vetoing both possible extensions of a string of sentences, and then less-constrained conditions will apply from that point forward.

On one hand, if you don’t have the computational resources to enforce full propositional consistency, it’s expected that most of the worlds you generate will be propositionally inconsistent, and midway through generating them you’ll realize that some of them are indeed propositionally inconsistent.

On the other hand, we want to be able to believe that constraints capable of painting themselves into a corner will apply to reality forevermore.

I’ll think about this a bit more. One possible line of attack is to accept that \(\mathbb{P}(\phi)\) and \(\mathbb{P}(\neg\phi)\) don’t add up to one (because the sentence-generating process may just stop cold before either of the two shows up), and to renormalize them so that they do. But I’d have to check whether it’s still possible to \(\varepsilon\)-approximate the distribution if we introduce this renormalization, and to be honest, I wouldn’t be surprised if there were a more elegant way around this.

EDIT: Yes, it’s still possible to \(\varepsilon\)-approximate the distribution in known time if you have \(\mathbb{P}(\phi)\) refer to \(\frac{\text{probability of encountering }\phi\text{ first}}{1-\text{probability of halting first}}\), although the bounds are really loose. This is because, if most of the execution paths involve halting before the sentence is sampled, an \(\varepsilon\)-error in the probability of sampling \(\phi\) first will get blown up by the small denominator.
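
For intuition, here’s a minimal Monte Carlo sketch of that renormalized quantity, using a made-up stand-in process (this is not the post’s known-time approximation scheme); the small denominator that blows up the error shows up as \(1-\)P(halt first) in the last step:

```python
import random

# Toy stand-in process: on each step it emits phi, emits not-phi, halts, or
# emits some other sentence and keeps going. The probabilities are made up.
def run_process(p_phi=0.01, p_not_phi=0.01, p_halt=0.90, rng=random):
    """Return which of 'phi', 'not_phi', or 'halt' happens first."""
    while True:
        u = rng.random()
        if u < p_phi:
            return "phi"
        if u < p_phi + p_not_phi:
            return "not_phi"
        if u < p_phi + p_not_phi + p_halt:
            return "halt"
        # otherwise: some other sentence was emitted, keep sampling

def estimate_renormalized_p(n_samples=100_000):
    outcomes = [run_process() for _ in range(n_samples)]
    p_phi_first = outcomes.count("phi") / n_samples
    p_halt_first = outcomes.count("halt") / n_samples
    # When p_halt_first is close to 1, the denominator is tiny, so a fixed
    # absolute error in p_phi_first becomes a large error in the ratio.
    return p_phi_first / (1 - p_halt_first)

print(estimate_renormalized_p())  # near 0.5 for these symmetric toy parameters
```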

Will type up the proof later, but it basically proceeds by looking at the probability mass associated with “sample the trivial constraint that accepts everything, and sample it again on each successive round”, because this slice of probability mass has a near-guarantee of hitting \(\phi\), and then showing that even this tiny slice has substantially more probability mass than the cumulative probability of ever sampling a really rare sentence, or of never hitting any of \(\phi\), \(\neg\phi\), or the end of the string.


A summary that might be informative to other people: Where does the \(\omega(t^{2/3})\) requirement on the growth rate of the “rationality parameter” \(\beta\) come from?

Well, the expected loss of the agent comes from two sources: making a suboptimal choice on its own, and incurring a loss from consulting a not-fully-rational advisor. The policy of the agent is basically “defer to the advisor when the expected all-time loss of acting (relative to the optimal move by an agent who knew the true environment) is too high”. “Too high”, in this case, cashes out as “higher than \(\beta(t)^{-1}t^{-1/x}\)”, where \(t\) is the time-discount parameter and \(\beta\) is the level-of-rationality parameter. Note that as the operator gets more rational, the agent gets less reluctant about deferring. Also note that \(t\) is reversed from what you might think: high values of \(t\) mean that the agent has a very distant planning horizon, low values mean the agent is more present-oriented.
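
As a toy paraphrase (this is just my reading of the rule, not the paper’s actual algorithm, and the numbers are made up), the deferral rule looks something like:

```python
# Toy paraphrase of the deferral rule: defer exactly when the estimated all-time
# loss of acting on your own exceeds the threshold beta(t)^-1 * t^(-1/x).

def should_defer(expected_alltime_loss: float, t: float, beta, x: float = 3.0) -> bool:
    threshold = 1.0 / (beta(t) * t ** (1.0 / x))
    return expected_alltime_loss > threshold

# A more rational operator (faster-growing beta) lowers the threshold,
# so the agent defers more readily.
print(should_defer(0.001, t=1e6, beta=lambda t: t ** 0.7))   # True
print(should_defer(0.001, t=1e6, beta=lambda t: t ** 0.1))   # False
```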

On most rounds, the agent acts on its own, so the expected all-time loss on a single round from taking suboptimal choices is on the order of \(\beta(t)^{-1}t^{-1/x}\), and we’re summing over about \(t\) rounds (technically it’s exponential discounting, but that’s similar enough). So the loss from acting on its own ends up being about \(\beta(t)^{-1}t^{(x-1)/x}\).

On the other hand, delegation will happen on at most ~\(t^{2/x}\) rounds, with a loss of about \(\beta(t)^{-1}\) each, so the loss from delegation ends up being around \(\beta(t)^{-1}t^{2/x}\).

Setting these two losses equal to each other (equivalently, minimizing the larger exponent on \(t\) when they are smooshed together) gets you \(x=3\). And then \(\beta(t)\) must grow asymptotically faster than \(t^{2/3}\) for the loss to shrink to 0. So that’s basically where the 2/3 comes from: it comes from setting the delegation threshold to equalize the long-term losses from the AI acting on its own and from the human picking bad choices, as the time horizon \(t\) goes to infinity.
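
A small sanity check of that exponent-balancing, done symbolically (this is just the arithmetic above, nothing from the paper itself):

```python
from sympy import symbols, solve, Rational, limit, oo

x, t = symbols("x t", positive=True)

# Loss from acting on its own: ~ beta(t)^-1 * t^((x-1)/x)
# Loss from delegating:        ~ beta(t)^-1 * t^(2/x)
# Balancing the exponents (x - 1)/x = 2/x gives x = 3.
print(solve((x - 1) / x - 2 / x, x))  # [3]

# With x = 3 both loss terms are ~ beta(t)^-1 * t^(2/3), so any beta(t)
# growing faster than t^(2/3) sends the loss to 0, e.g. beta(t) = t^(7/10):
beta = t ** Rational(7, 10)
print(limit(t ** Rational(2, 3) / beta, t, oo))  # 0
```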


I don’t believe that \(x_{:n}^{!k}\) was defined anywhere, but we “use the definition” in the proof of Lemma 1.

As far as I can tell, it’s a set of (j,y) pairs, where j is the index of a hypothesis, and y is an infinite history string, rather like the set \(h^{!k}\).

How do the definitions of \(h^{!k}\) and \(x^{!k}_{:n}\) differ?


by Vadim Kosoy 87 days ago | link

Hi Alex!

The definition of \(h^{!k}\) makes sense for any \(h\), that is, the superscript \(!k\) in this context is a mapping from finite histories to sets of pairs as you said. In the line in question we just apply this mapping to \(x_{:n}\) where \(x\) is a bound variable coming from the expected value.

I hope this helps?


by Alex Appel 150 days ago | Abram Demski likes this | link | parent | on: Predictable Exploration

Hm, I got the same result from a different direction.

(probably very confused/not-even-wrong thoughts ahead)

It’s possible to view a policy of the form “I’ll compute X and respond based on what X outputs” as… tying your output to X, in a sense. Logical link formation, if you will.

And policies of the form “I’ll compute X and respond in a way that makes that output of X impossible/improbable” (can’t always do this) correspond to logical link cutting.

And with this, we see what the chicken rule in MUDT/exploration in LIDT is doing. It’s systematically cutting all the logical links it can, and going “well, if the statement remains correlated with me despite me trying my best to shake off anything that predicts me too well, I guess I ‘cause’ it.”

But some potentially-useful links were cut by this process, such as “having short abstract reasoning available that lets others predict what you will do” (a partner in a prisoner’s dilemma, the troll in troll bridge, etc.).

At the same time, some links should be cut by a policy that diagonalizes against predictions/calls upon an unpredictable process (anything that can be used to predict your behavior in matching pennies, evading Death when Death can’t crack your random number generator, etc…)

So I wound up with “predictable policy selection that forms links to stuff that would be useful to correlate with yourself, and cuts links to stuff that would be detrimental to have correlated with yourself”.

Predictably choosing an easy-to-predict policy is easy-to-predict, predictably choosing a hard-to-predict policy is hard-to-predict.

This runs directly into problem 1 of “how do you make sure you have good counterfactuals of what would happen if you had a certain pattern of logical links, if you aren’t acting unpredictably”, and maybe some other problems as well, but it feels philosophically appealing.


by Abram Demski 150 days ago | link

So I wound up with “predictable policy selection that forms links to stuff that would be useful to correlate with yourself, and cuts links to stuff that would be detrimental to have correlated with yourself”.


I’m reading this as “You want to make decisions as early as you can, because when you decide one of the things you can do is decide to put the decision off for later; but when you make a decision later, you can’t decide to put it earlier.”

And “logical time” here determines whether others can see your move when they decide to make theirs. You place yourself upstream of more things if you think less before deciding.

This runs directly into problem 1 of “how do you make sure you have good counterfactuals of what would happen if you had a certain pattern of logical links, if you aren’t acting unpredictably”, and maybe some other problems as well, but it feels philosophically appealing.

Here’s where I’m saying “just use the chicken rule again, in this stepped-back reasoning”. It likely re-introduces versions of the same problems at the higher level, but perhaps iterating this process as many times as we can afford is in some sense the best we can do.


by Abram Demski 149 days ago | link

Thinking about this more, I think there’s an important disanalogy between trying to make policy decisions with earlier market states vs smaller proof-searches.

In Agent Simulates Predictor, we can use an earlier market state to decide our policy, because the earlier market state can trust the predictor to make the right predictions, even if the predictor is using a more powerful logic (since logical inductors can learn to boundedly trust more powerful logics).

However, with proof-based DTs, no analogous move is possible.

Consider a version of Agent Simulates Predictor in which Omega searches for a proof that you one-box in PA+Con(PA); if one is found, Omega fills the $1M box. Otherwise, not. Omega has \(T_1\) time to think. The agent has \(T_2\) time to think, with \(T_2 \gg T_1\). The agent reasons in PA.

If the agent refused to use all its time, and only ran for \(T_0 \ll T_1\) time, but still had enough time to find interesting proofs, then it could reason as follows: “If I one-box, then there is a short proof that I one-box, which Omega can find. So I get $1M.” It may not know whether PA+Con(PA) is sound, but that doesn’t matter; the agent just has to ensure that there is a proof which Omega will find. It wouldn’t find any proofs leading to higher utility than this, so it would one-box and get $1M.

Unfortunately, I don’t see any way to harness the shorter proof-search to choose a policy which would get the $1M in this case but choose to think longer in other cases where that’s beneficial.

We might want the agent to reason: “If I stop and one-box right now, Omega will be able to prove that I one-box, and I’ll get $1M. If I wait longer, Omega won’t be able to prove what I do, so I’ll at most be able to get $100. So, I’ll stop now and one-box.” However, this reasoning would have to take place at a proof-length in which several things hold at once:

  • The agent can prove that it’s still “early” enough that its action would be provable to Omega if it acted now.
  • It’s “late” enough that the agent can see that Omega’s predictions are sound (i.e., it can check that Omega doesn’t reach false results in the limited time it has). This allows the agent to see that it’ll never get money from both boxes.

It seems very unlikely that there is a proof length where these can both be true, due to bounded Löb.

For logical induction, on the other hand, there’s quite likely to be a window with analogous properties.


What does the Law of Logical Causality say about CON(PA) in Sam’s probabilistic version of the troll bridge?

My intuition is that in that case, the agent would think CON(PA) is causally downstream of itself, because the distributions of actions conditional on CON(PA) and \(\neg\)CON(PA) are different.

Can we come up with any example where the agent thinking it can control CON(PA) (or any other thing that enables accurate predictions of its actions) actually gets it into trouble?


by Abram Demski 155 days ago | link

I agree; my intuition is that LLC asserts that the troll, and even CON(PA), is downstream. And it seems to get into trouble because it treats it as downstream.

I also suspect that Troll Bridge will end up formally outside the realm where LLC can be justified by the desire to make ratifiability imply CDT=EDT. (I’m working on another post which will go into that more.)


It looks legitimate, actually.

Remember, \(f\) is set-valued, so if \(r_{i-1}=0\), \(f(r)=[0,1]\); in all other cases, \(f(r)=\frac{r_{i-1}}{2}\). \(f\) is a nonempty, convex, set-valued function, so all that’s left is to show the closed-graph property. If the limiting value of \(r_{i-1}\) is something other than 0, the closed-graph property holds by continuity, and if the limiting value of \(r_{i-1}\) is 0, it holds because \(0\in[0,1]\).
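
Spelling out the closed-graph check (a sketch, using the version of \(f\) I described above, and assuming the \(r_i\) live in \([0,1]\)): suppose \(r^{(n)}\to r\), \(s^{(n)}\in f(r^{(n)})\), and \(s^{(n)}\to s\). Then

\[
\begin{cases}
r_{i-1} > 0: & \text{eventually } r^{(n)}_{i-1} > 0 \text{, so } s^{(n)} = \frac{r^{(n)}_{i-1}}{2} \to \frac{r_{i-1}}{2} \in f(r),\\[1ex]
r_{i-1} = 0: & s \in [0,1] = f(r) \text{ automatically, since every } s^{(n)} \in [0,1].
\end{cases}
\]

Either way \(s \in f(r)\), so the graph is closed.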


by Vadim Kosoy 227 days ago | link

Hi Alex!

I agree that the multimap you described is Kakutani and gives the correct fair set, but in the OP it says that if \(r_{i-1}=0\) then \(f(r)=r_i\), not \(f(r)=[0,1]\). Maybe I am missing something about the notation?


Quick question: It is possible to drive the probability of x down arbitrarily far by finding a bunch of proofs of the form “x implies y” where y is a theorem. But the exact same argument applies to not x.

If the theorem-prover always finds a proof of the form “not x implies y” immediately afterwards, the probability wouldn’t converge, but it would fluctuate within a certain range, which looks good enough.

What, if any, conditions need to be imposed on the theorem prover to confine the probabilities assigned to an unprovable statement to a range that is narrower than (0, 1)?


I don’t know; that line of reasoning concluding that U()=10 seems like a pretty clear consequence of PA+Sound(PA)+A()=a, and the lack of a counterfactual for “X is false” doesn’t violate any of my intuitions. It’s just reasoning backwards from “the agent takes action a” to the mathematical state of affairs that must have produced it (there is a short proof of X).

On second thought, the thing that broke the original trolljecture was reasoning backwards from “I take action a” to the mathematical state of affairs that produced it. Making inferences about the mathematical state of affairs in your counterfactuals using knowledge of your own decision procedure does seem to be a failure mode at first glance.

Maybe use the counterfactual of “find-and-replace all instances of X’s source code in the universe program U with action a, and evaluate”? But that wouldn’t work for different algorithms that depend on checking the same math facts. There needs to be some way to go from “X takes action A” to “closely related algorithm Y takes action B”. But that’s just inferring mathematical statements from the combination of actions and knowing X’s decision rule.
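
A toy illustration of that find-and-replace counterfactual, and of why it misses logically-related-but-textually-different algorithms (the universe string, the agent source, and the helper function are all made-up stand-ins):

```python
# Toy "find-and-replace" counterfactual: the universe is a program text that
# embeds the agent's source, and we evaluate it with that source swapped
# for a constant action.

AGENT_SRC = "(5 if True else 10)"        # stand-in for X's decision computation
UNIVERSE_SRC = f"100 - {AGENT_SRC}"      # U embeds X's source verbatim

def counterfactual_utility(universe_src: str, agent_src: str, action: int):
    """Replace every literal occurrence of the agent's source with the action, then evaluate."""
    return eval(universe_src.replace(agent_src, str(action)))

# The limitation noted above: a *different* program Y that checks the same math
# fact is textually different from X, so the substitution leaves Y untouched.
OTHER_AGENT_SRC = "(5 if 1 + 1 == 2 else 10)"   # logically related, textually different

print(counterfactual_utility(UNIVERSE_SRC, AGENT_SRC, 10))                # 90: X's source replaced
print(counterfactual_utility(f"100 - {OTHER_AGENT_SRC}", AGENT_SRC, 10))  # 95: Y unaffected
```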

I’ll stick with the trolljecture as the current best candidate for “objective” counterfactuals, because reasoning backwards from actions and decision rules a short way into math facts seems needed to handle “logically related” algorithms, and this counterexample looks intuitively correct.


Interestingly enough, the approximate coherence condition doesn’t hold when there is a short proof of φ from ψ, but it does hold when there is a sufficiently short proof of ψ from φ. (Very roughly, the valuation of (φ∧¬ψ) is negligible, while the valuation of (φ∧ψ) is approximately equal to the valuation of φ.) So coherence only works one way.

On a mostly unrelated note, this sort of reasoning doesn’t seem to link well with the strategy of “add important mathematical facts that you discover into your pool of starting axioms to speed up the process of deriving things.” While a set of axioms, and a set of axioms plus a bunch of utility tool theorems (Gödel’s second incompleteness theorem, Löb’s theorem, the fundamental theorem of calculus, etc.), may “span” the same space of theorems, the second one is much easier to quickly derive new interesting statements from, and seems to be how humans think about math. The inconsistency involved in getting 10 utility on the 5-and-10 problem is much easier to spot if the second incompleteness theorem is already in your pool of sentences to apply to a new problem.

As with the Russell’s Paradox example, in practice, counterfactuals seem to be mind-dependent, and vary depending on which of the many different lines of reasoning a mind heads down. If you define a subjective distance relative to a particular search algorithm, this objective valuation would just use the shortest possible subjective distance between the statement and a contradiction. The valuation using the human distance function of a statement in naive set theory would be high, because we were slow to notice the contradiction. So that aspect of counterfactual reasoning seems easy to capture.

Has anyone checked what this does on ASP problems?


by Patrick LaVictoire 916 days ago | link

ASP has only been formalized in ways that don’t translate to modal universes, so it’s only the analogous extensions that could apply there.

In the formulation here, it’s not enough that the agent one-box; it must do so provably in less than \(N\) steps. I think that both the valuations \(\nu_{\textsf{ZFC+Sound(ZFC)}+A()=1}(U()=1M)\) and \(\nu_{\textsf{ZFC+Sound(ZFC)}+A()=1}(U()=0)\) will probably be about \(1/2\), but I’m not sure.






