Intelligent Agent Foundations Forum
http://agentfoundations.org/

The Doomsday argument in anthropic decision theory (Stuart Armstrong): http://agentfoundations.org/item?id=1655

In Anthropic Decision Theory (ADT), behaviours that resemble the Self Sampling Assumption (SSA) derive from average utilitarian preferences (and from certain specific selfish preferences).

However, SSA implies the doomsday argument, and, until now, I hadn’t found a good way to express the doomsday argument within ADT.

This post will fill that hole by showing that there is a natural doomsday-like behaviour for average utilitarian agents within ADT.
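As a reminder of why SSA generates the doomsday argument: under SSA, an observer with birth rank \(r\) reasons as if they were a random sample from the total population \(N\), so \(P(r \mid N) = 1/N\) for \(r \le N\). For any prior over two candidate totals \(N_{\text{small}} < N_{\text{large}}\) (the two-hypothesis setup is purely illustrative, not taken from the post), Bayes’ rule gives, for any rank \(r \le N_{\text{small}}\),

\[ \frac{P(N_{\text{small}} \mid r)}{P(N_{\text{large}} \mid r)} \;=\; \frac{N_{\text{large}}}{N_{\text{small}}} \cdot \frac{P(N_{\text{small}})}{P(N_{\text{large}})} , \]

so learning one’s (early) birth rank shifts probability toward the smaller total population, i.e. toward earlier doom.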

Comment on Open Problems Regarding Counterfactuals: An Introduction For Beginners (Vadim Kosoy): http://agentfoundations.org/item?id=1658
Comment on Open Problems Regarding Counterfactuals: An Introduction For Beginners (Vadim Kosoy): http://agentfoundations.org/item?id=1657

Delegative Reinforcement Learning with a Merely Sane Advisor (Vadim Kosoy): http://agentfoundations.org/item?id=1656

Previously, we defined a setting called “Delegative Inverse Reinforcement Learning” (DIRL) in which the agent can delegate actions to an “advisor”, and the reward is visible only to the advisor. We proved a sublinear regret bound (converted to the traditional normalization in online learning, the bound is \(O(n^{2/3})\)) for one-shot DIRL (as opposed to standard regret bounds in RL, which are only applicable in the episodic setting). However, this required a rather strong assumption about the advisor: in particular, the advisor had to choose the optimal action with maximal likelihood. Here, we consider “Delegative Reinforcement Learning” (DRL), i.e. a similar setting in which the reward is directly observable by the agent. We also restrict our attention to finite MDP environments (we believe these results can be generalized to a much larger class of environments, but not to arbitrary environments). On the other hand, the assumption about the advisor is much weaker: the advisor is only required to avoid catastrophic actions (i.e. actions that lose value to zeroth order in the interest rate) and to assign some positive probability to a nearly optimal action. As before, we prove a one-shot regret bound (in the traditional normalization, \(O(n^{3/4})\)). Analogously to before, we allow for “corrupt” states in which both the advisor and the reward signal stop being reliable.
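For readers unfamiliar with the normalization being referenced: in the usual online-learning convention, regret after \(n\) steps is the gap between the expected cumulative reward of the best policy and that of the agent, and a sublinear bound means the per-step gap vanishes. Schematically (standard definitions, not the post’s exact construction):

\[ R(n) \;=\; \max_{\pi} \mathbb{E}\!\left[\sum_{t=1}^{n} r_t^{\pi}\right] - \mathbb{E}\!\left[\sum_{t=1}^{n} r_t^{\mathrm{agent}}\right], \qquad R(n) = O(n^{3/4}) \;\Rightarrow\; \frac{R(n)}{n} = O(n^{-1/4}) \to 0 . \]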

Comment on Funding opportunity for AI alignment research (Vladimir Slepnev): http://agentfoundations.org/item?id=1654
Funding opportunity for AI alignment research (Paul Christiano): http://agentfoundations.org/item?id=1653
Comment on Smoking Lesion Steelman (Abram Demski): http://agentfoundations.org/item?id=1652
Comment on Autopoietic systems and difficulty of AGI alignment (Jessica Taylor): http://agentfoundations.org/item?id=1651
Comment on Autopoietic systems and difficulty of AGI alignment (Wei Dai): http://agentfoundations.org/item?id=1650
Comment on Autopoietic systems and difficulty of AGI alignment (Wei Dai): http://agentfoundations.org/item?id=1647
Comment on Delegative Inverse Reinforcement Learning (Vadim Kosoy): http://agentfoundations.org/item?id=1646
Comment on Delegative Inverse Reinforcement Learning (Tom Everitt): http://agentfoundations.org/item?id=1643
Comment on Smoking Lesion Steelman (Abram Demski): http://agentfoundations.org/item?id=1642
Comment on Autopoietic systems and difficulty of AGI alignment (Vladimir Slepnev): http://agentfoundations.org/item?id=1641
Comment on Autopoietic systems and difficulty of AGI alignment (Jessica Taylor): http://agentfoundations.org/item?id=1640
Comment on Autopoietic systems and difficulty of AGI alignment (Wei Dai): http://agentfoundations.org/item?id=1639
Comment on Autopoietic systems and difficulty of AGI alignment (Paul Christiano): http://agentfoundations.org/item?id=1638
Comment on Autopoietic systems and difficulty of AGI alignment (Paul Christiano): http://agentfoundations.org/item?id=1637
Comment on Autopoietic systems and difficulty of AGI alignment (Wei Dai): http://agentfoundations.org/item?id=1636
Comment on Autopoietic systems and difficulty of AGI alignment (Jessica Taylor): http://agentfoundations.org/item?id=1635
Comment on Autopoietic systems and difficulty of AGI alignment (Jessica Taylor): http://agentfoundations.org/item?id=1634
Comment on Autopoietic systems and difficulty of AGI alignment (Paul Christiano): http://agentfoundations.org/item?id=1633
Comment on Autopoietic systems and difficulty of AGI alignment (Wei Dai): http://agentfoundations.org/item?id=1631
Comment on Smoking Lesion Steelman (Abram Demski): http://agentfoundations.org/item?id=1630

Autopoietic systems and difficulty of AGI alignment (Jessica Taylor): http://agentfoundations.org/item?id=1628

I have recently come to the opinion that AGI alignment is probably extremely hard. But it’s not clear exactly what AGI or AGI alignment are. And there are some forms of alignment of “AI” systems that are easy. Here I operationalize “AGI” and “AGI alignment” in some different ways and evaluate their difficulties.

Density Zero Exploration (Alex Mennen): http://agentfoundations.org/item?id=1627

The idea here is due to Scott Garrabrant. All I did was write it.

Logical Induction with incomputable sequences (Alex Mennen): http://agentfoundations.org/item?id=1626

In the definition of a logical inductor, the deductive process is required to be computable. This, of course, does not allow the logical inductor to use randomness or predict uncomputable sequences. Given the way traders were defined in the logical induction paper, this restriction was necessary, because the traders were not given access to the output of the deductive process.

Stable Pointers to Value: An Agent Embedded in Its Own Utility Function (Abram Demski): http://agentfoundations.org/item?id=1622

Conditioning on Conditionals (Scott Garrabrant): http://agentfoundations.org/item?id=1624

(From conversations with Sam, Abram, Tsvi, Marcello, and Ashwin Sah) A basic EDT agent starts with a prior, updates on a bunch of observations, and then has a choice between various actions. It conditions on each possible action it could take, and takes the action for which this conditional leads to the highest expected utility. An updateless (but non-policy-selection) EDT agent has a problem here: it wants to not update on the observations, but it wants to condition on the fact that it takes a specific action given its observations. It is not obvious what this conditional should look like. In this post, I argue for a particular way to interpret conditioning on this conditional (of taking a specific action given a specific observation).
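As a concrete reference point for the "basic EDT agent" described above, here is a minimal sketch; the world model, the policy model used for conditioning on actions, and the utilities are made-up placeholders, and the updateless variant that the post is actually about is precisely what this sketch does not capture:

# A toy "basic EDT" agent: start with a prior over worlds, update on an
# observation, then condition on each candidate action and take the one with
# the highest conditional expected utility.

# Hypothetical world model (placeholder numbers): each world fixes which
# observation is seen and the utility of every (world, action) pair.
worlds = {
    "w1": {"prior": 0.5, "obs": "O1", "utility": {"a": 10, "b": 0}},
    "w2": {"prior": 0.3, "obs": "O1", "utility": {"a": 0, "b": 5}},
    "w3": {"prior": 0.2, "obs": "O2", "utility": {"a": 1, "b": 1}},
}

# To condition on its own action, the agent needs a joint distribution that
# includes the action; here a uniform placeholder P(action | world).
policy_model = {w: {"a": 0.5, "b": 0.5} for w in worlds}

def edt_choice(observation, actions=("a", "b")):
    # Update the prior on the observation.
    posterior = {w: d["prior"] for w, d in worlds.items() if d["obs"] == observation}
    z = sum(posterior.values())
    posterior = {w: p / z for w, p in posterior.items()}

    # Condition on each candidate action and compute expected utility.
    best_action, best_eu = None, float("-inf")
    for a in actions:
        joint = {w: posterior[w] * policy_model[w][a] for w in posterior}
        za = sum(joint.values())
        eu = sum(joint[w] / za * worlds[w]["utility"][a] for w in joint)
        if eu > best_eu:
            best_action, best_eu = a, eu
    return best_action, best_eu

print(edt_choice("O1"))  # -> ("a", 6.25) with these placeholder numbers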

Comment on Cooperative Oracles: Introduction (Scott Garrabrant): http://agentfoundations.org/item?id=1623

The Three Levels of Goodhart's Curse (Scott Garrabrant): http://agentfoundations.org/item?id=1621

Goodhart’s curse is a neologism by Eliezer Yudkowsky stating that “neutrally optimizing a proxy measure U of V seeks out upward divergence of U from V.” It is related to many nearby concepts (e.g. the tails come apart, the winner’s curse, the optimizer’s curse, regression to the mean, overfitting, edge instantiation, Goodhart’s law). I claim that there are three main mechanisms through which Goodhart’s curse operates.
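The “upward divergence of U from V” is easy to exhibit numerically. A minimal Monte Carlo sketch (the Gaussian value-plus-noise model is my own illustrative assumption, not one of the post’s three mechanisms):

import random

random.seed(0)

# True value V and proxy U = V + error, for a large pool of candidate options.
candidates = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(10_000)]
# each tuple is (V, error); the proxy score is U = V + error

# Optimize the proxy: pick the option with the highest U.
best_v, best_err = max(candidates, key=lambda ve: ve[0] + ve[1])

print(f"selected option: U = {best_v + best_err:.2f}, V = {best_v:.2f}, "
      f"U - V = {best_err:.2f}")
# Hard selection on U systematically picks options whose error term is large
# and positive, so the winner's proxy score overstates its true value.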

Comment on Delegative Inverse Reinforcement Learning (Vadim Kosoy): http://agentfoundations.org/item?id=1619
Comment on Delegative Inverse Reinforcement Learning (Tom Everitt): http://agentfoundations.org/item?id=1618
Comment on Delegative Inverse Reinforcement Learning (Vadim Kosoy): http://agentfoundations.org/item?id=1617
On the computational feasibility of forecasting using gamblers (Vadim Kosoy): http://agentfoundations.org/item?id=1593
Open Problems Regarding Counterfactuals: An Introduction For Beginners (Alex Appel): http://agentfoundations.org/item?id=1591

Current thoughts on Paul Christiano's research agenda (Jessica Taylor): http://agentfoundations.org/item?id=1534

This post summarizes my thoughts on Paul Christiano’s agenda in general and ALBA in particular.

"Like this world, but..."http://agentfoundations.org/item?id=1527Stuart Armstrong

A putative new idea for AI control; index here.

Pick a very unsafe goal: \(G=\)“AI, make this world richer and less unequal.” What does this mean as a goal, and can we make it safe?

I’ve started to sketch out how we can codify “human understanding” in terms of human ability to answer questions.

Here I’m investigating the reverse problem, to see whether the same idea can be used to give instructions to an AI.

Improved formalism for corruption in DIRL (Vadim Kosoy): http://agentfoundations.org/item?id=1587

Smoking Lesion Steelman (Abram Demski): http://agentfoundations.org/item?id=1525

It seems plausible to me that every example I’ve seen so far that appears to require causal/counterfactual reasoning is more properly solved by taking the right updateless perspective, and taking the action or policy which achieves maximum expected utility from that perspective. If this were the right view, then the aim would be to construct something like updateless EDT.

I give a variant of the smoking lesion problem which overcomes an objection to the classic smoking lesion, and which is solved correctly by CDT, but which is not solved by updateless EDT.
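For reference, here is the classic smoking lesion computed both ways, with made-up probabilities and utilities; the post's variant, and the objection to the classic version that it overcomes, are not reproduced here:

# Classic smoking lesion with illustrative probabilities (my own choices):
# a lesion causes both a disposition to smoke and cancer; smoking itself is harmless.
p_lesion = 0.5
p_smoke_given_lesion = 0.9
p_smoke_given_no_lesion = 0.1
p_cancer_given_lesion = 0.8
p_cancer_given_no_lesion = 0.1
u_smoke, u_cancer = 10, -100  # utility of smoking, disutility of cancer

def eu(p_cancer, smokes):
    return (u_smoke if smokes else 0) + p_cancer * u_cancer

# Naive EDT: the action is evidence about the lesion, so condition on it.
def p_lesion_given(smokes):
    like_l = p_smoke_given_lesion if smokes else 1 - p_smoke_given_lesion
    like_n = p_smoke_given_no_lesion if smokes else 1 - p_smoke_given_no_lesion
    return like_l * p_lesion / (like_l * p_lesion + like_n * (1 - p_lesion))

def edt_eu(smokes):
    pl = p_lesion_given(smokes)
    return eu(pl * p_cancer_given_lesion + (1 - pl) * p_cancer_given_no_lesion, smokes)

# CDT: intervening on smoking leaves the lesion probability untouched.
def cdt_eu(smokes):
    p_cancer = p_lesion * p_cancer_given_lesion + (1 - p_lesion) * p_cancer_given_no_lesion
    return eu(p_cancer, smokes)

print("EDT:", edt_eu(True), "vs", edt_eu(False))  # refrains: smoking is bad news
print("CDT:", cdt_eu(True), "vs", cdt_eu(False))  # smokes: smoking causes nothing bad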

Delegative Inverse Reinforcement Learning (Vadim Kosoy): http://agentfoundations.org/item?id=1550

We introduce a reinforcement-like learning setting we call Delegative Inverse Reinforcement Learning (DIRL). In DIRL, the agent can, at any point of time, delegate the choice of action to an “advisor”. The agent knows neither the environment nor the reward function, whereas the advisor knows both. Thus, DIRL can be regarded as a special case of CIRL. A similar setting was studied in Clouse 1997, but as far as we can tell, the relevant literature offers few theoretical results and virtually all researchers focus on the MDP case (please correct me if I’m wrong). On the other hand, we consider general environments (not necessarily MDP or even POMDP) and prove a natural performance guarantee.

The use of an advisor allows us to kill two birds with one stone: learning the reward function and safe exploration (i.e. avoiding both the Scylla of “Bayesian paranoia” and the Charybdis of falling into traps). We prove that, given a certain assumption about the advisor, a Bayesian DIRL agent (whose prior is supported on some countable set of hypotheses) is guaranteed to attain most of the value in the limit of slowly falling time discount (i.e. long-term planning), assuming one of the hypotheses in the prior is true. The assumption about the advisor is quite strong, but the advisor is not required to be fully optimal: a “soft maximizer” satisfies the conditions. Moreover, we allow for the existence of “corrupt states” in which the advisor stops being a relevant signal, thus demonstrating that this approach can deal with wireheading and avoid manipulating the advisor, at least in principle (the assumption about the advisor is still unrealistically strong). Finally, we consider advisors that don’t know the environment but have some beliefs about it, and show that in this case the agent converges to Bayes-optimality w.r.t. the advisor’s beliefs, which is arguably the best we can expect.
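To unpack “attain most of the value in the limit”: with a geometric time discount \(\gamma\) (used here purely for concreteness; the post’s class of discount functions may be more general), the normalized value of a policy is \(V_\gamma = (1-\gamma)\,\mathbb{E}\sum_{t} \gamma^{t} r_t\), and guarantees of this kind have the shape

\[ \lim_{\gamma \to 1} \left( V^{*}_{\gamma} - V^{\mathrm{agent}}_{\gamma} \right) = 0 , \]

where \(V^{*}_{\gamma}\) is the value of the optimal policy for the true environment.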

A cheating approach to the tiling agents problem (Vladimir Slepnev): http://agentfoundations.org/item?id=1547

(This post resulted from a conversation with Wei Dai.)

Formalizing the tiling agents problem is very delicate. In this post I’ll show a toy problem and a solution to it, which arguably meets all the desiderata stated before, but only by cheating in a new and unusual way.

Here’s a summary of the toy problem: we ask an agent to solve a difficult math question and also design a successor agent. Then the successor must solve another math question and design its own successor, and so on. The questions get harder each time, so they can’t all be solved in advance, and each of them requires believing in Peano arithmetic (PA). This goes on for a fixed number of rounds, and the final reward is the number of correct answers.

Moreover, we will demand that the agent must handle both subtasks (solving the math question and designing the successor) using the same logic. Finally, we will demand that the agent be able to reproduce itself on each round, not just design a custom-made successor that solves the math question with PA and reproduces itself by quining.
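A minimal sketch of the toy problem’s interaction loop; the function and variable names are placeholders of mine, not taken from the post:

# Round structure of the toy tiling problem (schematic): on each round the
# current agent must both answer a math question and output its successor's
# source code; the final reward is the number of correct answers.

def run_rounds(initial_agent_source, questions, answers, evaluate):
    """evaluate(agent_source, question) is assumed to run the agent and return
    (claimed_answer, successor_source); the questions get harder every round."""
    agent_source = initial_agent_source
    score = 0
    for question, true_answer in zip(questions, answers):
        claimed_answer, successor_source = evaluate(agent_source, question)
        if claimed_answer == true_answer:
            score += 1
        # The successor faces the next (harder) question; a "tiling" agent is
        # one that can safely hand over to an exact copy of itself here.
        agent_source = successor_source
    return score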

Some Criticisms of the Logical Induction paper (Tarn Somervell Fletcher): http://agentfoundations.org/item?id=1544

Loebian cooperation in the tiling agents problem (Vladimir Slepnev): http://agentfoundations.org/item?id=1532

The tiling agents problem is about formalizing how AIs can create successor AIs that are at least as smart. Here’s a toy model I came up with, which is similar to Benya’s old model but simpler. A computer program X is asked one of two questions (a schematic sketch of X follows the list):

  • Would you like some chocolate?

  • Here’s the source code of another program Y. Do you accept it as your successor?
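Schematically, the kind of program X being considered looks as follows; provable_in_PA stands in for a bounded proof search that is only stubbed out here, and the acceptance criterion is my paraphrase of the Löbian-cooperation idea rather than the post's exact condition:

# X answers the chocolate question directly, and accepts a successor Y only if
# it can prove (in PA, via some bounded proof search) that Y would also end up
# taking the chocolate when eventually asked.

def provable_in_PA(statement, max_proof_length=10**6):
    """Placeholder for a bounded proof search in Peano arithmetic."""
    raise NotImplementedError

def program_X(question, payload=None):
    if question == "chocolate?":
        return "yes"  # taking the chocolate is the rewarded outcome
    if question == "successor?":
        successor_source = payload
        # Löbian obstacle: if Y reasons with the same proof system as X, naively
        # demanding a proof that "Y takes the chocolate" runs into self-reference;
        # Löb's theorem is what lets suitably written agents still accept each other.
        return provable_in_PA(f'program {successor_source} answers "yes" to "chocolate?"')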

Humans are not agents: short vs long term (Stuart Armstrong): http://agentfoundations.org/item?id=1515

A putative new idea for AI control; index here.

This is an example of humans not being (idealised) agents.

Imagine a human who has a preference to not live beyond a hundred years. However, they want to live to next year, and it’s predictable that every year they are alive, they will have the same desire to survive till the next year.
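The inconsistency can be made mechanical. A minimal sketch (only the hundred-year cutoff and the yearly desire are from the post; the rest is schematic):

# Year by year the person always prefers surviving to the next year, so the
# myopic choice never terminates before 100, even though the stated long-term
# preference is to not live beyond 100.
age = 0
while True:
    wants_next_year = True  # predictably true every year they are alive
    if not wants_next_year:
        break
    age += 1
    if age > 100:
        print("lived past 100, contradicting the stated long-term preference")
        break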

New circumstances, new values? (Stuart Armstrong): http://agentfoundations.org/item?id=1506

Cooperative Oracles: Stratified Pareto Optima and Almost Stratified Pareto Optima (Scott Garrabrant): http://agentfoundations.org/item?id=1508

In this post, we generalize the notions in Cooperative Oracles: Nonexploited Bargaining to deal with the possibility of introducing extra agents that have no control but have preferences. We further generalize this to infinitely many agents. (Part of the series started here.)

Futarchy, Xrisks, and near misses (Stuart Armstrong): http://agentfoundations.org/item?id=1505

Futarchy Fix (Abram Demski): http://agentfoundations.org/item?id=1493

Robin Hanson’s Futarchy is a proposal to let prediction markets make governmental decisions. We can view an operating Futarchy as an agent, and ask if it is aligned with the interests of its constituents. I am aware of two main failures of alignment: (1) since predicting rare events is rewarded in proportion to their rareness, prediction markets heavily incentivise causing rare events to happen (I’ll call this the entropy-market problem); (2) it seems prediction markets would not be able to assign probability to existential risk, since you can’t collect on bets after everyone’s dead (I’ll call this the existential risk problem). I provide three formulations of (1) and solve two of them, and make some comments on (2). (Thanks to Scott for pointing out the second of these problems to me; I don’t remember who originally told me about the first problem, but also thanks.)
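To see the scale of the entropy-market problem: a contract paying 1 if an event occurs trades at the market probability \(p\), so a manipulator who can cause the event turns a stake \(s\) into \(s/p\), for a profit of

\[ \frac{s}{p}(1-p) \;\approx\; \frac{s}{p} \quad \text{for small } p , \]

i.e. the reward for causing an event grows in inverse proportion to how unlikely the market judged it. (This is generic prediction-market arithmetic, not a calculation from the post.)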

Divergent preferences and meta-preferences (Stuart Armstrong): http://agentfoundations.org/item?id=1492

A putative new idea for AI control; index here.

In simple graphical form, here is the problem of divergent human preferences: