Intelligent Agent Foundations Forum
Resolving human inconsistency in a simple model
post by Stuart Armstrong 14 days ago

A putative new idea for AI control; index here.

This post will present a simple model of an inconsistent human, and ponder how to resolve their inconsistency.

Let \(\bf{H}\) be our agent, in a turn-based world. Let \(R^l\) and \(R^s\) be two simple reward functions at each turn. The reward \(R^l\) is thought of as being a ‘long-term’ reward, while \(R^s\) is a short-term one.


Define \(R^l_t\) as the agent’s \(R^l\) reward at turn \(t\) (and similarly \(R^s_t\) for \(R^s\)). Then, at turn \(t\), the agent \(\bf{H}\) has reward:

  • \(R_t=\sum_{\tau=0}^\infty \left[(\gamma_l)^\tau R^l_{t+\tau}+ (\gamma_s)^\tau R^s_{t+\tau}\right]\),

with constants \(0<\gamma_s < \gamma_l \leq 1\).
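As a minimal numeric sketch of this definition (the discount values 0.9 and 0.6 and the finite reward arrays are invented for illustration, and the infinite sum is truncated at the end of the arrays):

```python
import numpy as np

def reward_at(t, r_l, r_s, gamma_l=0.9, gamma_s=0.6):
    """R_t = sum over tau of gamma_l^tau * R^l_{t+tau} + gamma_s^tau * R^s_{t+tau},
    truncated to the length of the given reward arrays."""
    taus = np.arange(len(r_l) - t)
    return (np.sum(gamma_l ** taus * np.asarray(r_l)[t:])
            + np.sum(gamma_s ** taus * np.asarray(r_s)[t:]))

# Example: constant unit rewards on both streams over 50 turns.
r_l = np.ones(50)
r_s = np.ones(50)
print(reward_at(0, r_l, r_s))  # roughly 1/(1-0.9) + 1/(1-0.6) ≈ 12.4 over a long horizon
```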

Essentially, \(R^s\) and \(R^l\) have different discount rates, with the reward from \(R^s\) fading much faster than that of \(R^l\). Therefore the agent will be motivated to get the \(R^s\) reward, but only if they can get it in the short term. Sex, drugs, food, and many other pleasures often have these features (though they are, of course, much more complicated).

The inconsistency is that the human will continually reset their \(R_t\) at each turn. If there were a single discount rate, that wouldn’t be a problem, as resetting would just scale the whole reward function, and reward functions, like utility functions, give the same decisions when scaled.

But with two discount rates, this is inconsistent. The agent will try to follow \(R^l\) for long-term planning, but this will be disrupted if they encounter an \(R^s\) along the way (and then presumably berate themselves for their lack of self-discipline). This can also be seen as a variant of the “humans are composed of multiple subagents” model, with \(R^s\) corresponding to a short-term greedy subagent.
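To make the inconsistency concrete, here is a toy preference reversal (all the numbers, the payoff sizes, and the `value_at` helper are invented for illustration, with \(\gamma_l=0.9\) and \(\gamma_s=0.6\)): from turn 0 the agent prefers a large \(R^l\) payoff at turn 15 over a small \(R^s\) payoff at turn 5, but once turn 5 arrives the ranking flips.

```python
# Illustrative numbers (not from the post): a single R^s payoff of 6 at turn 5
# versus a mutually exclusive R^l payoff of 10 at turn 15.
gamma_l, gamma_s = 0.9, 0.6

def value_at(t, payoff, arrival, gamma):
    """Discounted value, seen from turn t, of a payoff arriving at turn `arrival`."""
    return payoff * gamma ** (arrival - t)

for t in (0, 5):
    short = value_at(t, 6.0, 5, gamma_s)    # the R^s option
    long_ = value_at(t, 10.0, 15, gamma_l)  # the R^l option
    better = "short-term" if short > long_ else "long-term"
    print(f"turn {t}: R^s option = {short:.2f}, R^l option = {long_:.2f} -> prefers {better}")
# turn 0 prefers the long-term option; turn 5 prefers the short-term one.
```

Each stream on its own is exponentially discounted and hence time-consistent; the reversal only appears when options drawing on the two differently-discounted streams are compared.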

So, we have a simple and not-completely-implausible model of an inconsistent human. The question is, how do we resolve it? None of the obvious approaches are ideal, but it’s worth looking at their features.

Freeze the reward

This is the most obvious approach: freeze the reward, so that the reward at time \(t'>t\) is the same as the reward at time \(t\) (though, in the absence of time travel, the rewards between \(t\) and \(t'\) are no longer relevant).

In practice, though, this becomes equivalent to simply forgetting about \(R^s\) entirely. After a few turns, the exponential shrinkage of the factor \((\gamma_s/\gamma_l)^\tau\) will make \(R^s\)’s typical contribution insignificant. So this approach involves destroying one of \(\bf{H}\)’s sources of reward almost entirely.
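A quick check of that shrinkage, using the same illustrative discounts as above:

```python
gamma_l, gamma_s = 0.9, 0.6  # illustrative values only
for tau in (0, 5, 10, 20):
    print(f"tau = {tau:2d}: relative weight of R^s vs R^l = {(gamma_s / gamma_l) ** tau:.5f}")
# 1.00000, 0.13169, 0.01734, 0.00030 -- the R^s contribution vanishes within a few turns.
```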

Balance the rewards

Another approach would be to balance the rewards by setting \(\gamma_s=\gamma_l\), either at the initial value of \(\gamma_l\), the initial value of \(\gamma_s\), or some other value.

This would make the reward consistent, but has the opposite problem to the previous approach: the long-term importance of \(R^s\) is now massively magnified relative to \(R^l\), so long-term plans will prioritise \(R^s\) over \(R^l\) much more than before.
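With the same illustrative discounts and constant unit reward streams (so the geometric sums are easy), the long-run weight on \(R^s\) relative to \(R^l\) jumps from 1/4 to 1 when \(\gamma_s\) is raised to \(\gamma_l\):

```python
gamma_l, gamma_s = 0.9, 0.6  # illustrative values only

def total_weight(gamma):
    """Sum of gamma^tau over tau >= 0 (requires gamma < 1): total weight on a constant stream."""
    return 1.0 / (1.0 - gamma)

before = total_weight(gamma_s) / total_weight(gamma_l)  # 0.25
after = total_weight(gamma_l) / total_weight(gamma_l)   # 1.0
print(f"R^s weight relative to R^l: before = {before:.2f}, after balancing = {after:.2f}")
```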

Narrative/frequentist/feature based approach

Since we’re not supposed to do this, let’s anthropomorphise \(\bf{H}\). We can imagine that \(\bf{H}\) has some sort of narrative about their existence – they see themselves as being a certain type of person (possibly mainly connected with \(R^l\)), who has some quirks/indulgences/sins (possibly mainly connected with \(R^s\)).

If they want to extirpate \(R^s\) entirely (“sin”), this is the same as the “Freeze the reward” approach. But they may instead prefer to live their life with roughly the same proportion of \(R^s\) as before (“quirk”), or slightly less (“indulgence”).

In that case, the new \(R\) would be chosen for consequentialist reasons: not by looking at the individual terms \(R^l\), \(R^s\), \(\gamma_l\), and \(\gamma_s\), but by looking at the consequences of following \(R_t\) under “typical” circumstances, and designing the new reward to replicate this behaviour (and this distribution of reward features), while allowing more efficiency. This is, in itself, an interesting IRL problem. But it seems to make sense for humans, as we define ourselves a lot by what we do and experience, rather than by the pleasures and choices that lead up to those experiences.
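One very rough way to cast that as code (everything here is a hypothetical sketch: the grid search, the choice of features, and the `rollout_features` black box that simulates the optimal policy for a candidate consistent reward are all assumptions, not something from the post):

```python
import numpy as np

def fit_consistent_reward(observed_freqs, candidate_weights, rollout_features, gamma=0.9):
    """Hypothetical sketch: pick the weight w on R^s, in a single-discount reward
    R = R^l + w * R^s, whose optimal policy best reproduces the observed long-run
    frequencies of R^l- and R^s-type rewards.

    observed_freqs   : np.array([avg R^l per turn, avg R^s per turn]) under the
                       inconsistent agent's "typical" behaviour.
    candidate_weights: iterable of weights w to try.
    rollout_features : assumed black box; rollout_features(w, gamma) simulates the
                       policy optimal for R^l + w * R^s with discount gamma and
                       returns its average [R^l, R^s] consumption per turn.
    """
    best_w, best_gap = None, np.inf
    for w in candidate_weights:
        gap = np.linalg.norm(np.asarray(rollout_features(w, gamma)) - observed_freqs)
        if gap < best_gap:
            best_w, best_gap = w, gap
    return best_w
```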




