Intelligent Agent Foundations Forum
by Paul Christiano 453 days ago | link | parent

Do you think that we can consider this as its own problem, of technology outpacing philosophy, which we can evaluate separately of other aspects of AI risk? Or are these problems tied together in a critical way?

In the past people have argued that we needed to resolve a wide range of philosophical questions prior to constructing AI because we would need to lock in answers to those questions at that point. I would like to push back against that view, while acknowledging that there may be object-level issues where we pay a cost because we lack philosophical understanding (e.g. how to trade off haste vs. extinction risk, how to deal with the possibility of strange physics, how to bargain effectively…). And I would further acknowledge that AI may have a differential effect on progress in physical technology vs. philosophy.

My current tentative view is that the total object-level cost from philosophical error is modest over the next subjective century. I also believe that you overestimate the differential effects of AI, but that’s also not very firm. If my view changed on these points it might make me more enthusiastic about philosophy or metaphilosophy as research projects.

I have a much stronger belief that we should treat metaphilosophy and AI control as separate problems, and in particular that these concerns about metaphilosophy should not significantly dampen my enthusiasm for my current approach to resolving control problems.

by Vladimir Nesov 453 days ago | Patrick LaVictoire likes this | link

I agree with the sentiment that there are philosophical difficulties that AI needs to take into account, but that would very likely take far too long to formulate. Simpler kinds of indirect normativity that involve prediction of uploads allow delaying that work until after AI.

So this issue doesn’t block all actionable work, as its straightforward form might suggest: there may be no need for these activities to occur in that order in physical time. Instead it motivates work on the simpler kinds of indirect normativity that would allow such philosophical investigations to take place inside the AI’s values. In particular, it motivates figuring out what kind of thing the AI’s values are, in sufficient generality that they could represent the results of unexpected future philosophical progress.


by Wei Dai 451 days ago | link

If we could model humans as having well-defined values but being irrational in predictable ways (e.g., due to computational constraints or a limited repertoire of heuristics), then some variant of CIRL might be sufficient (along with solving certain other technical problems such as corrigibility and preventing bugs) for creating aligned AIs. I was (and still am) worried that some researchers think this is actually true, or that by not mentioning further difficulties, they give the wrong impression to policymakers and other researchers.

If you are already aware of the philosophical/metaphilosophical problems mentioned here, and have an approach that you think can work despite them, then it’s not my intention to dampen your enthusiasm. We may differ on how much expected value we think your approach can deliver, but I don’t really know another approach that you can more productively spend your time on.

