Intelligent Agent Foundations Forum

From discussions I had with Sam, Scott, and Jack:

To solve the problem, it would suffice to find a reflexive domain \(X\) of which \([0, 1]\) is a retract.

This is because if you have a reflexive domain \(X\), that is, an \(X\) with a continuous surjective map \(f :: X \rightarrow X^X\), and \(A\) is a retract of \(X\), then there’s also a continuous surjective map \(g :: X \rightarrow A^X\).

Proof: If \(A\) is a retract of \(X\) then we have a retraction \(r :: X \rightarrow A\) and a section \(s :: A \rightarrow X\) with \(r \circ s = 1_A\). Define \(g(x) := r \circ f(x)\). To show that \(g\) is a surjection, consider an arbitrary \(q \in A^X\). Then \(s \circ q :: X \rightarrow X\). Since \(f\) is a surjection there must be some \(x\) with \(f(x) = s \circ q\). It follows that \(g(x) = r \circ f(x) = r \circ s \circ q = q\). Since \(q\) was arbitrary, \(g\) is a surjection.
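The argument above is purely set-theoretic, so it can be checked mechanically. Here is a sketch of the lemma in Lean 4, stating only the function-level skeleton (continuity and the domain structure are omitted, and all names here are mine, not from the post):

```lean
-- Surjectivity, stated from scratch to keep this self-contained.
def Surj {α β : Type} (f : α → β) : Prop := ∀ b, ∃ a, f a = b

-- If f : X → (X → X) is surjective and A is a retract of X
-- (witnessed by r, s with r ∘ s = id), then
-- g x := r ∘ f x is a surjection onto the maps X → A.
theorem retract_surj {X A : Type}
    (f : X → (X → X)) (hf : Surj f)
    (r : X → A) (s : A → X) (hrs : ∀ a, r (s a) = a) :
    Surj (fun x y => r (f x y)) := by
  intro q
  -- f is surjective, so some x hits s ∘ q : X → X
  cases hf (fun y => s (q y)) with
  | intro x hx =>
    refine ⟨x, funext fun y => ?_⟩
    show r (f x y) = q y
    rw [hx]          -- g x y = r (s (q y))
    exact hrs (q y)  -- r ∘ s = id, so this equals q y
```

This mirrors the proof step by step: pick a preimage \(x\) of \(s \circ q\) under \(f\), then cancel \(r \circ s\).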
