
From discussions I had with Sam, Scott, and Jack:

To solve the problem, it would suffice to find a reflexive domain \(X\) with a retraction onto \([0, 1]\).

This is because if you have a reflexive domain \(X\), that is, an \(X\) with a continuous surjective map \(f :: X \rightarrow X^X\), and \(A\) is a retract of \(X\), then there’s also a continuous surjective map \(g :: X \rightarrow A^X\).

Proof: If \(A\) is a retract of \(X\), then we have a retraction \(r :: X \rightarrow A\) and a section \(s :: A \rightarrow X\) with \(r \circ s = 1_A\). Define \(g(x) := r \circ f(x)\). This \(g\) is continuous, since \(f\) is continuous and post-composition with the continuous map \(r\) is continuous in the function-space topology. To show that \(g\) is a surjection, consider an arbitrary \(q \in A^X\). Then \(s \circ q :: X \rightarrow X\), and since \(f\) is a surjection there must be some \(x\) with \(f(x) = s \circ q\). It follows that \(g(x) = r \circ f(x) = r \circ s \circ q = q\). Since \(q\) was arbitrary, \(g\) is a surjection.
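For concreteness, here is a minimal Haskell sketch of the construction, in the same `::` notation as above. The Retract record and the name g are illustrative inventions, not from the discussion, and surjectivity and continuity are properties of the underlying spaces that the types cannot enforce, so they are only recorded in comments.

-- A retract A of X: a section s and a retraction r with r . s = id.
data Retract x a = Retract
  { retraction :: x -> a  -- r :: X -> A
  , section    :: a -> x  -- s :: A -> X
  }

-- Given a reflexive map f :: X -> (X -> X) and a retract A of X,
-- build g :: X -> (X -> A) by post-composing with the retraction r.
g :: Retract x a -> (x -> (x -> x)) -> (x -> (x -> a))
g rx f = \x -> retraction rx . f x

-- Surjectivity witness: for any q :: X -> A, any preimage x0 of
-- (section rx . q) under f satisfies g rx f x0 = q, because
-- retraction rx . section rx = id.

The whole argument is in the last comment: g hits every q :: X -> A precisely because f hits s . q and r . s is the identity on A.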




