It looks like Theorem 1 can be improved slightly, by dropping the "only if" condition on $p_{CD}>0$. We can then code up something like Kolmogorov complexity by adding a probability $\tfrac{1}{2}$ transition from every site to our chosen UTM.
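To spell out one way the modified chain might look (the notation $P$, $U$ and the particular formalisation below are my own guesses at what's intended): write $P$ for the original transition matrix, $U$ for the chosen UTM, and let each step be a fair coin flip between a $P$-step and a jump to $U$:
$$P'_{CD} \;=\; \tfrac{1}{2}\,P_{CD} \;+\; \tfrac{1}{2}\,[D=U].$$
In the row-vector convention $\pi'P'=\pi'$, solving formally via the Neumann series gives
$$\pi'(C) \;=\; \sum_{n\ge 0} 2^{-(n+1)}\,(P^n)_{UC},$$
so each site is weighted by $2^{-(n+1)}$ times the $P$-probability of reaching it from $U$ in $n$ steps, summed over $n$, which is where the Kolmogorov-complexity flavour would come from.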
If you only want the weaker statement that there is no stationary distribution, it looks like there's a cheaper argument: Since $\Phi$ is aperiodic and irreducible, the hypothetical stationary distribution $\pi$ is unique. $\Phi$ is closed under the action of $\Delta$, and (2) implies that for any $g\in\Delta$, the map $\Gamma_g$ is an automorphism of the Markov chain. If the (infinite) transition matrix is $T$, then $\Gamma_g$ can be considered as a permutation matrix with (abusing notation) $\Gamma_g^{-1}T\Gamma_g=T$. Then $T\Gamma_g\pi=\Gamma_g\pi$, and so $\Gamma_g\pi=\pi$ by uniqueness. So $\pi$ is constant on orbits of $\Gamma_\Delta$, which are all countably infinite. Hence $\pi$ is everywhere $0$, a contradiction.
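Spelling out the key step (keeping the convention above that the stationary $\pi$ satisfies $T\pi=\pi$):
$$T(\Gamma_g\pi) \;=\; \Gamma_g\big(\Gamma_g^{-1}T\Gamma_g\big)\pi \;=\; \Gamma_g T\pi \;=\; \Gamma_g\pi,$$
so $\Gamma_g\pi$ is also stationary, and uniqueness forces $\Gamma_g\pi=\pi$. Since $\pi$ is constant on each countably infinite orbit $O$, the total mass $\sum_{C\in O}\pi(C)$ is either $0$ or $\infty$, and normalisation then forces the constant value on every orbit to be $0$.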
The above still holds if (2) is restricted to hold only for a subgroup $G<\Delta$ such that every orbit under $\Gamma_G$ is infinite.
I think the above argument shows why (2) is too strong; we shouldn't expect the world to look the same if you pick a "wrong" (i.e. complicated) UTM to start off with. Weakening (2) might mean asserting only something like $p_{CD}=\sum_{\Gamma}\mu(\Gamma)\,p_{\Gamma(C)\Gamma(D)}$. To do this, we might define the measures $\mu$ and $p$ together (i.e. finding a fixed point of a map from pairs $(p,\mu)$ to $(p',\mu')$). In such a model, $\mu$ constrains the transition probabilities and $p'$ is stationary; it's not clear how one might formalise a derivation of $\mu'$ from $p'$, but it seems plausible that there is a canonical way to do it.
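As a very rough sketch of the fixed-point shape (only a guess at a formalisation; the map $F$ producing $\mu'$ is exactly the part left open above, and reading $p'$ as the $\mu$-averaged kernel rather than as a stationary distribution is my assumption):
$$p'_{CD} \;=\; \sum_{\Gamma}\mu(\Gamma)\,p_{\Gamma(C)\Gamma(D)},\qquad \mu' \;=\; F(p'),$$
and one would then look for a pair with $(p',\mu')=(p,\mu)$, so that at a fixed point $p$ satisfies the weakened version of (2) exactly.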