From my perspective, the point of reasoning about complete theories isn't that we actually care about them, it's that "what does this halting oracle output?" might be a useful analogy for "what does this long-running computation output?" I still think it is/was a useful analogy, though the time eventually came to move on to smaller and better things.
Here, I present a failed attempt to build an updateless decision theory out of universal inductors. It fails because it mistakes updatelessness about which logical theory the agent is in for true logical updatelessness about the outputs of computations. I will use Tsvi's notation.
Fix a UGI, (Pn). Fix a sequence of utility functions (Un:2ω→R), each of which assigns a utility to every propositionally consistent world (represented by an infinite bit string). We assume that Un(W) is a computable function of n and the first k(n) bits of W, for some computable function k. In the simplest example, Un is just equal to a single bit of the string.
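As a concrete instance of the computability condition (the names here are illustrative, not from the construction), the simplest example in which Un reads the n-th bit of W satisfies the condition with k(n) = n + 1:

```python
# U_n(W) depends only on the first k(n) bits of the world W.
# Simplest example from the text: U_n(W) is the n-th bit, so k(n) = n + 1.

def k(n):
    return n + 1

def U(n, world_prefix):
    # world_prefix: the first k(n) (or more) bits of an infinite bit string W
    assert len(world_prefix) >= k(n)
    return float(world_prefix[n])
```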
We define a sequence of agents (An), each of which outputs a single bit, 1 or 0. These agents will be broken into two pieces: a deductive process, which outputs a collection of logical facts, and a decision process, which chooses a policy in the form of a function from the possible outputs of the deductive process to {0,1}.
Let Pn denote the set of policies that the decision process can output. There is a computable partition of worlds into sets of worlds where each policy is output, Sn:2ω→Pn. For each p∈Pn, we can compute the expectation, E(Un(W)|Sn(W)=p), where W is sampled according to Pn. The decision process outputs the policy p which maximizes E(Un(W)|Sn(W)=p), and the agent An outputs the result of applying that policy to the output of the deductive process.
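The decision procedure above can be sketched in a small finite toy model. Everything here is a hypothetical stand-in: worlds are 8-bit strings sampled from a toy distribution in place of Pn, the deductive process reveals one bit, the utility is a single bit of the world as in the simplest example, and the partition Sn is an arbitrary computable assignment of worlds to policies.

```python
import random

random.seed(0)

# Toy stand-in for the construction: worlds are 8-bit strings sampled
# from a simple distribution playing the role of the inductor P_n.
def sample_world():
    return tuple(random.randint(0, 1) for _ in range(8))

def utility(world):            # U_n(W): here just a single bit of the world
    return float(world[1])

def deduce(world):             # deductive process: facts available to the agent
    return world[0]

# Policies: functions from the deductive output {0,1} to actions {0,1},
# represented as pairs (action_if_0, action_if_1).
policies = [(a0, a1) for a0 in (0, 1) for a1 in (0, 1)]

def S(world):                  # computable partition S_n: worlds -> policies
    return policies[2 * world[2] + world[3]]   # hypothetical assignment

worlds = [sample_world() for _ in range(20000)]

# E(U_n(W) | S_n(W) = p), estimated by averaging over the sampled worlds
# that fall in the partition cell for policy p.
def conditional_expected_utility(p):
    cell = [w for w in worlds if S(w) == p]
    return sum(utility(w) for w in cell) / len(cell) if cell else float("-inf")

# The decision process outputs the maximizing policy; the agent A_n then
# applies that policy to the output of the deductive process.
best = max(policies, key=conditional_expected_utility)

def agent(world):
    return best[deduce(world)]
```

In this toy model the utility bit is independent of the partition bits, so all four conditional expectations come out near 0.5 and the maximizer is essentially arbitrary; the sketch only illustrates the shape of the procedure, not any interesting decision-theoretic behavior.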
There are actually many things wrong with the above proposal, and many similar proposals that fail in similar or different ways. However, I want to focus on the one problem that proposals like this have in common:
Universal Induction is a model for uncertainty about what theory/model you are in; it is not a model for uncertainty about the output of computations.
It is easiest to see why this is a problem using the counterfactual mugging problem. We would like to use a universal inductor to be uncertain about a digit of π, and thus reason about the world in which it went another way. The problem is that a powerful universal inductor has the digits of π in its probabilities, even if it does not know that it is in PA. This is because the Kolmogorov complexity of an infinite string of the digits of π is very low, while the Kolmogorov complexity of a string that looks like π for a very long time and then changes is high. We do not have to direct our UGI at PA for it to have good beliefs about late bits in a string that starts out looking like π.
I will use the phrase "logical updatelessness" to refer to updatelessness about computations. I think updatelessness about the logical system is mostly a distraction from the more important concept of logical updatelessness. (Similarly, I believe that early work in logical uncertainty about distributions over complete theories was mostly a distraction from the later work that focused on uncertainty about computations.)