I'm not sure I've fully followed, but I suspect you're getting something for nothing in the shift from a type of uncertainty we don't know how to handle to a type we do.
It seems to me like you must be making an implicit assumption somewhere. My guess is that this is where you used to pair with . If you'd instead chosen as the matching then you'd have uncertainty between whether should be or . My guess is that generically this gives different recommendations from your approach.
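To make my worry concrete, here's a toy sketch (the labels and numbers are mine, not your actual objects): the only thing that changes between the two runs is which value each outcome gets paired with, yet the recommended action flips.

```python
# Hypothetical illustration (made-up labels, not the original notation):
# two equally natural matchings between outcomes and values can
# recommend different actions.
utilities = {"w1": 1.0, "w2": 0.2}            # candidate values to be matched
actions = {
    "act_A": {"o1": 0.9, "o2": 0.1},          # P(outcome | action)
    "act_B": {"o1": 0.4, "o2": 0.6},
}

def expected_utility(outcome_probs, matching):
    """Expected utility of an action, given a matching from outcomes to values."""
    return sum(p * utilities[matching[o]] for o, p in outcome_probs.items())

for name, matching in [("pair o1 with w1", {"o1": "w1", "o2": "w2"}),
                       ("pair o1 with w2", {"o1": "w2", "o2": "w1"})]:
    best = max(actions, key=lambda a: expected_utility(actions[a], matching))
    print(f"{name}: recommends {best}")
# pair o1 with w1: recommends act_A
# pair o1 with w2: recommends act_B
```

If that's right, then the choice of matching is doing real work, and that's the implicit assumption I have in mind.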
Seems to me like there are a bunch of challenges. For example, you need extra structure on your space to add things or to tell what's small; and you really want to keep track of long-term impact, not just impact at the next time-step. The long-term one in particular seems thorny (for low-impact approaches in general, not just for this one).
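For concreteness, here's roughly the shape of thing I mean (a toy sketch; the embedding of states in R^n, the "do nothing" baseline rollout, and the discount factor are all assumptions I'm introducing, not anything from the write-up):

```python
import numpy as np

# Assumes states embed in R^n so we can subtract them and take norms --
# exactly the "extra structure" I'm worried about needing.
def one_step_impact(state, next_state):
    """Penalizes only the immediate change; blind to downstream effects."""
    return np.linalg.norm(next_state - state)

def long_horizon_impact(rollout_with_action, rollout_baseline, discount=0.99):
    """Compares a whole rollout against a 'do nothing' baseline rollout,
    discounting later divergence; needs a model of long-run dynamics."""
    return sum(discount ** t * np.linalg.norm(s_a - s_b)
               for t, (s_a, s_b) in enumerate(zip(rollout_with_action,
                                                  rollout_baseline)))
```

Even this crude version shows where the difficulty lands: the one-step term is easy but myopic, while the long-horizon term needs both the vector-space structure and a credible model of what happens far downstream.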
Nevertheless, I think this idea looks promising enough to explore further; I'd also like to hear David's reasons.
For #5, OK, there's something to this. But:
I like your framing for #1.
Thanks for the write-up; this is helpful for me (Owen).
My initial takes on the five steps of the argument as presented, in approximately decreasing order of how much I am on board:
I think the double-decrease effect kicks in under uncertainty, but not under a confident expectation of a smaller network.