Here's an argument that I think works in the limit where the number of observations $m \to \infty$. For very large $m$, each subset $S$ of size $s$ will occur many times as an observed coordinate set $S_i$. Moreover, for each set of coordinate values $x_S$, there will be many $i$ such that $S_i = S$ and $X^i_S = x_S$. Therefore, using the law of large numbers, for any subset $S$ of size $s$ and any set of coordinate values $x_S$, Bob can infer $\#\{x' : x'_S = x_S,\ f(x') = 1\}$, i.e. the number of inputs that agree with $x_S$ at the coordinates indexed by $S$ and for which $f$ outputs 1. Additionally, the random variable $Y$ contains no other information about $f$ that is not contained in this set of statistics.
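To make the statistic concrete, here is a small brute-force sketch of my own (the helper name `hyperplane_one_counts`, the dict representation of $f$, and the toy parameters are all my choices, not part of the problem statement). For each coordinate subset $S$ of size $s$ and each assignment of values to those coordinates, it counts the inputs on that hyperplane where $f$ outputs 1:

```python
from itertools import combinations, product

def hyperplane_one_counts(f, n, s):
    """For each coordinate subset S of size s and each assignment v to S,
    count the inputs x with x_S = v and f(x) = 1.
    `f` is a dict mapping each bit-tuple in {0,1}^n to 0 or 1."""
    counts = {}
    for S in combinations(range(n), s):
        for v in product((0, 1), repeat=s):
            counts[(S, v)] = sum(
                f[x] for x in product((0, 1), repeat=n)
                if all(x[j] == vj for j, vj in zip(S, v))
            )
    return counts

# Example: n = 3, s = 1, f = majority of the three bits.
n, s = 3, 1
f = {x: int(sum(x) >= 2) for x in product((0, 1), repeat=n)}
print(hyperplane_one_counts(f, n, s))
```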
Given a function $f$, let $[f]$ denote the set of all functions $g$ such that for each subset $S$ of size $s$ and each set of coordinate values $x_S$, $\#\{x' : x'_S = x_S,\ g(x') = 1\} = \#\{x' : x'_S = x_S,\ f(x') = 1\}$. The argument of the previous paragraph then implies that upon observing $Y$, Bob can deduce the equivalence class $[f]$, but can extract no other information about $f$ from $Y$.
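For very small $n$ one can push this further and enumerate all $2^{2^n}$ functions, grouping them by these statistics to obtain the classes $[f]$ directly. A brute-force sketch along the same lines (again my own illustration, reusing `hyperplane_one_counts` from the previous block):

```python
from collections import defaultdict
from itertools import product

def equivalence_classes(n, s):
    """Group all functions {0,1}^n -> {0,1} by their hyperplane one-counts."""
    inputs = list(product((0, 1), repeat=n))
    classes = defaultdict(list)
    for outputs in product((0, 1), repeat=len(inputs)):
        f = dict(zip(inputs, outputs))
        key = tuple(sorted(hyperplane_one_counts(f, n, s).items()))
        classes[key].append(outputs)
    return list(classes.values())

classes = equivalence_classes(2, 1)
print(len(classes), [len(c) for c in classes])  # number of classes and their sizes
```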
If we have a random variable $X$ and we define another random variable $Y$ as a deterministic function $Y = g(X)$, then it is easy to show that the distributions of $X$ which maximize the mutual information between $X$ and $Y$ are precisely those for which $Y$ has a uniform distribution. Applying this to the above setup, we see that the distributions over $f$ which maximize the channel capacity are those for which $[f]$ is uniformly distributed over all the values that it can take.
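Spelled out (a standard identity, included here for completeness): since $Y$ is a deterministic function of $X$, the conditional entropy $H(Y \mid X)$ vanishes, so

$$I(X;Y) = H(Y) - H(Y \mid X) = H(Y),$$

and $H(Y)$ is maximized exactly when $Y$ is uniform over the values that $g$ actually attains.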
If we suppose in addition that $f$ has the maximum entropy distribution subject to this constraint, we find that:

$$P[f] = \frac{1}{K \cdot |[f]|},$$

where $K$ is the number of distinct equivalence classes.
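Unpacking that formula (my own spelling-out of the step): uniformity over classes forces $P[F \in [f]] = 1/K$ for every $f$, and the maximum entropy distribution consistent with fixed class probabilities spreads mass uniformly within each class, so

$$P[f] = P\big[F \in [f]\big]\cdot\frac{1}{|[f]|} = \frac{1}{K\,|[f]|}.$$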
Intuitively, this says that the probability of a given function $f$ is inversely proportional to the number of functions that have the same number of ones as $f$ on every hyperplane defined by fixing a set of $s$ coordinates. This seems to correspond roughly to sensitivity: we intuitively expect there to be a lot of such functions when the number of ones that $f$ outputs on most hyperplanes is approximately half of the total number of points on that hyperplane, and saying that a function's output is approximately half ones and half zeros on a given hyperplane is roughly saying that $f$ is sensitive to the remaining unspecified coordinates.
It's not obvious to me that the above expression for $P[f]$ is feasible to compute in practice, but I think it is fairly naturally interpretable.
The key reason the problem is tractable in the case $m \to \infty$ is that the law of large numbers means the probabilities wash out and we get an essentially deterministic problem. In the case where $m$ is finite, this won't happen, and in particular I expect you'll run into complications that arise from interaction terms, where the method for combining the information from two observations $(X^i_{S_i}, f(X^i))$ and $(X^j_{S_j}, f(X^j))$ with $S_i \neq S_j$ is not clean.
I expect you're more likely to end up with a tractable solution if you rephrase the problem statement slightly so that the subsets of inputs over which you're aggregating outputs when $f(X^i)$ is observed (in the case above, these subsets are the hyperplanes defined by the observed coordinate values $x_{S_i}$) are either disjoint or identical. (For example, taking all the $S_i$ to be random but equal would ensure this.) It strikes me that there might be a formulation like this that still captures the key features of the problem you're interested in, but I've not thought about this in detail.
I'm fairly sure you can get a result something like "it's not necessary to put positive probability mass on two different functions that can't be distinguished by observing only $s$ bits", so some functions can get zero probability, e.g. the XORs of all combinations of at least $s+1$ bits (restricted to any $s$ known bits, such an XOR's output is still a 50/50 coin flip, so all of these functions look the same to Bob).
edit: The proof is easy. Let $f_1$, $f_2$ be two such indistinguishable functions that you place positive probability on, F be a random variable for the function, and F' be F but with all probability mass for $f_2$ moved onto $f_1$. Then $P[Y \mid F = f_1] = P[Y \mid F = f_2]$ for every observation $Y$. But this means the distribution of $Y$ and the conditional entropy $H(Y \mid F)$ are unchanged by the switch, and so $I(F'; Y) = I(F; Y)$. You don't lose any channel capacity switching to F'.
Informal Problem Statement
We have an information channel between Alice and Bob. Alice picks a function. Bob gets to see the value of that function at some randomly chosen input values... but doesn't know exactly which randomly chosen input values. He does get to see the randomly chosen values of some of the input variables, but not all of them.
The problem is to find which functions Alice should pick with what frequencies, in order to maximize the channel capacity.
Why Am I Interested In This?
I'm interested in characterizing functions which are "insensitive" to subsets of their input variables, especially in high-dimensional spaces. For instance, xor of a bunch of random bits is maximally sensitive: if we have a 50/50 distribution over any one of the bits but know all the others, then all information about the output is wiped out. On the other end of the spectrum, a majority function of a bunch of random bits is highly insensitive: if we have a 50/50 distribution over, say, 10% of the bits, but know all the others, then in most cases we can correctly guess the function's output.
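To make the sensitive/insensitive contrast concrete, here is a small sketch of my own (the function `guess_accuracy`, the choice of n = 11, and hiding a single bit — roughly 10% of the inputs — are all my own illustrative choices). It estimates how often you can guess the output correctly when some input bits are unknown:

```python
import random
from itertools import product

def guess_accuracy(f, n, n_hidden, trials=20000):
    """Estimate P(correct guess of f(x)) when n_hidden uniformly random bits
    of x are unknown and we guess the majority output over all completions."""
    rng = random.Random(0)
    correct = 0
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(n)]
        hidden = rng.sample(range(n), n_hidden)
        # Enumerate all completions of the hidden bits and take a majority vote.
        ones = 0
        for fill in product((0, 1), repeat=n_hidden):
            y = list(x)
            for idx, b in zip(hidden, fill):
                y[idx] = b
            ones += f(y)
        guess = int(ones * 2 > 2 ** n_hidden)  # ties broken toward 0
        correct += (guess == f(x))
    return correct / trials

n = 11
xor = lambda bits: sum(bits) % 2
maj = lambda bits: int(sum(bits) * 2 > len(bits))
print("xor:", guess_accuracy(xor, n, n_hidden=1))  # ~0.5: one unknown bit wipes out everything
print("maj:", guess_accuracy(maj, n, n_hidden=1))  # well above 0.5
```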
I have an argument here that the vast majority of functions $f:\{0,1\}^n \to \{0,1\}$ are pretty highly sensitive: as the number of unknown inputs increases, information falls off exponentially quickly. On the other hand, the example of majority functions shows that this is not the case for all functions.
Intuitively, in the problem, Alice needs to mostly pick from "insensitive" functions, since Bob mostly can't distinguish between "sensitive" functions.
... And Why Am I Interested In That?
I expect that natural abstractions have to be insensitive features of the world. After all, different agents don't all have exactly the same input data. So, a feature has to be fairly insensitive in order for different agents to agree on its value.
In fact, we could view the problem statement itself as a very rough way of formulating the coordination problem of language: Alice has to pick some function f which takes in an image and returns 0/1 representing whether the image contains an apple. (The choice of function defines what "apple" means, for our purposes.) Then Alice wants to teach baby Bob what "apple" means. So, there's some random stuff around them, and Alice points at the random stuff and says "apple" for some of it, and says something besides "apple" the rest of the time. Baby Bob is effectively observing the value of the function at some randomly-chosen points, and needs to back out which function Alice intended. And Bob doesn't have perfect access to all the bits Alice is seeing, so the function has to be robust.
Formal Problem Statement
Consider the following information channel between Alice and Bob:
The problem is to compute the distribution over f which achieves the channel capacity, i.e.
$$\operatorname*{argmax}_{P[f]}\ \sum_{f,Y} P[f]\,P[Y \mid f]\,\ln\frac{P[Y \mid f]}{\sum_{f'} P[Y \mid f']\,P[f']}$$
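For instances small enough that the channel matrix $P[Y \mid f]$ can be written down explicitly, the capacity-achieving distribution can be computed numerically with the standard Blahut–Arimoto iteration. A minimal sketch (the array shapes, tolerance, and toy example channel are my own choices; this only computes the optimum numerically and says nothing about a human-interpretable characterization):

```python
import numpy as np

def blahut_arimoto(P_y_given_f, tol=1e-9, max_iter=10000):
    """Capacity-achieving input distribution P[f] for a channel given as a
    (num_functions x num_observations) row-stochastic matrix."""
    num_f = P_y_given_f.shape[0]
    p = np.full(num_f, 1.0 / num_f)  # start from the uniform distribution
    for _ in range(max_iter):
        q = p @ P_y_given_f  # output marginal P[Y]
        with np.errstate(divide="ignore", invalid="ignore"):
            log_ratio = np.where(P_y_given_f > 0, np.log(P_y_given_f / q), 0.0)
        d = (P_y_given_f * log_ratio).sum(axis=1)  # KL(P[Y|f] || P[Y]) per row
        new_p = p * np.exp(d)
        new_p /= new_p.sum()
        if np.abs(new_p - p).sum() < tol:
            p = new_p
            break
        p = new_p
    # Mutual information at the final distribution (in nats).
    q = p @ P_y_given_f
    with np.errstate(divide="ignore", invalid="ignore"):
        log_ratio = np.where(P_y_given_f > 0, np.log(P_y_given_f / q), 0.0)
    capacity = float(p @ (P_y_given_f * log_ratio).sum(axis=1))
    return p, capacity

# Tiny example: a noisy 2-input, 2-output channel.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
p_opt, C = blahut_arimoto(P)
print(p_opt, C)
```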
Bounty/Prize Info
The problem is to characterize the distribution P[f] which maximizes channel throughput. The characterization should make clear the answers to questions like:
All of these should have human-interpretable answers. No credit will be given for e.g. existence and uniqueness alone (the optimization is convex in the happy direction, so that's pretty easy anyway), or a program which would compute P[f] given superexponential compute.
I may give partial or even full awards for variations on the above problem, depending on my own judgement of how useful they are. For instance, any of the following seem reasonable and potentially valuable:
Deadline is end of June. If there are multiple qualifying answers, then I will award prize money based on how useful I judge each answer to be.