β-redex

Yes, CPUs leak information: that is the output kind of side-channel, where an attacker can transfer information about the computation into the outside world. That is not the kind I am saying one can rule out with merely diligent pursuit of determinism.

I think you are misunderstanding this part; input side channels absolutely exist as well. Spectre, for instance:

On most processors, the speculative execution resulting from a branch misprediction may leave observable side effects that may reveal private data to attackers.

Note that the attacker in this case is the computation that is being sandboxed.
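To make the input channel concrete, here is a toy sketch of my own (not from the post): a single-line prime-and-probe style measurement. It assumes x86 with the time-stamp counter readable, and the ~80-cycle hit/miss threshold is machine-dependent guesswork; real attacks work over whole cache sets or shared lines. The point is only that load latency by itself carries information about what the rest of the machine has been doing. It still uses rdtsc here, which a sandbox could trap, but see the counting-thread sketch further down for why that doesn't save you.

```c
/* Hypothetical single-line prime+probe sketch (x86 only).
 * A fast reload of a line we primed earlier means nobody evicted it; a
 * slow reload means outside activity did. Either way, information about
 * the outside world just flowed into this computation. */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* __rdtsc, _mm_mfence, _mm_lfence */

static uint8_t probe_line[64] __attribute__((aligned(64)));

/* Rough cycle count for a single load of `addr`. */
static uint64_t timed_load(const volatile uint8_t *addr) {
    _mm_mfence();
    _mm_lfence();
    uint64_t start = __rdtsc();
    _mm_lfence();
    (void)*addr;                  /* the load being timed */
    _mm_lfence();
    uint64_t end = __rdtsc();
    return end - start;
}

int main(void) {
    /* Prime: pull our line into the cache. */
    (void)*(volatile uint8_t *)probe_line;
    _mm_mfence();

    /* "Wait" without any clock API, just by burning instructions. */
    for (volatile int i = 0; i < 1000000; i++) { }

    /* Probe: was our line evicted while we waited? */
    uint64_t cycles = timed_load(probe_line);
    printf("reload latency: %llu cycles -> %s\n",
           (unsigned long long)cycles,
           cycles < 80 ? "still cached (quiet neighbourhood)"
                       : "evicted (something else used this cache set)");
    return 0;
}
```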

This implies that we could use relatively elementary sandboxing (no clock access, no networking APIs, no randomness, none of these sources of nondeterminism, and that’s about it) to prevent a task-specific AI from learning any particular facts

It's probably very hard to create such a sandbox, though; your list is definitely not exhaustive. Modern CPUs leak information like a sieve. (The known vulnerabilities are mostly patched, of course, but given this track record plenty more unknown ones likely exist.)
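To illustrate what the "relatively elementary sandboxing" would actually look like, here is a hedged Linux-specific sketch. libseccomp and prctl are real interfaces, but the particular set of syscalls to deny is my own guess, and even with all of this in place nothing touches the microarchitectural channels above.

```c
/* Sketch of the "elementary sandbox": deny the obvious nondeterminism
 * sources (clocks, randomness, network) with seccomp, and trap the
 * userspace RDTSC instruction with prctl. Linux/x86-64 only; link with
 * -lseccomp. Holes this does NOT close: clock_gettime usually goes
 * through the vDSO and never makes a syscall, and cache-contention
 * timing needs no syscall or trappable instruction at all. */
#include <seccomp.h>
#include <stdio.h>
#include <sys/prctl.h>

int main(void) {
    /* Make RDTSC fault instead of returning the time-stamp counter. */
    if (prctl(PR_SET_TSC, PR_TSC_SIGSEGV, 0, 0, 0) != 0)
        perror("prctl(PR_SET_TSC)");

    /* Allow everything by default, then kill the process on the
       syscalls associated with clocks, randomness and networking. */
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);
    if (!ctx) return 1;

    int denied[] = {
        SCMP_SYS(clock_gettime), SCMP_SYS(gettimeofday), SCMP_SYS(time),
        SCMP_SYS(getrandom),
        SCMP_SYS(socket), SCMP_SYS(connect),
    };
    for (size_t i = 0; i < sizeof denied / sizeof denied[0]; i++)
        seccomp_rule_add(ctx, SCMP_ACT_KILL, denied[i], 0);

    if (seccomp_load(ctx) != 0) { seccomp_release(ctx); return 1; }
    seccomp_release(ctx);

    /* ... run or exec the untrusted computation here ... */
    puts("sandbox installed (for whatever little that is worth)");
    return 0;
}
```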

Maybe if you built the purest lambda calculus interpreter, with absolutely no JIT and a deterministic memory allocator, you could prove some security properties even when running on a buggy CPU? This seems like a bit of a stretch, though. (And while running it like this on a single thread might keep the computation from measuring time, any current practical AI needs massive parallelism to execute. With that, probably all hopes of determinism, and of keeping timing information from leaking in, go out the window; see the sketch below.)
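To make the parallelism point concrete, a hypothetical counting-thread sketch of my own: the moment the workload is allowed even one extra thread, it can rebuild a fine-grained clock with no timer API and no rdtsc, so there is nothing left for the sandbox above to deny.

```c
/* A homemade clock from one spare thread: the helper increments a shared
 * counter as fast as it can, and the main computation "times" things by
 * reading the counter before and after. Compile with -pthread. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

static _Atomic uint64_t ticks;

static void *counter_thread(void *arg) {
    (void)arg;
    for (;;)
        atomic_fetch_add_explicit(&ticks, 1, memory_order_relaxed);
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, counter_thread, NULL);

    /* Time some work with no clock syscall and no TSC access: the
       resolution is a handful of cycles, enough for cache-timing probes
       like the one sketched earlier. */
    uint64_t before = atomic_load_explicit(&ticks, memory_order_relaxed);
    volatile double x = 1.0;
    for (int i = 0; i < 1000000; i++) x *= 1.0000001;
    uint64_t after = atomic_load_explicit(&ticks, memory_order_relaxed);

    printf("elapsed: %llu ticks of our homemade clock\n",
           (unsigned long long)(after - before));
    return 0;
}
```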