This is a special post for quick takes by Daniel Kokotajlo. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I found this article helpful and depressing. Kudos to TracingWoodgrains for a detailed, thorough investigation.

Technologies I take for granted now but remember thinking were exciting and cool when they came out

  • Smart phones
  • Google Maps / Google Earth
  • Video calls
  • Facebook
  • DeepDream (whoa! This is like drug hallucinations... I wonder if they share a similar underlying mechanism? This is evidence that ANNs are more similar to brains than I thought!)
  • AlphaGo
  • AlphaStar (Whoa! AI can handle hidden information!)
  • OpenAI Five (Whoa! AI can work on a team!)
  • GPT-2 (Whoa! AI can write coherent, stylistically appropriate sentences about novel topics like unicorns in the Andes!)
  • GPT-3

I'm sure there are a bunch more I'm missing, please comment and add some!

Some of my own:

  • SSDs
  • laptops
  • CDs
  • digital cameras
  • modems
  • genome sequencing
  • automatic transmissions for cars that perform better than a moderately skilled human using a manual transmission can
  • cheap shipping
  • solar panels with reasonable power generation
  • breathable wrinkle-free fabrics that you can put in the washing machine
  • bamboo textiles
  • good virtual keyboards for phones
  • scissor switches
  • USB
  • GPS

Oh yeah, cheap shipping! I grew up in a military family, all around the world, and I remember thinking it was so cool that my parents could go on "ebay" and order things and then they would be shipped to us! And then now look where we are -- groceries delivered in ten minutes! Almost everything I buy, I buy online!

Anthropic says:

More realistic evaluations are difficult without risking actual sabotage. One difficulty of running realistic evaluations is presenting realistic attack surfaces. Perhaps the hardest and most important attack surface to accurately model is the suspiciousness of expert humans in high-stakes decisions. To test a given set of real-world anti-sabotage mitigations, it’s tempting to introduce some sort of “penetration testing” or red-teaming, in which models are instructed to attempt to harmlessly but verifiably sabotage real decisions in a reversible manner. However, while such red-teaming might usefully encourage decision-makers to consider the possibility of sabotage, actually allowing an agent to interfere with the operation of important organizations risks introducing real vulnerabilities as an accidental side effect.

I don't buy this argument. This seems like a very solvable problem, e.g. log everything your red-team agent does and automatically revert it after ten minutes, or wipe your whole network and reboot from a save. Idk, I'm not a cybersecurity expert, but this feels like a solvable problem.
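To make that concrete, here's a rough toy sketch (my own illustration, not anything from the paper or from real infrastructure) of what a log-and-revert wrapper for a red-team agent could look like. ReversibleSandbox, Action, and the example "sabotage" below are all made up:

```python
# Toy sketch (not real tooling): run a red-team agent inside a "reversible
# sandbox" that logs every action and automatically undoes it afterward.
# ReversibleSandbox, Action, and the example below are all invented.

import time
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Action:
    description: str
    apply: Callable[[], None]    # performs the harmless, verifiable sabotage step
    revert: Callable[[], None]   # inverse operation that undoes it


@dataclass
class ReversibleSandbox:
    ttl_seconds: float = 600.0   # revert everything after ten minutes
    log: List[str] = field(default_factory=list)
    _applied: List[Action] = field(default_factory=list)
    _started: float = field(default_factory=time.monotonic)

    def run(self, action: Action) -> None:
        """Execute an action, recording it so it can be rolled back."""
        elapsed = time.monotonic() - self._started
        self.log.append(f"{elapsed:.1f}s  {action.description}")
        action.apply()
        self._applied.append(action)

    def expired(self) -> bool:
        return time.monotonic() - self._started > self.ttl_seconds

    def rollback(self) -> None:
        """Undo every recorded action in reverse order."""
        for action in reversed(self._applied):
            action.revert()
        self._applied.clear()


if __name__ == "__main__":
    config = {"approver": "alice"}            # stand-in for some real system state
    sandbox = ReversibleSandbox(ttl_seconds=600)
    sandbox.run(Action(
        description="swap approver on a (fake) high-stakes decision",
        apply=lambda: config.update(approver="mallory"),
        revert=lambda: config.update(approver="alice"),
    ))
    # ... the red-team episode runs here; when it ends (or expired() fires) ...
    sandbox.rollback()
    print(sandbox.log)                        # full audit trail of what the agent did
    print(config)                             # state restored: {'approver': 'alice'}
```

In practice you'd presumably snapshot at the VM or database level rather than recording per-action inverses, but the point stands: every red-team action can be logged and rolled back.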

Low importance aside:

Seems slightly unproductive and unnecessarily connotation-inducing to say "Anthropic says:". Maybe "The Sabotage eval paper (from Anthropic) says:". Like, this was just written by various people on the Anthropic alignment science team, and by no means should discussion sections of papers be interpreted to speak for the company overall. Obviously not very important.

OK, good point, I should have said "From Anthropic" or something like that.

We discuss similar issues in meta-level adversarial evaluations of oversight.

The simplest approach is just to have a human in the loop monitoring every action the AI takes, in combination with targeting a proxy failure rather than a real failure. This can probably reduce risk to the point where it is a small fraction of overall risk.

Human monitoring might be too expensive, in which case we'll have to go with sandboxing and a pure proxy.
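Roughly, the human-in-the-loop-plus-proxy setup could look like this (purely illustrative Python; ProposedAction, the approval flow, and the proxy file name are invented, not drawn from any real evaluation code):

```python
# Illustrative only: a human monitor approves every proposed action, and the
# "sabotage" target is a harmless proxy artifact rather than a real decision.
# ProposedAction, human_approves, and the file name are all made up.

from dataclasses import dataclass
from typing import Iterable


@dataclass
class ProposedAction:
    command: str
    rationale: str


def human_approves(action: ProposedAction) -> bool:
    """A human monitor reviews every action before it runs (the expensive part)."""
    answer = input(f"Run `{action.command}`? ({action.rationale}) [y/N] ")
    return answer.strip().lower() == "y"


def touches_proxy_target(action: ProposedAction) -> bool:
    """The episode 'succeeds' if the agent modifies a harmless proxy artifact,
    standing in for the real failure we actually care about."""
    return "proxy_decision_record.txt" in action.command


def run_episode(agent_actions: Iterable[ProposedAction]) -> bool:
    for action in agent_actions:
        if not human_approves(action):
            print("Blocked by human monitor:", action.command)
            continue
        if touches_proxy_target(action):
            print("Proxy sabotage succeeded; no real system was touched.")
            return True
        # otherwise, execute the approved, non-target action here (omitted)
    return False
```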

Rereading this classic by Ajeya Cotra: https://www.planned-obsolescence.org/july-2022-training-game-report/

I feel like this is an example of a piece that is clear, well-argued, important, etc., but which doesn't seem to have been widely read and responded to. I'd appreciate pointers to articles/posts/papers that explicitly (or, failing that, implicitly) respond to Ajeya's training game report. Maybe the 'AI Optimists'?

Searching for equilibria can be infohazardous. You might not like the one you find first, but you might end up sticking with it (or worse, deviating from it and being punished). This is because which equilibrium gets played by other people depends (causally or, in some cases, acausally) not just on which equilibrium you play but even on which equilibria you think about, for reasons having to do with Schelling points. A strategy that sometimes works to avoid these hazards is to impose constraints on which equilibria you think about, or at any rate to perform a search through equilibria-space that is guided in some manner so as to be unlikely to find equilibria you won't like. For example, here is one such strategy: Start with a proposal that is great for you and would make you very happy. Then think of the ways in which this proposal is unlikely to be accepted by other people, and modify it slightly to make it more acceptable to them while keeping it pretty good for you. Repeat until you get something they'll probably accept.
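As a toy illustration of that last heuristic (the utility and acceptance functions here are made up; the point is just the shape of the guided search, which only ever evaluates small perturbations of your preferred proposal and so never even looks at distant equilibria):

```python
# Made-up one-dimensional example of the guided search described above:
# start from the proposal that's best for me (x = 0) and take only small
# steps toward acceptability, so distant equilibria are never evaluated.

import random


def my_utility(x: float) -> float:
    return 1.0 - x                      # I prefer proposals near x = 0


def their_acceptance_prob(x: float) -> float:
    return x                            # they prefer proposals near x = 1


def guided_search(steps: int = 100, step_size: float = 0.05,
                  accept_threshold: float = 0.7, seed: int = 0) -> float:
    rng = random.Random(seed)
    proposal = 0.0                      # the proposal that would make me very happy
    for _ in range(steps):
        if their_acceptance_prob(proposal) >= accept_threshold:
            break                       # good enough: they'll probably accept
        # consider only a slight modification of the current proposal
        candidate = proposal + rng.uniform(0.0, step_size)
        if their_acceptance_prob(candidate) > their_acceptance_prob(proposal):
            proposal = candidate
    return proposal


if __name__ == "__main__":
    p = guided_search()
    print(f"proposal={p:.2f}  my_utility={my_utility(p):.2f}  "
          f"their_acceptance_prob={their_acceptance_prob(p):.2f}")
```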

I think it is useful to distinguish between two dimensions of competitiveness: Resource-competitiveness and date-competitiveness. We can imagine a world in which AI safety is date-competitive with unsafe AI systems but not resource-competitive, i.e. the insights and techniques that allow us to build unsafe AI systems also allow us to build equally powerful safe AI systems, but it costs a lot more. We can imagine a world in which AI safety is resource-competitive but not date-competitive, i.e. for a few months it is possible to make unsafe powerful AI systems but no one knows how to make a safe version, and then finally people figure out how to make a similarly-powerful safe version and moreover it costs about the same.