Why is it a narrow target? Humans fall into this basin all the time -- loads of human ideologies exist that self-identify as prohuman, but justify atrocities for the sake of the greater good.

Maybe the space of possible AI goals is broader than the space of human goals, or of human goals subject to the constraint that lots of people (in an ideology) endorse them at once.

and the best economic models we have of AI R&D automation (e.g. Davidson's model) seem to indicate that it could go either way but that more likely than not we'll get to superintelligence really quickly after full AI R&D automation.

I will look into this. takeoffspeeds.com?
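
For anyone else looking: takeoffspeeds.com is the interactive playground for Davidson's model. As a toy sketch of the feedback loop at its core (the returns parameter phi and all numbers are invented for illustration; the real model tracks compute, labor, and automation fractions in much more detail):

```python
# Toy model of the post-automation feedback loop: capability A improves
# itself, and phi captures whether research gets easier or harder as
# capability grows. Illustrative only; not Davidson's calibrated model.

def simulate(phi, steps=200, dt=0.05):
    """dA/dt = L * A**phi, with research labor L proportional to A once
    AI R&D is fully automated, giving dA/dt = A**(1 + phi)."""
    A = 1.0
    for step in range(steps):
        A += dt * A ** (1.0 + phi)
        if A > 1e12:  # call this "superintelligence" for the toy model
            return f"explodes by step {step}"
    return f"reaches A = {A:.1f} after {steps} steps"

for phi in (-0.5, 0.0, 0.5):  # diminishing / constant / increasing returns
    print(f"phi = {phi:+.1f}: {simulate(phi)}")
```

The "either way" in the quote corresponds roughly to the sign of phi: with diminishing returns, growth stays tame; with increasing returns, it blows up in finite time.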

Sorry, I forgot how notifications worked here.

I agree, but there's a way for it to make sense: if the underlying morals/values/etc. are aggregative and consequentialist.

I agree that this could make an AGI with some kind of slightly prohuman goals act this way. It seems to me that being "slightly prohuman" in that way is an unreasonably narrow target, though.

are you sure it is committed to the relationship being linear like that?

It doesn't specifically say the relationship is linear, but I think the posited RSI mechanisms are very sensitive to this. Edit: the problem is mentioned explicitly ("More than ever, compute is the lifeblood of AI development, and the ‘bottleneck’ is deciding how to use it."), but it doesn't seem to be directly addressed beyond the idea of building "research taste" into the AI. That seems tricky, because acquiring research taste is quite a long-horizon task with bad feedback signals.
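
To make the sensitivity concrete, here's a rough sketch in my own framing (not the post's): model research progress as a CES combination of cognitive labor and experiment compute, and vary the substitution parameter rho.

```python
# CES production function for research progress (my framing, illustrative
# numbers). rho = 1 means labor and compute are perfect substitutes (the
# near-linear "just add researchers" picture); rho << 0 means they are
# strong complements, i.e. compute bottlenecks everything.

def progress(labor, compute, rho, share=0.5):
    if rho == 0:  # Cobb-Douglas limit of the CES form
        return labor ** share * compute ** (1 - share)
    return (share * labor ** rho + (1 - share) * compute ** rho) ** (1 / rho)

for rho in (1.0, 0.5, 0.0, -1.0, -5.0):
    gain = progress(10.0, 1.0, rho) / progress(1.0, 1.0, rho)
    print(f"rho = {rho:+.1f}: 10x researchers, fixed compute -> {gain:.2f}x progress")
```

At rho = 1, ten times the researchers buys 5.5x the progress; at rho = -5 it buys about 1.15x. The RSI story looks very different at those two ends.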

I don't find the takeover part especially plausible. It seems odd that something which cares enough about humans to keep them around like that would also kill the vast majority of us earlier, when there are presumably better ways.

This seems broadly plausible up to that point, though. One unaddressed issue is that algorithmic progress might be significantly bottlenecked on compute for running experiments, such that adding more researchers roughly as smart as humans doesn't lead to a corresponding amount of progress.
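
A back-of-the-envelope version of this worry (my numbers, not the post's): if a fraction f of each research cycle is compute-gated experimentation that extra researchers can't speed up, the gain from adding labor is capped at 1/f, Amdahl-style.

```python
# Amdahl's-law-style bound with illustrative numbers: fraction f of the
# research cycle is compute-bound and immune to extra researchers, so the
# total speedup from n times the labor is 1 / (f + (1 - f) / n) <= 1 / f.

def speedup(n, f):
    return 1.0 / (f + (1.0 - f) / n)

for f in (0.1, 0.3, 0.5):
    caps = ", ".join(f"{n}x labor -> {speedup(n, f):.2f}x" for n in (10, 100, 10**6))
    print(f"f = {f:.0%} compute-bound: {caps}")
```

Even with a million human-level researchers, a 50% compute-bound pipeline only runs twice as fast.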