All of osmarks's Comments + Replies

Why is it a narrow target? Humans fall into this basin all the time -- loads of human ideologies exist that self-identify as prohuman, but justify atrocities for the sake of the greater good.

The space of possible AI goals can maybe be broader than the space of human goals, or of human goals subject to the constraint that lots of people (in an ideology) have to endorse them at once.

and the best economic models we have of AI R&D automation (e.g. Davidson's model) seem to indicate that it could go either way but that more likely than not we'll get to superintelligence really quickly after full AI R&D automation.

I will look into this. takeoffspeeds.com?

Daniel Kokotajlo
Yep, takeoffspeeds.com, though actually IMO there are better models now that aren't public and aren't as polished/complete. (By Tom Davidson, and by my team)
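
For a concrete sense of the shape these models take: below is a toy semi-endogenous growth law, not the actual takeoffspeeds.com equations (delta, lam, and phi are illustrative stand-ins).

```python
# Toy semi-endogenous growth law: the rough *shape* used by takeoffspeeds.com-style
# models of AI R&D automation. Illustrative only, not Davidson's actual equations,
# parameter values, or bottleneck structure.
#
#   dA/dt = delta * R**lam * A**phi
#
#   A   = software/algorithmic level
#   R   = effective research input (human plus AI researchers)
#   lam = "stepping on toes" exponent; lam < 1 means parallel effort has diminishing returns
#   phi = how the difficulty of further progress changes as A grows

def dA_dt(A: float, R: float, delta: float = 1.0, lam: float = 0.7, phi: float = 0.5) -> float:
    return delta * (R ** lam) * (A ** phi)

# Key qualitative fact: once research input itself scales with A (full AI R&D
# automation, R proportional to A), growth is hyperbolic (finite-time blowup) iff
# lam + phi > 1, exponential if lam + phi == 1, and sub-exponential otherwise.
# "How fast after full automation" mostly comes down to those two exponents.
```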

Sorry, I forgot how notifications worked here.

I agree, but there's a way for it to make sense: if the underlying morals/values/etc. are aggregative and consequentialist.

I agree that this could make an AGI with some kind of slightly prohuman goals act this way. It seems to me that being "slightly prohuman" in that way is an unreasonably narrow target, though.

are you sure it is committed to the relationship being linear like that?

It does not specifically say there is a linear relationship, but I think the posited RSI mechanisms are very sensitive to ...

Daniel Kokotajlo
Why is it a narrow target? Humans fall into this basin all the time -- loads of human ideologies exist that self-identify as prohuman, but justify atrocities for the sake of the greater good. As for RSI mechanisms: I disagree, I think the relationship is massively sublinear but nevertheless that RSI will happen, and the best economic models we have of AI R&D automation (e.g. Davidson's model) seem to indicate that it could go either way but that more likely than not we'll get to superintelligence really quickly after full AI R&D automation.
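
To illustrate how "massively sublinear" and "superintelligence really quickly after full automation" can coexist, and how it could also go the other way, here is a tiny simulation of a toy growth law with research input tied to the current software level. All numbers are made up.

```python
# Tiny Euler simulation of the toy law dA/dt = (c*A)**lam * A**phi, i.e. research
# input proportional to the current software level (full automation). Made-up
# numbers, purely to show how sensitive the outcome is to the exponents.

def years_to_multiply(lam: float, phi: float, target: float = 1e6, A0: float = 1.0,
                      c: float = 1.0, dt: float = 0.001, max_years: float = 200.0):
    """Years until A grows by `target`x; returns None if it doesn't happen in max_years."""
    A, t = A0, 0.0
    while A < A0 * target:
        A += ((c * A) ** lam) * (A ** phi) * dt
        t += dt
        if t > max_years:
            return None
    return t

# lam = 0.3 is strongly sublinear in research input in both runs:
print(years_to_multiply(lam=0.3, phi=0.8))  # lam + phi > 1: roughly 7-8 years to a 10^6x gain
print(years_to_multiply(lam=0.3, phi=0.5))  # lam + phi < 1: roughly 70+ years for the same gain
```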

I don't find the takeover part especially plausible. It seems odd for something which cares enough about humans to keep them around like that to also kill the vast majority of us earlier, when there are presumably better ways.

This seems broadly plausible up to there, though. One unaddressed thing is that algorithmic progress might be significantly bottlenecked on compute to run experiments, such that adding more researchers roughly as smart as humans doesn't lead to corresponding amounts of progress.
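
A minimal sketch of that bottleneck point, treating experiment compute and cognitive labor as complements in a CES aggregate; the functional form and numbers are illustrative, not a claim about the real production function.

```python
# CES research-output aggregate with compute and cognitive labor as gross complements
# (elasticity of substitution below 1, i.e. rho < 0). Illustrative only.

def research_output(labor: float, compute: float, rho: float = -2.0, a: float = 0.5) -> float:
    return (a * labor ** rho + (1 - a) * compute ** rho) ** (1 / rho)

baseline = research_output(labor=1.0, compute=1.0)

# 100x the researchers with experiment compute held fixed: output rises only ~41%,
# because the fixed-compute term dominates the aggregate.
print(research_output(labor=100.0, compute=1.0) / baseline)    # ~1.41

# Scale both inputs and output scales proportionally (constant returns).
print(research_output(labor=100.0, compute=100.0) / baseline)  # 100.0
```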

I agree, but there's a way for it to make sense: if the underlying morals/values/etc. are aggregative and consequentialist. Pretty much anything can be justified for the sake of pretty much any distant-future Greater Good; if the misaligned AI e.g. wants humans to live, but thinks that the transhuman future they'd build on their own is slightly worse than the 'managed utopia' it could build if it were in charge, and it multiplies the numbers, it can easily find that killing most people and then having billions of years of managed utopia is better overall t...
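
To spell out the "multiplies the numbers" step, a toy calculation with deliberately made-up values; none of these numbers are real estimates.

```python
# Toy aggregative-consequentialist arithmetic. Every number below is made up purely
# to illustrate the structure of the argument: a tiny per-person-year quality gap,
# integrated over astronomical scales, swamps the one-off cost of the takeover.

people             = 1e10    # long-run population
years              = 1e9     # duration of the future being compared
value_gap_per_year = 1e-3    # how much "managed utopia" beats the self-built future, per person-year
deaths             = 8e9     # people killed in the takeover
loss_per_death     = 1e2     # person-years of value lost per death (order-of-magnitude guess)

gain = people * years * value_gap_per_year   # 1e16
cost = deaths * loss_per_death               # 8e11

print(gain / cost)  # ~12500: the "greater good" term dominates by about four orders of magnitude
```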