I was part of a group that ran a PhilPapers-style survey and metasurvey targeting NLP researchers who publish at venues like ACL. Results are here (Tweet-thread version). It didn't ask about AGI timelines, but it included some other questions that could be of interest to people here:

  • NLP is on a path to AGI: 58% agreed that "Understanding the potential development of artificial general intelligence (AGI) and the benefits/risks associated with it should be a significant priority for NLP researchers."
    • Related: 57% agreed that "Recent developments in large-scale ML modeling (such as in language modeling and reinforcement learning) are significant steps toward the development of AGI."
  • AGI could be revolutionary: 73% agreed that "In this century, labor automation caused by advances in AI/ML could plausibly lead to economic restructuring and societal changes on at least the scale of the Industrial Revolution."
  • AGI could be catastrophic: 36% agreed that "It is plausible that decisions made by AI or machine learning systems could cause a catastrophe this century that is at least as bad as an all-out nuclear war."
    • 46% of women and 53% of URM (underrepresented minority) respondents agreed.
    • The comments suggested that people interpreted this statement in a pretty wide range of ways, including out-of-distribution (OOD) robustness failures leading to weapons launches.
  • Few scaling maximalists: 17% agreed that "Given resources (i.e., compute and data) that could come to exist this century, scaled-up implementations of established existing techniques will be sufficient to practically solve any important real-world problem or application in NLP."
    • The metasurvey responses predicted that 47% would agree with this, so there are fewer scaling maximalists than people expected.
  • Optimism about ideas from cognitive science: 61% agreed that "It is likely that at least one of the five most-cited systems in 2030 will take clear inspiration from specific, non-trivial results from the last 50 years of research into linguistics or cognitive science."
    • This strikes me as very optimistic, since it's pretty clearly false of the most-cited systems today.
  • Optimism about the field: 87% agreed that "On net, NLP research continuing into the future will have a positive impact on the world."
    • 32% of respondents who agreed that NLP will have a positive future impact on society also agreed that there is a plausible risk of global catastrophe.
  • Most NLP research is crap: 67% agreed that "A majority of the research being published in NLP is of dubious scientific value."
4 comments

This was really interesting, thanks for running and sharing! Overall this was a positive update for me. 

"Results are here"

I think this just links to PhilPapers, not your survey results?

Thanks! Fixed link.

Super interesting, thanks!

If you were running it again, you might want to think about standardizing the wording of the questions: it varies from 'will / is' to 'is likely' to 'plausible', and this can make it hard to compare between questions. 'Plausible' in particular is quite a fuzzy word: for some it might mean a probability of 1% or more, while for others it might just mean the outcome isn't completely impossible (if a movie had that storyline, they'd be okay with it).

Fair. For better or worse, a lot of this variation came from piloting—we got a lot of nudges from pilot participants to move toward framings that were perceived as controversial or up for debate.