Review

This is a brief, stylized recounting of a few conversations I had at some point last year with people from the non-academic AI safety community:[1] 

Me: you guys should write up your work properly and try to publish it in ML venues.

Them: well that seems like a lot of work and we don't need to do that because we can just talk to each other and all the people I want to talk to are already working with me.

Me: What about the people who you don't know who could contribute to this area and might even have valuable expertise?  You could have way more leverage if you can reach those people.  Also, there is increasing interest from the machine learning community in safety and alignment... because of progress in capabilities people are really starting to consider these topics and risks much more seriously.

Them: okay, fair point, but we don't know how to write ML papers.

Me: well, it seems like maybe you should learn or hire people to help you with that then, because it seems like a really big priority and you're leaving lots of value on the table.

Them: hmm, maybe... but the fact is, none of us have the time and energy and bandwidth and motivation to do that; we are all too busy with other things and nobody wants to.

Me: ah, I see!  It's an incentive problem!  So I guess your funding needs to be conditional on you producing legible outputs.

Me, reflecting afterwards: hmm... Cynically,[2] not publishing is a really good way to create a moat around your research... People who want to work on that area have to come talk to you, and you can be a gatekeeper.  And you don't have to worry about somebody with more skills and experience coming along and trashing your work or out-competing you and rendering it obsolete...

EtA: In comments, people have described adhering to academic standards of presentation and rigor as "jumping through hoops".  There is an element of that, but this really misses the value that these standards have to the academic community.  This is a longer discussion, though...

  1. ^

    There are sort of 3 AI safety communities in my account:
    1) people in academia
    2) people at industry labs who are building big models
    3) the rest (the Alignment Forum/LessWrong and EA being big components). I'm not sure where to classify new orgs like Conjecture and Redwood, but for the moment I put them here.

    I'm referring to the last of these in this case.

  2. ^

    I'm not accusing anyone of having bad motivations; I think it is almost always valuable to consider both people's conscious motivations and their incentives (which may be subconscious (EtA: or indirect) drivers of their behavior).

Viliam:

ah, I see!  It's an incentive problem!  So I guess your funding needs to be conditional on you producing legible outputs.

This rubs me the wrong way. Of course, you can make anyone do X, if you make their funding conditional on X. But whether you should do that, that depends on how sure you are that X is more valuable than whatever is the alternative.

There are already thousands of people out there whose funding is conditional on them producing legible outputs. Why is that not enough? What will change if we increase that number by a dozen?

Q: "Why is that not enough?"
A: Because they are not being funded to produce the right kinds of outputs.

Needless to say, writing papers and getting them into ML conferences is time-consuming. There's an opportunity cost. Is it worth doing despite the opportunity cost? I presume that, for the particular people you talked to, and the particular projects they were doing, your judgment was “Yes the opportunity cost was well worth paying”. And I’m in no position to disagree—I don’t know the details. But I wouldn't want to make any blanket statements. If someone says the opportunity cost is not worth it for them, I see that as a claim that a priori might be true or false. Your post seems to imply that almost everyone is making an error in the same direction, and therefore funders should put their thumb on the scale. That’s at least not obvious to me.

You seem to be suggesting that people in academia don’t read blog posts, and that blog posts are generically harder to read than papers. Both seem obviously false to me; for example, many peer-reviewed ML papers come along with blog posts, and the blog posts are intended to be the more widely accessible of the two.

Of course, blog posts can be unreadable too. Generally, I think that it's healthy for people to write BOTH (A) stuff with lots of jargon & technical details that conveys information well to people already in the know AND (B) highly-accessible stuff intended for a broader audience. (That’s what I try to do, at least.) I think it’s true and uncontroversial to say that blog posts are great for (B). I also happen to think that blog posts are great for (A).

Anyway, I think this OP isn’t particularly addressed at me (I have nothing I want to share that would fit in at an ML conference, as opposed to neuroscience), but if anyone cares I’d be happy to discuss in detail why I haven’t written any peer-reviewed papers related to my AI alignment work since I started it full-time 2 years ago, and have no immediate plans to, and what I’ve been doing instead to mitigate any downsides of that decision. It’s not even a close call; this decision seems very overdetermined from my perspective.

If someone says the opportunity cost is not worth it for them, I see that as a claim that a priori might be true or false. Your post seems to imply that almost everyone is making an error in the same direction, and therefore funders should put their thumb on the scale. That’s at least not obvious to me.


I do think this is the wrong calculation, and the error caused by it is widely shared and pushes in the same direction. 

Publication is a public good, where most of the benefit accrues to others / the public. Obviously costs to individuals are higher than the benefits to them in far more cases than where costs to individuals are higher than the summed benefits to others. And evaluating good accrued to the researchers is the wrong thing to check - if our goal is aligned AI, the question should be the benefit to the field.

If we compare

  • (A) “actual progress”, versus
  • (B) “legible signs of progress”,

it seems obvious to me that everyone has an incentive to underinvest in (A) relative to (B). You get grants & jobs & status from (B), not (A), right? And papers can be in (B) while being only minimally, or not at all, in (A).

In academia, people talk all the time about how people are optimizing their publication record to the detriment of field-advancement, e.g. making results sound misleadingly original and important, chasing things that are hot, splitting results into unnecessarily many papers, etc. Right?

Hmm, I’m trying to guess where you’re coming from. Maybe you’d propose a model with

  • (C) “figuring things out”
  • (D) “communicating those things to the x-risk community”
  • (E) “communicating those things to the ML community”

And the idea is that someone whose funding & status is coming entirely from the x-risk community has no incentive to do (E). Is that it?

If so, I strongly endorse that (E) is worth doing, to a nonzero extent. But it’s not obvious to me that the AGI x-risk community is collectively underinvesting in (E); I think I lean towards “overinvesting” on the margin. (I repeat: on the margin!! Zero investment is too little!)

I think that everyone who is both motivated by x-risk and employed by a CS department—e.g. CHAI, the OP (Krueger), etc.—is doing (E) intensively all the time, and will keep doing so perpetually. We don’t have to worry about (E) going to zero. If other people do (D) to the exclusion of (E), I think good ideas will trickle out through the above-mentioned people, and/or through ML people getting interested and gradually learning enough jargon to read the (D) stuff.

I think that, for some people / projects, getting their results into an ML conference would cut down the amount of (C) & (D) that gets done by a factor of 2, or even more, or much more when it affects the choice of what to work on in the first place. And I think that’s a very bad tradeoff.

I would say a similar thing about any technical field. I want climate modelers to spend most of their time figuring out how to do better climate modeling, in collaboration with other climate modelers, using climate modeling jargon. Obviously, to some extent, there has to be accessible communication to a wider audience of stakeholders about what’s going on in climate modeling. But that’s going to happen anyway—plenty of people like writing popular books and blog posts and stuff, and kudos to those people, and likewise some stakeholders outside the field will naturally invest in learning climate modeling jargon and injecting themselves into that conversation, and kudos to those people too. Groupthink is bad, interdisciplinarity is good, and so on, but lots of important technical work just can’t be easily communicated to people with no subject-matter expertise or investment in the subfield, and it’s really really bad if that kind of work falls by the wayside.

Then separately, if people are underinvesting in (E), I think it’s non-obvious (and often false) that the solution to that problem is to try to get papers through peer review and into ML conferences.

  • For one thing, if an x-risk-concerned person can write an ML paper, they can equally well write a blog post that avoids x-risk jargon (and maybe even replaces it with ML jargon), and I think that would have a comparable chance of getting widely read by ML people and successfully communicating substantive ideas to them. It’s not like every paper in an ML conference gets widely read and cited anyway, right? But blog posts take absurdly less time.
  • For another thing, if we assume for the sake of argument that the “gravitas” / style / cite-ability of academic papers is a feature not a bug, then people can get those by putting a paper onto ML arxiv, and that takes much less time than going through peer-review. I think in some cases that’s a great choice.

To respond briefly, I think that people underinvest in (D), and write sub-par forum posts rather than aim for the degree of clarity that would allow them to do (E) at far less marginal cost. I agree that people overinvest in (B)[1], but also think that it's very easy to tell yourself your work is "actual progress" when you're doing work that, if submitted to peer-reviewed outlets, would be quickly demolished as duplicative of work you're unaware of, or incompletely thought-out in other ways.

I also worry that many people have never written a peer-reviewed paper and aren't thinking through the tradeoff; they just never develop the necessary skills, and can't ever move to more academic outlets[2]. I say all of this as someone who routinely writes for both peer-reviewed outlets and for the various forums. My thinking needs to be clearer for reviewed work, and I agree that the extraneous costs are high, but I think that the tradeoff in terms of getting feedback and providing something for others to build on, especially others outside of the narrow EA-motivated community, is often worthwhile.

Edit to add: But yes, I unambiguously endorse starting with writing Arxiv papers, as they get a lot of the benefit without needing to deal with the costs of review. They do fail to get as much feedback, which is a downside. (It's also relatively easy to put something on Arxiv and submit to a journal for feedback, and decide whether to finish the process after review.)

  1. ^

     Though much of that work (reviews, restatements, etc.) can be valuable despite that.

  2. ^

    To be fair, I may be underestimating the costs of learning the skills for those who haven't done this - but I do think there's tons of peer mentorship within EA which can work to greatly reduce those costs, if people are willing to use those resources.

I think that the tradeoff in terms of getting feedback and providing something for others to build on, especially others outside of the narrow EA-motivated community, is often worthwhile.

 

This should be obvious to everyone! As an outside observer and huge sympathizer, it is super-frustrating how siloed the broad EA/rationalist/AI-alignment/adjacent community is; this specific issue with publication is only one of the consequences. Many of "you people" interact only among "yourselves" (and I'm not referring to you, Davids), very often even socially. I mean, you guys are trying to do the most good possible, so help others use and build on your work! And don't waste time reinventing what is already common or, at least, what already exists outside. More mixing would also help prevent Leverage-style failures and probably improve what from the outside seems like very weird and unhealthy "bay area social dynamics" (as put by Kaj here).

Thanks, agreed. And as an aside, I don't think it's entirely coincidental that neither of the people who agree with you are in the Bay.

My point is not specific to machine learning. I'm not as familiar with other academic communities, but I think most of the time it would probably be worth engaging with them if there is somewhere where your work could fit.

Speaking for myself…

I think I do a lot of “engaging with neuroscientists” despite not publishing peer-reviewed neuroscience papers:

  • I write lots of blog posts intended to be read by neuroscientists, i.e. I will attempt to engage with background assumptions that neuroscientists are likely to have, not assume non-neuroscience background knowledge or jargon, etc.
    • [To be clear, I also write even more blog posts that are not in that category.]
  • When one of my blog posts specifically discusses some neuroscientist’s work, I’ll sometimes cold-email them and ask for pre-publication feedback.
  • When I have questions about a neuroscientist’s paper, I’ll sometimes cold-email them to try to start a chat.
  • There are a handful of neuroscientists whose work is unusually relevant to AGI capabilities and/or safety (in my opinion), and I’m kinda always on the lookout for excuses to get in touch with them, with some amount of success I think.
  • I got interviewed on a popular podcast in AI-adjacent neuroscience, and I have a 1-hour zoom talk that I give whenever anyone invites me.

Between those things, plus word-of-mouth, I feel pretty confident that WAY more neuroscientists are familiar with my detailed ideas than is typical given that I’ve been in the field full-time for only 2 years (and spend barely half my time on neuroscience anyway), and also WAY more than the counterfactual where I spend the same amount of time on outreach / communication but do so mainly via publishing peer-reviewed neuroscience papers. Like, sometimes I’ll read a peer-reviewed paper in detail, and talk to the author, and the author remarks that I might be the first person to have ever read it in detail apart from their own close collaborators and the referees.

You're very unusually proactive, and I think the median member of the community would be far better served if they were more engaged the way you are. Doing that without traditional peer reviewed work is fine, but unusual, and in many ways is more difficult than peer-reviewed publication. And for early career researchers, I think it's hard to be taken seriously without some more legible record - you have a PhD, but many others don't.

Cynically,[2] not publishing is a really good way to create a moat around your research... People who want to work on that area have to come talk to you, and you can be a gatekeeper. And you don't have to worry about somebody with more skills and experience coming along and trashing your work or out-competing you and rendering it obsolete...

I don't understand this part. They don't have to come talk to you; they just have to follow a link to the Alignment Forum to read the research. And aren't forum posts easier to read than papers on arXiv? I feel like if the moat exists anywhere, it is around academic journals, which often do not make their papers freely accessible, use more cryptic writing norms, and insist on PDFs, which are not as user-friendly to read as webpages.

To be sure, I'm not disagreeing with your overall point. It would be great if at least the best research from Alignment Forum/LessWrong were on arXiv or in journals, and I think you're right we're leaving value on the table there. I have wondered about if someone just made it their job to do these conversions/submissions for top alignment research on the forums, because there are probably economies of scale for one person doing this vs. every researcher interrupting their work flow to learn how to jump through the hoops of paper conversion/submission.

  1. A lot of work just isn't made publicly available
  2. When it is, it's often in the form of ~100 page google docs
  3. Academics have a number of good reasons to ignore things that don't meet academic standards of rigor and presentation

In my experience people also often know their blog posts aren't very good.

I think your cynical take is pretty wrong, for the reasons Evan described. I'd add that because of the way academic prestige works, you are vulnerable to having your ideas stolen if you just write them up on LessWrong and don't publish them. You'll definitely get fewer citations, less recognition, etc.

I think people's stated motivations are the real motivations: Jumping through hoops to format your work for academia has opportunity costs and they don't judge those costs to be worth it.

My point (see footnote) is that motivations are complex.  I do not believe "the real motivations" is a very useful concept here.  

The question becomes why "don't they judge those costs to be worth it"?  Is there motivated reasoning involved?  Almost certainly yes; there always is.

Here are two hypotheses for why they don't judge those costs to be worth it, each one of which is much more plausible to me than the one you proposed:

(1) The costs aren't in fact worth it & they've reacted appropriately to the evidence.
(2) The costs are worth it, but thanks to motivated reasoning, they exaggerate the costs, because writing things up in academic style and then dealing with the publication process is boring and frustrating.

Seriously, isn't (2) a much better hypothesis than the one you put forth about moats?

I'm not necessarily saying people are subconsciously trying to create a moat.  

I'm saying they are acting in a way that creates a moat, and that enables them to avoid competition, and that more competition would create more motivation for them to write things up for academic audiences (or even just write more clearly for non-academic audiences).

It sure sounds like you are saying that though!

Me, reflecting afterwards: hmm... Cynically,[2] not publishing is a really good way to create a moat around your research... People who want to work on that area have to come talk to you, and you can be a gatekeeper.  And you don't have to worry about somebody with more skills and experience coming along and trashing your work or out-competing you and rendering it obsolete...

I'm not accusing anyone of having bad motivations; I think it is almost always valuable to consider both people's conscious motivations and their incentives (which may be subconscious (EtA: or indirect) drivers of their behavior).

Before you put in the EtA, it sure sounded like you were saying that people were subconsciously motivated to avoid academic publishing because it helped them build and preserve a moat. Now, after the EtA, it still sounds like that but is a bit more unclear since 'indirect' is a bit more ambiguous than 'subconscious.'

they don't judge those costs to be worth it


Worth it to whom? And if they did work that's valuable, how much of that value is lost if others who could benefit don't see it, because it's written up only informally or not shared widely?

Worth it to the world/humanity/etc. though maybe some of them are more self-focused.

Probably a big chunk of it is lost for that reason yeah. I'm not sure what your point is, it doesn't seem to be a reply to anything I said.

I think that, from an outside view, paying the costs is worth it far more often than it actually happens — which was David's point, and what I was trying to respond to. Actually jumping through the hoops is more valuable than one expects. And especially people who haven't yet had any outputs actually published really should do it at least once.

(Also, sorry for the zombie reply.)

I love zombie replies.

If you reread this conversation, you'll notice that I never said I think these people are correct. I was just saying that their stated motivations and views are their real motivations and views. 

I actually do agree with you and David Krueger that on the margin more LW types should be investing in making their work publishable and even getting it published. The plan had always been "do research first, then communicate it to the world when the time is right" well now we are out of time so the time is right.

Thanks, reading closely I see how you said that, but it wasn't clear initially. (There's an illusion of disagreement, which I'll christen the "twitter fight fallacy," where unless the opposite is said clearly, people automatically assume replies are disagreements.) 

See also: Your posts should be on Arxiv

I do agree we're leaving lots of value on the table and even causing active harm by not writing things up well, at least for Arxiv, for a bunch of reasons including some of the ones listed here. 

I thought the response to "Your Posts Should be On Arxiv" was "Arxiv mods have stated pretty explicitly they do not want your posts on Arxiv" (unless you have jumped through a bunch of both effort-hoops and formatting hoops to make them feel like a natural member of the Arxiv-paper class)

And I think the post here is saying that you should jump through those effort and editing hoops far more often than currently occurs.

Yeah, I didn't mean to be responding to that point one way or another. It just seemed bad to be linking to a post that (seems to still?) communicate false things, without flagging those false things. (The post still says "it can be as easy as creating a pdf of your post", which, as far as I can tell, may be technically true on rare occasions but is basically false in practice?)

This feels like a really adversarial quote. Concretely, the post says:

Sometimes, I think getting your forum post ready for submission can be as easy as creating a pdf of your post (although if your post was written in LaTeX, they'll want the tex file). If everything goes well, the submission takes less than an hour.

However, if your post doesn't look like a research article, you might have to format it more like one (and even then it's not guaranteed to get in, see this comment thread).

This looks correct to me; there are LW posts that already basically look like papers. And within the class of LW posts that should be on arXiv at all, which is the target audience of my post, posts that basically look like papers aren't vanishingly rare.

However, if your post doesn't look like a research article, you might have to format it more like one (and even then it's not guaranteed to get in, see this comment thread).

I interpreted this as saying something superficial about style, rather than "if your post does not represent 100+ hours of research work it's probably not a good fit for arXiv." If that's what you meant, I think the post could be edited to make that more clear.

If the opening section of your essay made it more clear which posts it was talking about I'd probably endorse it (although I'm not super familiar with the nuances of arXiv gatekeeping so am mostly going off the collective response in the comment section)

That seems right.

I wrote this post. I don't understand where your claim ("Arxiv mods have stated pretty explicitly they do not want your posts on Arxiv") is coming from.

I think this point was really overstated. I get the impression the rejected papers were basically turned into the arXiv format as fast as possible, so it was easy for the mods to tell. However, I've seen submissions to cs.LG like this and this that are clearly from the alignment community. These posts are also not stellar by the standards of preprint formatting, and were not rejected, apparently.

There have also been plenty of other adaptations, ones which were not low-effort. I worked on two, the Goodhart's law paper and a paper with Issa Rice on HRAD. Both were very significantly rewritten and expanded into "real" preprints, but I think it was clearly worthwhile.

It's important to internalize that the intellectual world lives in the attention economy, like everything else.

Just like "content creators" on social platforms think hard about capturing and keeping attention, so do intellectuals and academics. Clarity and rigor are part of that.


No one has time, energy (or crayons, as the saying goes) for half-baked ramblings on a blog or forum somewhere.

I think this post makes an important and still neglected claim: that people should write up their work more clearly and get it published in academia, instead of embracing the norms of the narrower community they interact with. There has been significant movement in this direction in the past two years, and I think this post marks a critical change in what the community suggests and values in terms of output.