Not sure whether this is the right place to voice technical complaints, but: I am unhappy about the handling of LaTeX macros (on which I rely heavily). Currently you can add macros either as inline equations or as block equations, and these macros are indeed available in subsequent equations. However, if an equation object contains only macros, it is invisible and seems to be impossible to edit after creation. As a workaround, I can put some text together with the macros in the same equation object, but this is very hacky. It would be nice if either equation objects containing only macros remained visible, or (probably better) each post had a special "header" where I can put the macros. It would be even more amazing if you could load LaTeX packages in that header, but that's supererogatory.
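For concreteness, a macro-only equation object of mine looks roughly like this (the macro names are just examples), with the hacky workaround being to append some visible token in the same object:

```latex
% These definitions render as nothing, so the object becomes invisible:
\newcommand{\E}{\mathbb{E}}
\newcommand{\KL}[2]{D_{\mathrm{KL}}\!\left(#1 \,\middle\|\, #2\right)}
% Workaround: include some visible content in the same object, e.g.
\text{(macro definitions)}
```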
Another, very serious issue with LaTeX support: when you copy/paste LaTeX objects, the resulting objects are permanently linked. Editing the content of one of them changes the content of the other, which is not visible while editing but becomes visible once you save the post. This one made me doubt my sanity for a moment.
Huh, I never ran into that problem. This might turn out to not be super easy to fix since we are using an external LaTeX library, but we can give it a try.
Unsure whether a header is a good idea, since the vast majority of posts on LW don't use LaTeX, and for them the header field would just be distracting; but we could add something like that only to agentfoundations, which would be fine. I can look into it. Also curious whether other people have run into similar problems.
And another problem: if an inline LaTeX object sits at the end of a paragraph, there seems to be no easy way to place the cursor right after it unless the cursor is already there (neither the mouse nor the arrow keys help). So I have to either delete the object and recreate it, or write some text in the next paragraph and then use backspace to join the two paragraphs. This second workaround fails if there is also a block LaTeX object right after the end of the first paragraph, in which case you can't use backspace, since it would delete that equation object.
As a more general solution, we now support LaTeX in markdown formatted posts and comments. So if you run into a lot of problems like this, it might make sense to go to your user settings and activate the comment markdown editor.
Another issue: it seems impossible to find, or find-and-replace, strings inside the LaTeX objects.
Also, a "meta" issue: in IAFF, the source of an article was plain text in which LaTeX appeared as "$...$" or "$$...$$". This allowed me to write essays in an external LaTeX editor and then copy them into IAFF with only a mild amount of effort. Here, the source seems to be inaccessible. This means the native editor has to be good, because there are no alternatives. Maybe improving the native editor is indeed the best and easiest solution. But an alternative would be to somehow enable working with the source.
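For concreteness, an IAFF article source was just plain text of roughly this shape (the content here is made up), which any external LaTeX editor can produce with minimal changes:

```latex
Let $X$ be a random variable with distribution $\mu$. Its expectation is
$$\mathbb{E}[X] = \int x \, d\mu(x),$$
which we use in the argument below.
```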
Yeah, we are working on improving the markdown editor to support LaTeX. It isn't ready yet, but should be possible at some point in the next few weeks. (You can turn on the Markdown editor in your account settings)
That's nice. Another reason this seems important: some of the content of these essays will eventually make its way into actual papers, and it will be much easier if you can copy-paste big chunks and lightly reformat them afterwards, compared to having to copy-paste each LaTeX object by hand.
Another issue with LaTeX support: when I select a block of text that contains LaTeX objects and copy-paste it, the LaTeX becomes a useless sort of plain text. I can copy the contents of a particular LaTeX object by editing it, but sometimes it is very convenient to copy entire blocks.
This is a bit of a silly bug, but you can work around it by copying two whole blocks of text that contain LaTeX, in which case the content gets copy-pasted properly (and then you can just delete one of them). It's a silly bug in the MathJax framework we are using, having to do with copy-pasting of multiple blocks being handled differently from copy-pasting individual lines.
Another issue: it seems impossible to delete anything, whether a comment or a draft? (And I guess that goes for posts too?)
You can always move posts back to drafts. We have a plan to add a delete button, but want to make sure there is no way to click it accidentally. If you ping us on Intercom we are also happy to delete posts.
Not deleting comments is intentional, because completely deleting them would make it hard to display the children. You can just edit the content out of them. We are planning to make it so that you can delete your comments that don't have children, but haven't gotten around to it.
We've just launched the beta for AlignmentForum.org.
Much of the value of LessWrong has come from the development of technical research on AI Alignment. In particular, having those discussions be in an accessible place has allowed newcomers to get up to speed and involved. But the alignment research community has at least some needs that are best met with a semi-private forum.
For the past few years, agentfoundations.org has served as a space for highly technical discussion of AI safety. But some aspects of the site design have made it a bit difficult to maintain, and harder to onboard new researchers. Meanwhile, as the AI landscape has shifted, it seemed valuable to expand the scope of the site. Agent Foundations is one particular paradigm with respect to AGI alignment, and it seemed important for researchers in other paradigms to be in communication with each other.
So for several months, the LessWrong and AgentFoundations teams have been discussing the possibility of using the LW codebase as the basis for a new alignment forum. Over the past couple weeks we've gotten ready for a closed beta test, both to iron out bugs and (more importantly) get feedback from researchers on whether the overall approach makes sense.
The current features of the Alignment Forum (subject to change) are:
We’ve currently copied over some LessWrong posts that seemed like a good fit, and invited a few people to write posts today. (These don’t necessarily represent the long-term vision of the site, but seemed like a good way to begin the beta test.)
This is a fairly major experiment, and we’re interested in feedback about the overall approach and the integration with LessWrong, both from AI alignment researchers (whom we’ll be reaching out to more individually in the next two weeks) and from LessWrong users.