Planned summary for the Alignment Newsletter:
AI Safety Papers is an app to interactively explore a previously collected <@database of AI safety work@>(@TAI Safety Bibliographic Database@). I believe it contains every article in this newsletter (at least up to a certain date; it doesn’t automatically update) along with their summaries, so you may prefer to use that to search past issues of the newsletter instead of the [spreadsheet I maintain](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit#gid=0).
Note: I've moved this to the Alignment Forum.
Requests, if they are easy:
My user experience
When I first load the page, I am greeted by an empty space.
From here I didn't know what to look for, since I didn't remember what kinds of things were in the database.
I tried clicking on the table to see what content is there.
Ok, too much information, hard to navigate.
I remember that one of my manuscripts made it into the database, so I look up my surname.
That was easy! (and it loaded very fast)
The interface is very neat too. I want to see more papers, so I click on one of the tags.
I get what I wanted.
Now I want to find a list of all the tags. Hmmm I cannot find this anywhere.
I give up and look at another paper:
Oh cool! The Alignment Newsletter summary is really great. Whenever I read something in Google Scholar it is really hard to find commentary on any particular piece.
I now tried to look for my current topic of research to find related work.
Meh, not really anything interesting for my research.
Ok, now I want to see if OpenAI's "AI and Compute" post is in the dataset:
Huh, it is not here. "The Bitter Lesson" is definitely relevant, but I am not sure about the other articles.
Can I search for work specific to OpenAI?
Hmm, that didn't quite work. The top result is from OpenAI, but the rest are not.
Maybe I should spell it differently?
Oh cool, that worked! So apparently the blog post is not in the dataset.
Anyway, enough browsing for today.
Alright, feedback:
AI Safety Papers is a website for quickly exploring papers related to AI safety. The code is hosted on GitHub here.
In December 2020, Jess Riedel and Angelica Deibel announced the TAI Safety Bibliographic Database. At the time, they wrote:
One significant limitation of this system was that it had no great frontend. Tabular data and RDF can be useful for analysis, but they are difficult to browse casually.
We’ve been experimenting with creating a web frontend to this data. You can see this at http://ai-safety-papers.quantifieduncertainty.org.
This system acts a bit like Google Scholar and other academic search engines. However, the emphasis on AI-safety-related papers affords a few advantages.
Tips
Questions
Who is responsible for AI Safety Papers?
Ozzie Gooen has written most of the application, on behalf of the Quantified Uncertainty Research Institute. Jess Riedel, Angelica Deibel, and Nuño Sempere have all provided a lot of feedback and assistance.
How can I give feedback?
Please either leave comments, submit feedback through this website, or contact us directly at hello@quantifieduncertainty.org.
How often is the database updated?
Jess Riedel and Angelica Deibel are maintaining the database. They will probably update it every few months, depending on interest. We'll try to update the AI Safety Papers app accordingly. The date of the most recent data update is shown in the header of the app.
Note that the most recent data in the current database is from December 2020.
Future Steps
This app was made in a few weeks, and as such it has a lot of limitations.
You can see several other potential features here. Please feel free to add suggestions or upvotes.
We're not sure if or when we'll make improvements to AI Safety Papers. Substantial use, or requests for improvements, will carry a lot of weight in our own prioritization. Of course, people are welcome to submit pull requests to the GitHub repo directly, or simply fork the project there.