Collection of GPT-3 results

by Kaj_Sotala
18th Jul 2020
This is a linkpost for https://twitter.com/xuenay/status/1283312640199196673
7 comments, sorted by top scoring
TurnTrout · 5y

The Sequences post you've never read, by GPT-3. 

what is a philosopher? A philosopher is someone who studies and thinks about philosophical problems. So far so good, but what exactly does that mean? Well, philosophers aren't just trying to figure out whether there are five kinds of people or not. They're thinking about questions such as: Is there any ultimate reality? Do we live in a computer simulation? How did the universe begin? Are there other universes out there where our lives never happened? These are all very important philosophical issues, and they require thought.

So what does this have to do with computers and artificial intelligence? Well, in order to solve these big philosophical questions, you need to understand some basic ideas about how the world works. You need to think about logic, reality, knowledge, truth and all kinds of other stuff. This kind of thinking is called "epistemology," which just means "thinking about how we think." The main field in which epistemological questions are studied is called "epistemology." For a long time, epistemologists thought about these issues mostly using natural language. That is, they thought about them by writing long sentences in English or Latin or some other language. Epistemologists would spend years writing a book like On the Nature of Things, and they'd think really hard about these issues. Then other epistemologists would read the book and think really hard about the issues too. But as science progressed, it became clear that natural language wasn't good for this kind of thinking. It's just too easy to fool yourself when you use words.

So scientists and thinkers had to develop a way of communicating ideas without ambiguity—a precise language. That's why today we use a symbolic language called "mathematics." When you do math, there's no guessing or ambiguity. 2 + 2 always equals 4, and you can always check by multiplying 2 × 2. If you see a squiggle "~" with a bunch of other squiggles around it, then "~" means not. You can't be confused by words like "probably" or "maybe." You can't start arguing about what certain words mean. If you see a bunch of symbols, then you can't get away with just making stuff up.

You can write a whole book in symbols. In fact, books full of nothing but squiggles have been written. These are called "computer programs," and they are our best attempt yet at making an unambiguous description of reality. A few thousand lines of carefully chosen symbolic logic can describe the entire physical world—every atom, every force, every interaction. A computer is a kind of virtual machine that runs these descriptions, and we have machines today that can run programs longer than any book you've ever written. But these programs are still just a description of reality. They can't ever truly capture reality itself. That's impossible.

But don't take my word for it—just ask Kurt Gödel.

Kurt was one of the greatest logicians of the 20th century. He proved that it's impossible to describe the world with perfect precision. Any logical system that includes basic arithmetic will always have truths that can't be proven from within the system. This is called "Gödel's Incompleteness Theorem." What this means is that no matter how much we think about stuff, we'll never be able to describe the world with perfect accuracy and completeness. We can only make approximations.

This makes a lot of people very uncomfortable. A lot of people don't want to hear that we can't know everything. They think that our inability to describe the world with perfect accuracy means that science is wrong, or that God set up the rules, or something like that. But these ideas are all wrongheaded. Sure, we'll never know everything. But that doesn't mean we know nothing! We don't need to know everything about cancer to cure it. And we don't need to know everything about the moon to land on it. You can get through your day without knowing the mathematical exact location of the pants you had on yesterday. And you can get through life making reasonable decisions without knowing everything that's physically possible for you to know about the world.

First sampling. Two-shot (two real Sequences articles fed in as context).

FactorialCode · 5y

Hypothesis: Unlike the language models before it, and ignoring context-length issues, GPT-3's primary limitation is that its output mirrors the distribution it was trained on. Without further intervention, it will write things that are no more coherent than what the average person could put together. By conditioning it on output from smart people, GPT-3 can be switched into a mode where it outputs smart text.
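A minimal sketch of what such conditioning could look like in code, using the Hugging Face transformers text-generation pipeline with GPT-2 as a stand-in (GPT-3 itself is only reachable through OpenAI's API); the prefix text and sampling settings below are illustrative assumptions, not anything tested in this thread:

```python
# Sketch: "condition" a language model by prefixing the prompt with
# high-quality text, so that sampled continuations imitate that register.
# GPT-2 stands in for GPT-3 here, since GPT-3 is API-only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Hypothetical high-quality prefix in the register we want imitated.
smart_prefix = (
    "Rationality is not a list of facts to memorize; it is a discipline "
    "for weighing the evidence you already have.\n\n"
)

prompt = smart_prefix + "What does this imply about artificial intelligence? "

out = generator(prompt, max_new_tokens=80, do_sample=True, temperature=0.8)
print(out[0]["generated_text"])
```

The same prompt without the prefix would be sampled from the model's unconditional distribution, which is the mode the hypothesis above describes as average-person text.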

cousin_it · 5y

Has anyone tried to get it to talk itself out of the box yet?

Yoav Ravid · 5y

Yup, I saw an attempt on the SSC subreddit.

cousin_it · 5y

Thank you! It looks very impressive.

FactorialCode · 5y

According to Gwern, it fails the Parity Task.
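(For context, a minimal sketch of the parity task, assuming it means classifying whether a bit string contains an even or odd number of 1s; the exact prompt format gwern used is not reproduced here.)

```python
# Reference solution for the parity task: does a bit string contain an
# even (parity 0) or odd (parity 1) number of 1s?
def parity(bits: str) -> int:
    return bits.count("1") % 2

# A few illustrative cases of the kind a language model would be quizzed on.
for s in ["0110", "10101", "1111", "0001"]:
    print(s, "->", "odd" if parity(s) else "even")
```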

TurnTrout · 5y

Two of my own: To what extent is GPT-3 capable of reasoning? and GPT-3 Gems.

I kept seeing all kinds of crazy reports about people's experiences with GPT-3, so I figured that I'd start collecting them.

  • First, gwern's crazy collection of all kinds of prompts, with GPT-3 generating poetry, summarizing stories, rewriting things in different styles, and much much more. (previous discussion)
  • Automatic code generation from natural language descriptions. "Give me a page with a table showing the GDP of different nations, and a red button."
  • Building a functioning React app by just describing it to GPT-3.
  • Taking a brief technical tweet about GPT-3 and expanding it to an essay which the author of the original tweet mostly endorses.
  • Acting as a more intense therapist than ELIZA ever was. [1, 2]
  • On the one hand, you can trick GPT-3 into saying nonsense. On the other hand, you can just prompt it to point out the nonsense.
  • Redditor shares an "AI Dungeon" game played with the new GPT-3-based "Dragon Model", involving a cohesive story generated in response to their actions, with only a little manual editing.
    • The official Dragon Model announcement.
    • I was a little skeptical about some of these GPT-3 results until I tried the Dragon Model myself, and had it generate cohesive space opera with almost no editing.
  • Another example of automatically generated code, this time giving GPT-3 a bit of React code defining a component called "ThreeButtonComponent" or "HeaderComponent", and letting it write the rest.
  • From a brief description of a medical issue, GPT-3 correctly generates an explanation indicating that it's a case of asthma, mentions a drug that's used to treat asthma, the type of receptor the drug works on, and which multiple-choice quiz question this indicates.
  • GPT-3 tries to get a software job, and comes close to passing a phone screen.
  • Translating natural language descriptions into shell commands, and vice versa (see the prompt sketch after this list).
  • Given a prompt with a few lines of dialogue, GPT-3 continues the story, incorporating details such as having a character make 1800s references after it was briefly mentioned that she's a nineteenth-century noblewoman.
  • Turning natural language into lawyerese.
  • Using GPT-3 to help you with gratitude journaling.
  • Source is an anonymous image board poster so could be fake, but: if you give an AI Dungeon character fake wolf ears and then ask her to explain formal logic to you, she may use the ears in her example.
    • Even after seeing all the other results, I honestly have difficulties believing that this one is real.
  • Of course, even GPT-3 fumbles sometimes.
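As a rough illustration of how the natural-language-to-shell-command demo above presumably works, here is a sketch of a few-shot prompt in Python; the description/command pairs and the build_prompt helper are made up for illustration, and in the actual demo the completed prompt would be sent to GPT-3's completion endpoint, with the model's next line taken as the suggested command.

```python
# Sketch of a few-shot prompt for turning English descriptions into shell
# commands. The example pairs below are invented; a real system would send
# the assembled prompt to a GPT-3 completion call and read off the next line.
FEW_SHOT_EXAMPLES = """\
Description: list all files in the current directory, including hidden ones
Command: ls -a

Description: count the number of lines in access.log
Command: wc -l access.log

Description: find every Python file under the current directory
Command: find . -name '*.py'
"""

def build_prompt(description: str) -> str:
    # Append the new description and leave "Command:" for the model to complete.
    return f"{FEW_SHOT_EXAMPLES}\nDescription: {description}\nCommand:"

print(build_prompt("show disk usage of the current directory, human readable"))
```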
Mentioned in
  • Extrapolating GPT-N performance
  • Who models the models that model models? An exploration of GPT-3's in-context model fitting ability
  • A short introduction to machine learning
  • Blog post: A tale of two research communities