All of Dan's Comments + Replies

Dan

Nope. My real name is Daniel.

After training is done and the program is in use, the activations aren't retained after each task is finished, nor are the weights changed. You can have such a program that is always in training, but my understanding is that GPT is not.

So, excluding the random-number component, the same set of inputs will always produce the same set of outputs for a given version of GPT with identical settings. It can't recall what you asked of it the time before last, for example.

Imagine if you left a bunch of written instructio...
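
(A minimal sketch of the determinism point above, assuming the Hugging Face transformers API and the small public GPT-2 checkpoint as an illustrative stand-in for "GPT": with frozen weights and sampling turned off, the same prompt yields the same continuation every time.)

```python
# Sketch: with frozen weights and no sampling, identical inputs
# always produce identical outputs. Model choice is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # inference only: no weight updates happen here

prompt = "The weights of this network are"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    # Greedy decoding removes the random-number component mentioned above.
    out1 = model.generate(input_ids, max_new_tokens=20, do_sample=False)
    out2 = model.generate(input_ids, max_new_tokens=20, do_sample=False)

assert torch.equal(out1, out2)  # identical inputs -> identical outputs
print(tokenizer.decode(out1[0]))
```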

janus
I apologize. After seeing this post, A-- approached me and said almost word for word your initial comment. Seeing as the topic of whether in-context learning counts as learning isn't even very related to the post, and this being your first comment on the site, I was pretty suspicious. But it seems it was just a coincidence.

If physics were deterministic, we'd do the same thing every time we started with the same state. Does that mean we're not intelligent? Presumably not, because in that case the cause of the intelligent behavior clearly lives in the state, which is highly structured, and not in the time evolution rule, which seems blind and mechanistic. With GPT, the time evolution rule is clearly responsible for proportionally more, and does have the capacity to deploy intelligent-appearing but static memories. I don't think this means there's no intelligence/learning happening at runtime. Others in this thread have given various reasons, so I'll just respond to a particular part of your comment that I find interesting: the RNG.

I actually think the RNG is an important component for actualizing simulacra that aren't mere recordings in a will. Stochastic sampling enables symmetry breaking at runtime: the generation of gratuitously specific but still meaningful paths. A stochastic generator can encode only general symmetries that are much less specific than individual generations. If you run GPT at temp 1 for a few words, the probability of the whole sequence will usually be astronomically low, but it may still be intricately meaningful, a unique and unrepeatable (w/o the rand seed) "thought".
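
(A back-of-the-envelope sketch of the "astronomically low probability" point; the average per-token probability used here is an assumed ballpark figure, not a measured one.)

```python
# Rough illustration of why any particular sampled sequence is astronomically
# unlikely even though it can be meaningful. The per-token probability is an
# assumed ballpark, not a measured value.
import math

avg_token_prob = 0.1   # assumed average probability of each sampled token
num_tokens = 50        # length of the sampled continuation

log10_prob = num_tokens * math.log10(avg_token_prob)
print(f"P(sequence) ~ 10^{log10_prob:.0f}")   # ~ 10^-50
```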
Logan Riggs Smith
I believe you're equating "frozen weights" with "amnesiac / can't come up with plans". GPT is usually deployed by feeding its own output back into itself, meaning it doesn't forget what it just did, including whether it succeeded at its recent goal. E.g., use chain-of-thought reasoning on math questions and it can remember that it solved a subgoal / intermediate calculation.
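
(A minimal sketch of the feedback loop described here; `sample_next_token` and the toy model are illustrative stand-ins for a real GPT call. The only "memory" is the growing context, yet intermediate results written earlier remain available to later steps even though the weights never change.)

```python
# Sketch of the autoregressive loop: GPT is deployed by feeding its own
# output back into itself, so anything it has already written (e.g. an
# intermediate calculation) stays in its context for later steps.
from typing import Callable, List

def generate(prompt: List[str],
             sample_next_token: Callable[[List[str]], str],
             max_new_tokens: int = 10) -> List[str]:
    context = list(prompt)                    # the only "memory" the system has
    for _ in range(max_new_tokens):
        token = sample_next_token(context)    # weights stay frozen inside this call
        context.append(token)                 # the output becomes part of the input
    return context

# Toy stand-in model: it can "remember" the subgoal it wrote earlier,
# because that subgoal is still sitting in the context it reads.
def toy_model(context: List[str]) -> str:
    if "12*7=" in context and "84" not in context:
        return "84"                           # solve the intermediate calculation
    if "84" in context:
        return "so the answer is 84+3=87"
    return "12*7="

print(" ".join(generate(["Q: what is 12*7+3?"], toy_model, max_new_tokens=3)))
```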
Dan

You all realize that this program isn't a learning machine once it's deployed??? I mean, it's not adjusting its neural weights any more, is it? Not until a new version comes out, anyway. It is a complete amnesiac (after it's done with a task), and consists of a simple search algorithm that just finds points on a vast association map generated during training. It does this using the input, any previous output for the same task, and a touch of randomness from a random number generator.

So any 'awareness' or 'intelligence' would need to exist in the training phase, and only in the training phase, and could carry out any plans it has solely through its choice of neural weights during training.

janus
Ah, but if 'this program' is a simulacrum (an automaton equipped with an evolving state (prompt) and transition function (GPT), plus an RNG that samples tokens from GPT's output to update the state), it is a learning machine by all functional definitions. Weights and activations both encode knowledge. Am I right to suspect that your real name starts with "A" and you created an alt just to post this comment? XD
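
(One way to sketch the automaton described here; `gpt_next_token_probs` is a placeholder for a call to a frozen model. The state (prompt) evolves at runtime, the transition function stays fixed, and the RNG breaks symmetry between otherwise equally plausible continuations.)

```python
# Sketch of the automaton: state = prompt, transition function = GPT
# (frozen), RNG = the sampler that breaks symmetry at runtime.
# `gpt_next_token_probs` is a placeholder for a real model call.
import random
from typing import Dict, List

def gpt_next_token_probs(state: List[str]) -> Dict[str, float]:
    # Placeholder: a real implementation would run the frozen network
    # over `state` and return a distribution over the vocabulary.
    return {"yes": 0.6, "no": 0.3, "maybe": 0.1}

def step(state: List[str], rng: random.Random) -> List[str]:
    probs = gpt_next_token_probs(state)               # fixed, mechanistic rule
    tokens, weights = zip(*probs.items())
    token = rng.choices(tokens, weights=weights)[0]   # stochastic symmetry breaking
    return state + [token]                            # the evolving state is what "learns"

state = ["Is", "this", "a", "learning", "machine", "?"]
rng = random.Random()   # without a fixed seed, each run takes a unique path
for _ in range(5):
    state = step(state, rng)
print(" ".join(state))
```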