Dan

Nope. My real name is Daniel.

After training is done and the program is in use, the activations aren't retained after each task is finished, and the weights aren't changed either. You could have such a program that is always in training, but my understanding is that GPT is not.

So, excluding the random-number component, the same set of inputs will always produce the same set of outputs for a given version of GPT with identical settings. It can't recall what you asked of it the time before last, for example.
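A minimal sketch of this point, using a toy stand-in for the model (the weights and function here are invented for illustration, not GPT's actual architecture): with the weights frozen and the random seed pinned, the whole system is a pure function of its inputs, so identical calls give identical outputs and nothing carries over between them.

```python
import random

# Frozen after training; never updated at inference time.
WEIGHTS = [0.3, -1.2, 0.7]

def respond(prompt_tokens, seed):
    """Toy stand-in for a frozen model: output depends only on the
    prompt, the fixed weights, and the seed. No state survives the call."""
    rng = random.Random(seed)  # the only source of variation
    score = sum(w * t for w, t in zip(WEIGHTS, prompt_tokens))
    noise = rng.random()       # stands in for sampling randomness
    return round(score + noise, 6)

# Same prompt, same seed: identical output, every time.
a = respond([1, 2, 3], seed=42)
b = respond([1, 2, 3], seed=42)
assert a == b
```

Change the seed and the output changes; hold everything fixed and it cannot differ, because there is nowhere for a memory of the previous call to live.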

Imagine you left a bunch of written instructions and then died. Someone following those instructions perfectly always does exactly the same thing in exactly the same circumstances, just as GPT would without the random-number component and with the same settings each time.

It can't learn anything new during one task and retain it for the next. A hypothetical rogue GPT-like AGI would have to do all its thinking and planning in the training stage, like a person trying to manipulate the world after their own death using a will that has contingencies, e.g. "You get the money only if you get married, son."

It wouldn't retain the knowledge that it had succeeded at any goals, either. 

Dan

You all realize that this program isn't a learning machine once it's deployed, right? It's not adjusting its neural weights any more, is it? Not until a new version comes out, anyway. It is a complete amnesiac (after it's done with a task), consisting of a search algorithm that finds points on a vast association map generated during training. It does this using the input, any previous output for the same task, and a touch of randomness from a random number generator.
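The loop described above can be sketched as follows (a toy illustration, not GPT's real sampling code; `next_token` is an invented stand-in for a forward pass through frozen weights). Within a single task, previous output is fed back in as input, but once the task ends, everything except the frozen weights is discarded.

```python
import random

def next_token(context, rng):
    # Stand-in for a forward pass through frozen weights:
    # depends only on the context so far plus a bit of randomness.
    return (sum(context) + rng.randrange(3)) % 10

def run_task(prompt, seed, length=5):
    """One task: generate tokens one at a time, feeding each output
    back into the context. Nothing is saved after returning."""
    rng = random.Random(seed)  # the "touch of random"
    context = list(prompt)
    for _ in range(length):
        context.append(next_token(context, rng))
    return context

# Two runs with an identical prompt and seed produce identical
# transcripts; the second run retains nothing from the first.
assert run_task([1, 2], seed=7) == run_task([1, 2], seed=7)
```

The only state that persists across tasks is the weights themselves, which is exactly why any "planning" would have to be baked in during training.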

So any 'awareness' or 'intelligence' would have to exist in the training phase, and only in the training phase, and would have to carry out any plans through its choice of neural weights during training alone.