In the technical sense that you can implement arbitrary programs by prompting an LLM (they are Turing complete), sure.
In a practical sense, no.
GPT-4 can't even play tic-tac-toe. Manifold spent a year trying to get GPT-4 to implement (let alone discover) the algorithm for Sudoku, and failed.
Now imagine trying to implement a serious backtracking algorithm. Stockfish checks millions of positions per turn of play. The attention window for your "backtracking transformer" is going to have to be at least {size of chess board state}*{number of position...
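To put a rough number on that, a back-of-the-envelope sketch (the constants below are my own guesses, not measurements):

```python
# Context needed to keep a Stockfish-scale search trace inside the attention
# window of a hypothetical "backtracking transformer".
BOARD_STATE_TOKENS = 80        # rough size of one FEN-style position encoding
POSITIONS_PER_MOVE = 10 ** 7   # Stockfish checks millions of positions per move
print(f"{BOARD_STATE_TOKENS * POSITIONS_PER_MOVE:.0e} tokens per move")  # 8e+08
```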
Obvious bait is obvious bait, but here goes.
Transformers are not AGI because they will never be able to "figure something out" the way humans can.
If a human is given the rules of Sudoku, they first try filling in squares randomly. After a while, they notice that certain things work and certain things don't. They begin to define heuristics for what works (for example, if all but one digit already appears in the same row, column, or box as an empty cell, the remaining digit goes in that cell). Eventually they work out a complete algorithm for solving Sudok...
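For concreteness, here's a minimal sketch of that first heuristic in Python (my own illustration of the rule, not anything produced by a model):

```python
# Repeatedly apply one heuristic: if all but one digit already appears in an
# empty cell's row, column, or 3x3 box, the remaining digit goes in that cell.
# `grid` is a 9x9 list of lists with 0 marking an empty cell.
def fill_forced_cells(grid):
    changed = True
    while changed:
        changed = False
        for r in range(9):
            for c in range(9):
                if grid[r][c] != 0:
                    continue
                seen = set(grid[r])                        # digits in the row
                seen |= {grid[i][c] for i in range(9)}     # digits in the column
                br, bc = 3 * (r // 3), 3 * (c // 3)
                seen |= {grid[br + i][bc + j]              # digits in the box
                         for i in range(3) for j in range(3)}
                candidates = set(range(1, 10)) - seen
                if len(candidates) == 1:                   # exactly one digit fits
                    grid[r][c] = candidates.pop()
                    changed = True
    return grid
```

This one rule only cracks easy puzzles, which is the point: the human keeps noticing new patterns and adding heuristics until they compose into a complete algorithm.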
"reality is large" is a bad objection.
It's possible in principle to build a simulation that is literally indistinguishable from reality. Say we only run the AI in simulation for 100million years, and there's a simulation overhead of 10x. That should cost (100e6 ly)**3*(100e6 years) * 10 of our future lightcone. This is a minuscule fraction of our actual future lightcone (9.4e10 ly) * (10^15 y)
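A quick sanity check on that arithmetic (a sketch using the same rough figures):

```python
# Volume-time cost of the simulation vs. the future lightcone, in ly^3 * years.
sim_cost  = (1e8) ** 3 * 1e8 * 10   # (100e6 ly)^3, 100e6 years, 10x overhead
lightcone = (9.4e10) ** 3 * 1e15    # reachable radius ~9.4e10 ly, ~1e15 years
print(f"fraction used: {sim_cost / lightcone:.1e}")   # ~1.2e-15
```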
A few better objections:
Simulating a universe with a paperclip maximizer in it means simulating billions of people being murdered and turned into pa...
Let's take a concrete example.
Assume you have an AI that could get 100% on every Putnam exam. Do you think it would or would not be reasonable to assume such an AI would also display superhuman performance at solving the Yang-Mills Mass Gap?
This doesn't include working out advances in fundamental physics, or designing a fusion reactor, or making breakthroughs in AI research.
Why don't all of these fall into the self-play category? Physics, software and fusion reactors can all be simulated.
I would be mildly surprised if a sufficiently large language model couldn't solve all of Project Euler, the Putnam, and the MATH dataset.
I strongly doubt we live in a data-limited AGI timeline.
I’m not quite sure what you mean here.
In the standard picture of a reinforcement learner, suppose you get to specify the reward function and I get to specify the "agent". No matter what reward function you choose, I claim I can make an agent that both: 1) gets a huge reward compared to some baseline implementation, and 2) destroys the world. In fact, I think most "superintelligent" systems have this property for any reward function you could specify using current ML techniques.
Now switch the order: I design the agent first and ask you for an a...
What loss function(s), when sent into a future AI’s brain-like configuration of neocortex / hippocampus / striatum / etc.-like learning algorithms, will result in an AGI that is definitely not trying to literally exterminate humanity?
Specifying a correct loss function is not the right way to think about the Alignment Problem. A system's architecture matters much more than its loss function for determining whether or not it is dangerous. In fact, there probably isn't even a well-defined loss function that would remain aligned un...
I think you're confounding two questions:
I think it's pretty clear that AIHHAI accelerates AI development (without Copilot, I would have to write all those lines myself).
However, I think that observing AIHHAI should actually update your priors towards Slow Takeoff (or at least Moderate Takeoff). One reason is that humans are inherently slower than machines, and as Amdahl's law reminds us, if something is composed of a slow thing and a fast thing...
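To illustrate the Amdahl's-law point with a minimal sketch (the 90%/10% split is a hypothetical number):

```python
# Amdahl's law: if a fraction p of the work is sped up by a factor s and the
# rest stays at human speed, the overall speedup is capped at 1 / (1 - p).
def amdahl_speedup(p, s):
    return 1.0 / ((1.0 - p) + p / s)

# Even an arbitrarily fast AI on 90% of the work gives at most ~10x overall.
print(amdahl_speedup(p=0.9, s=1e9))   # ~10.0
```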
Absolutely. I don't think it's impossible to build such a system. In fact, I think a transformer is probably about 90% there. You'd need to add trial and error, some kind of long-term memory/fine-tuning, and a handful of default heuristics. Scale will help too, but no amount of scale alone will get us there.