Recent comments in /f/singularity

FoniksMunkee t1_jdqt5ci wrote

Regarding 2. MS says - "We believe that the ... issue constitutes a more profound limitation."

They say: "...it seems that the autoregressive nature of the model
which forces it to solve problems in a sequential fashion sometimes poses a more profound difficulty that cannot be remedied simply by instructing the model to find a step by step solution" and "In short, the problem ... can be summarized as the model’s “lack of ability to plan ahead”."

Notably, MS did not provide a solution for this, and instead pointed to a paper by LeCun that proposes a non-LLM architecture to solve the issue. Which is not super encouraging.

2

inigid t1_jdqt2z1 wrote

One thing I have thought about is that the primary-school experience children are put through isn't really present in the online corpus.

We sit through days, weeks, and months of 1 + 1 = 2, 2 + 2 = 4, 3 + 3 = 6 before we even move on to weeks of multiplication and division.

These training sessions happen at a very young age and form a core mathematical model.

I think we would struggle if we were shown a Wikipedia page on how to do multiplication without first having internalized the muscle memory of the basics.

3

Jeffy29 t1_jdqsp3q wrote

If you think AGI/ASI will lead to utopia or something close to it, then I would say we are among the last unlucky humans born before the singularity, compared to the billions of humans (or their successors) who will be born afterwards. Was the invention of the steam engine an incredibly transformative and cool moment in the history of our species? Absolutely. Would I prefer to be living then? Sure as hell not. Likewise, if I had a choice I would pick being born 200 years from now; maybe they wouldn't be experiencing big technological leaps, but life would be better.

2

Smart-Tomato-4984 t1_jdqsjmp wrote

And it would be much better if we did not reproduce, but we should expect 105 billion more people to be born before we realize that filling the galaxy with human descendants would result in a tragedy of the galactic commons and an ecology of stronger civilizations eating weaker ones, due to evolution by natural and memetic selection.

1

ArcticWinterZzZ t1_jdqsh5c wrote

None of the other posters have given the ACTUAL correct answer, which is that an LLM set up like GPT-4 can never actually be good at math, for the simple reason that GPT-4 runs in O(1) time per token when asked to perform mental arithmetic, while the fastest known multiplication algorithm still takes O(n log n) time. For GPT-4 to be good at mathematics under those constraints would breach the laws of physics.

At minimum, GPT-4 needs space to actually calculate its answer.
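One way to see the point about "space": long multiplication needs a number of intermediate results that grows with the operand length, and a chain-of-thought scratchpad is one place that working space can live. A minimal Python sketch, where the scratchpad is just a list of recorded steps (not a model):

```python
def long_multiply_with_scratchpad(a: int, b: int):
    """Schoolbook multiplication, recording each partial product.

    The list of steps is the 'scratchpad': its length grows with the
    number of digits of b, so the total work is not O(1) in the input
    size. Forcing an answer in a single step removes this room to work.
    """
    steps = []
    total = 0
    for place, digit_char in enumerate(reversed(str(b))):
        digit = int(digit_char)
        partial = a * digit * 10 ** place
        steps.append(f"{a} x {digit} x 10^{place} = {partial}")
        total += partial
    return total, steps

product, scratch = long_multiply_with_scratchpad(1234, 5678)
# len(scratch) equals the digit count of the second operand.
```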

60

FoniksMunkee t1_jdqs9x9 wrote

It's a limitation of LLMs as they currently stand. They can't plan ahead, and they can't backtrack.

A human doing a problem like this would start, see where they get to, and perhaps try something else. But LLMs can't. MS wrote a paper on the state of GPT-4 in which they made this observation about why LLMs suck at math.

"Second, the limitation to try things and backtrack is inherent to the next-word-prediction paradigm that the model operates on. It only generates the next word, and it has no mechanism to revise or modify its previous

output, which makes it produce arguments “linearly”. "

They also argue that the model was probably not trained on as much mathematical data as code, and that more training will help. But they add that the issue above "...constitutes a more profound limitation."
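The no-backtracking point can be sketched in code: greedy next-token decoding commits to each choice and never revisits it. This is a toy illustration, with a made-up scoring function standing in for a language model's next-token scores:

```python
def greedy_decode(score, vocab, steps):
    """Greedy next-token decoding: pick the best token at each step.

    Once a token is emitted it is never revised -- there is no
    mechanism to backtrack, mirroring the limitation quoted above.
    `score` is a stand-in for a language model's next-token scorer.
    """
    out = []
    for _ in range(steps):
        out.append(max(vocab, key=lambda tok: score(out, tok)))
    return out

def toy_score(prefix, tok):
    # Purely illustrative scorer that prefers alternating tokens.
    return 1.0 if not prefix or prefix[-1] != tok else 0.0

print(greedy_decode(toy_score, ["a", "b"], 4))  # ['a', 'b', 'a', 'b']
```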

6

Smart-Tomato-4984 t1_jdqrzrf wrote

>A superintelligent AI could for sure bring back people from the past.

I don't think there is enough matter in the reachable universe to make a computer that big. It's not millions of possible minds; it's a near-infinity of possible minds. Also, you murdered all the other minds you tested out and then didn't go with.

1

No_Ninja3309_NoNoYes t1_jdqrqpf wrote

Yes, well, I have no problems with violence against violent criminals. Obviously the same goes for genocidal individuals. However, who is qualified to make this call? I don't think AI should do it. Even ASI. But of course AGI by some definition should be able to do it. I find the idea unacceptable. Not that humans do such a great job, but you have to draw the line somewhere.

1

acutelychronicpanic t1_jdqrppa wrote

Probably not? At least not any public models I've heard of. If you had an architecture-designing AI that was close to that good, you'd want to keep the secret sauce to yourself and use it to publish other research or develop products.

LLMs show absolutely huge potential as a conductor or executive that coordinates smaller modules. The plug-ins coming to ChatGPT are the more traditional software version of this. How long until an LLM can determine that it needs a specific kind of machine learning model to understand something, and just cooks up an architecture and chooses appropriate data?
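The conductor idea can be sketched as a simple dispatcher: a classifier (stubbed here, where a real system would call the LLM) routes each request to a specialist module. All names and modules are hypothetical:

```python
# Hypothetical sketch of an LLM-as-conductor: a router picks a
# specialist module for each request. `llm_classify` is a stub
# standing in for a real model call; MODULES plays the plug-in role.

def llm_classify(request: str) -> str:
    """Stub: decide which tool a request needs (a real system
    would ask the LLM to make this choice)."""
    if any(ch.isdigit() for ch in request):
        return "calculator"
    return "chat"

MODULES = {
    # eval with empty builtins: toy arithmetic only, not safe in general
    "calculator": lambda req: str(eval(req, {"__builtins__": {}})),
    "chat": lambda req: f"(chat reply to: {req})",
}

def conduct(request: str) -> str:
    return MODULES[llm_classify(request)](request)

print(conduct("2 + 3 * 4"))  # routed to the calculator module
```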

2

Smart-Tomato-4984 t1_jdqr5cl wrote

My thoughts exactly.

>"Equipping LLMs with agency and intrinsic motivation is a fascinating and important direction for future work." - Sparks of Artificial General Intelligence: Early experiments with GPT-4

Not good. It turns out we can seemingly have a pretty good oracle AGI, and they are screwing it up by trying to make it dangerous. Why? Why would we want it to have its own agency?

3

throwawaydthrowawayd t1_jdqqsur wrote

> For the chatbot thing, why can't the LLM generate a non-displayed output, "test it", and try again

You can! There are systems designed around that. OpenAI even had GPT-4 internally using a multi-stage response system (a read-execute-print loop, they called it) while testing, to give it more power. There are also the "Reflexion" posts on this sub lately, where they have GPT-4 improve its own writing. But, A, it's too expensive: using a reflective system means lots of extra tokens, and each token costs more electricity.

And B, LLMs currently love to get sidetracked. "Hallucination" is the term for when an LLM just starts making things up, or acting as if you asked a different question, or similar. Adding an internal thought process dramatically increases the chance of the LLM going off the rails. There are solutions to this (papers usually describe them as "grounding" the AI), but once again, they cost more money.

So that's why all these chatbots aren't as good as they could be: it's just not worth the electricity to them.
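The reflective setup described above can be sketched as a draft-critique-revise loop. `call_llm` is a hypothetical stub standing in for a real model API, and the call counter makes the cost point concrete, since every extra round means more model calls and more tokens:

```python
# Hypothetical sketch of a Reflexion-style loop: draft, critique,
# revise. `call_llm` is a stub; a real system would call a model API.

calls = {"n": 0}

def call_llm(prompt: str) -> str:
    """Stub standing in for a real (billable) model call."""
    calls["n"] += 1
    return f"[model output for: {prompt[:30]}]"

def answer_with_reflection(question: str, rounds: int = 2) -> str:
    draft = call_llm("Answer: " + question)
    for _ in range(rounds):  # each round = 2 extra model calls
        critique = call_llm("Critique this answer: " + draft)
        draft = call_llm("Revise given the critique: " + critique)
    return draft

answer_with_reflection("why do LLMs hallucinate?")
# 1 draft + 2 calls per round: reflection multiplies the token bill.
```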

5

HumanSeeing t1_jdqoznm wrote

"Lucky" also assumes that the singularity will automatically go well for humans. So I disagree with OP's assumption that it will be a great thing for humans by default. It can also go wrong for us, even just through indifference. This technology has enormous potential to change existence forever, in any direction, and it is way more difficult to make it go really well than badly.

But I hope I am wrong, and I hope the way we build these things will make it easy to align them. From another point of view, we can also argue purely about linguistics. I have no problem with someone saying they are lucky to be alive today and have access to the medicine that we have, or whatever.

But "lucky", yeah, is an abstract human concept. Saying specifically that we are lucky to experience a singularity almost assumes that nothing existed in the universe, and then a lottery happened to choose which era would be brought into existence, and this time was chosen and now we are here. When, yeah, that's not how it works.

0

HumanSeeing t1_jdqoad5 wrote

A superintelligent AI could for sure bring back people from the past. The more data about them, the better. If you like a particular artist, the AI could analyze some live shows, their body language, and their tone of voice, simulate millions of possible minds, and find one that would act exactly like that, and boom, there you have it.

7

HumanSeeing t1_jdqnpsq wrote

I second this! What connection would quantum entanglement and alien life in the universe have? I can sort of guess what you mean, but if so, you're misunderstanding entanglement.

Why is the speed of light not instant? From the point of view of light itself, it is: its moment of birth and its moment of death are the same moment. Light being born in the center of the sun and reaching your eye is all one moment for it. Light experiences no time. Really fascinating, trippy stuff.

I do agree that it is super unlikely for a universe like ours to exist. People can make whatever arguments they want, but the universe is fine-tuned for life. I'm not saying by some intelligent entity or anything, but the laws of physics, every one of them, work together to make all this possible. It is wild that the periodic table of elements and chemistry are possible at all. That it is possible for stars to shine.

So by now I think the most likely, and to me obvious, answer is that we live in a multiverse: that there are an infinite number of different universes, each with different possible laws of physics, and we just happen to be in one that supports life. I'd imagine the vast majority of possible universes are just energy and particles flying around, and that's it.

If anyone has any other hypothesis besides the multiverse, I would love to hear it. But then you need to explain the cosmic coincidence of why the laws of physics are exactly the way they are, set in exactly this way to allow for life.

And yeah, I agree that a good way to think about life is like a game. Don't take it too seriously. Although we pretend we are all doing super serious life stuff, wearing suits and going to meetings, as if that had any significance at all in the bigger picture. We are just tubes that put food in one end and poop it out the other, but remember, super serious!

5