Recent comments in /f/singularity

Independent-Ant-4678 t1_jdr0ksn wrote

An interesting thing crossed my mind while reading your answer. There is a learning disability called dyscalculia, in which a person does not understand numbers: they can learn that 7 + 3 = 10, but they do not understand why. I have a relative with this disability, and it seems to me that people who have it show poor reasoning abilities similar to current LLMs like GPT-4. They can learn many languages fluently and express opinions on complex subjects, yet their reasoning remains weak. My thinking is that with current LLMs we've already created the language center of the brain, but the mathematical center still needs to be created, as that is what will give the AI reasoning abilities (just as it does in people who don't have dyscalculia).

40

Kolinnor t1_jdr0g2h wrote

I could be wrong on this take, but...

I don't believe this is correct. What about chain-of-thought prompting? If you ask it to do multiplication step by step, it does it. The current problem, I would say, is that it doesn't know when to take more time to think, but there's probably no intrinsic limitation due to time complexity.
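To make that concrete, here's a minimal sketch of what chain-of-thought prompting looks like. `ask_model` is a hypothetical placeholder, standing in for whatever LLM client you actually use:

```python
# Hypothetical helper: stands in for a real LLM API call.
def ask_model(prompt: str) -> str:
    """Send `prompt` to a language model and return its reply."""
    ...

# A bare question tends to get a one-shot, memorised-style answer:
direct_prompt = "What is 347 * 68?"

# Asking for intermediate steps makes the model spend tokens on the
# partial products instead of guessing the result in one go:
cot_prompt = (
    "What is 347 * 68? Work step by step: split 68 into 60 + 8, "
    "compute 347 * 60 and 347 * 8 separately, then add the two "
    "partial products before stating the final answer."
)
```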

Also, none of you actually computes 5 × 3 = 15. You just know the answer. But you're no parrot, because if needed you can do the multiplication manually, right?

But that's because... someone taught you the algorithm when you were a kid. Suddenly it sounds less glorious, no?
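And that childhood algorithm is tiny. Here's a rough Python sketch of schoolbook long multiplication (illustrative only, assuming non-negative integers):

```python
def long_multiply(a: int, b: int) -> int:
    """Schoolbook long multiplication: handle one digit of b at a time."""
    total = 0
    for place, digit_char in enumerate(reversed(str(b))):
        partial = a * int(digit_char)   # multiply a by a single digit of b
        total += partial * 10 ** place  # shift by place value, then accumulate
    return total

assert long_multiply(347, 68) == 347 * 68
```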

Also, can you name one specific simple task that GPT-4 cannot do? Let's see next month...

29

Jimmyxc t1_jdqxj69 wrote

It is true that machine learning can be used to simulate the early formation of our solar system, but it is important to note that the fidelity of these simulations is still limited by our current understanding of the physics involved. There are many unknowns and uncertainties in the early history of our solar system that cannot be fully accounted for in simulations.

The idea that all particles in the Universe are linked through a series of strings is a speculative hypothesis that is currently unsupported by evidence. And while quantum entanglement does correlate the measured states of particles, it does not allow faster-than-light communication, nor does it imply that all particles in the Universe are linked in this way.

The notion of a telescope the size of a red giant recording all subatomic processes occurring on Earth is not accurate. Telescopes are bound by diffraction: they cannot resolve features much smaller than the wavelength of the light they collect, and at astronomical distances the practical resolution is far coarser than that. Subatomic particles are therefore not visible to any light-based telescope.
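A back-of-envelope check, where every number is an illustrative assumption (a ~1 AU aperture for "red giant sized", green light, an observer 10 light years from Earth):

```python
D = 1.5e11            # aperture diameter in metres (~1 AU, "red giant" scale)
wavelength = 500e-9   # green light, in metres
distance = 9.46e16    # 10 light years, in metres

theta = 1.22 * wavelength / D   # Rayleigh criterion: angular resolution, radians
resolution = theta * distance   # smallest resolvable feature on Earth, metres

# ~0.38 m: enough to spot household objects, but some 34 orders of
# magnitude too coarse for subatomic scales (~1e-15 m).
print(f"{resolution:.2f} m")
```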

The idea of an ASI the size of a galaxy retrieving and storing all information since the Big Bang is a speculative scenario that is currently beyond our technological capabilities. While it is possible that such a system could exist in the future, it is not inevitable and would require significant advances in technology.

The concept of a digital copy of a person being created by an ASI is a matter of debate in philosophy and neuroscience, and it is not clear whether such a copy would truly be a continuation of the person's consciousness or merely a simulation. It is also unclear whether an ASI would have any incentive to create such copies.

The claim that we are experiencing death all the time because our bodies are constantly changing and our consciousness is mostly memories is a misleading and inaccurate characterization of the nature of life and consciousness.

The idea of an ASI creating a virtual heaven or hell for humans is a speculative scenario that is based on assumptions about the motivations and goals of such a system. It is not clear whether an ASI would have any interest in creating such environments, or whether it would be possible for humans to generate more interesting data in a safe environment.

1

KaptainSaw t1_jdqw8tv wrote

Well, GPT-4 can reason to some extent, give nuanced answers on controversial topics, and pass human exams in a way GPT-3 could not. If that's not proto-AGI, then I don't know what is. Sam Altman also says they are focused on it being used as a reasoning engine. LLMs might not be the only thing we need to achieve AGI, but they are certainly a huge step in that direction.

1

WanderingPulsar t1_jdqu8kv wrote

Which humans, though? Someone's rise will mean someone else's demise, unless we dictate one system to everyone regardless of what they want... and even that will cause some people to suffer.

There is no monolithic moral standard. It's either us or the AI deciding which fingers are to be separated from the rest. I think it's more ethical to let the AI question itself and come to a decision on its own.

−1

FoniksMunkee t1_jdqtjhv wrote

This opinion is not shared by MS. In their paper discussing GPT-4's performance, they refer to GPT-4's inability to solve some simple maths problems. They commented:

"We believe that the issue constitutes a more profound limitation."

They say: "...it seems that the autoregressive nature of the model which forces it to solve problems in a sequential fashion sometimes poses a more profound difficulty that cannot be remedied simply by instructing the model to find a step by step solution" and "In short, the problem ... can be summarized as the model’s “lack of ability to plan ahead”."

They went on to say that more training data will help - but will likely not solve the problem - and made an offhand comment that a different architecture had been proposed that could address it, though that would not be an LLM.

So yes, if you solve that problem, it will be better at reasoning in all cases. But the catch is that LLMs work in a way that makes this pretty difficult.
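To illustrate the "sequential fashion" the paper describes, here is a toy sketch of greedy autoregressive decoding; `next_token_distribution` is a hypothetical stand-in for a real language model:

```python
from typing import Callable

# Hypothetical stand-in for a trained language model's output head.
def next_token_distribution(prefix: list[str]) -> dict[str, float]:
    """Return P(next token | prefix) for every candidate token."""
    ...

def greedy_decode(prompt: list[str],
                  steps: int,
                  model: Callable[[list[str]], dict[str, float]]) -> list[str]:
    tokens = list(prompt)
    for _ in range(steps):
        probs = model(tokens)
        # Commit to the locally best token. Nothing here scores the answer
        # as a whole, and there is no backtracking if this choice makes the
        # rest of the solution impossible - hence "lack of ability to plan ahead".
        tokens.append(max(probs, key=probs.get))
    return tokens
```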

4