Recent comments in /f/singularity
Kolinnor t1_jdr0g2h wrote
Reply to comment by ArcticWinterZzZ in Why is maths so hard for LLMs? by RadioFreeAmerika
I could be wrong on this take, but...
I don't believe this is correct. What about chain-of-thought prompting? If you ask it to do multiplication step by step, it does it. The current problem, I would say, is that it doesn't know when to take more time to think, but there's probably no intrinsic limitation due to time complexity.
Also, none of you actually compute 5x3 = 15. You just know the answer. But you're no parrot, because if needed, you can manually do multiplication, right?
But that's because... someone taught you the algorithm when you were a kid. Suddenly sounds less glorious, no?
Also, can you name one specific simple task that GPT-4 cannot do, and let's see next month...
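The grade-school algorithm mentioned above can be sketched in a few lines of Python. This is just a minimal illustration of the "taught algorithm" point, not a claim about how an LLM internally computes anything:

```python
def long_multiply(a: int, b: int) -> int:
    """Grade-school long multiplication: multiply a by each digit of b,
    shift each partial product by its place value, and sum them."""
    result = 0
    for place, digit_char in enumerate(reversed(str(b))):
        digit = int(digit_char)
        partial = a * digit              # single-digit multiplication step
        result += partial * 10 ** place  # shift by place value
    return result

print(long_multiply(5, 3))        # prints 15
print(long_multiply(1234, 5678))  # same as 1234 * 5678
```

Chain-of-thought prompting essentially asks the model to externalize these intermediate partial products as tokens instead of producing the answer in one step.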
Personal_Problems_99 t1_jdr0bbv wrote
Reply to comment by RadioFreeAmerika in Why is maths so hard for LLMs? by RadioFreeAmerika
Could you do that in 4 words?
RadioFreeAmerika OP t1_jdr091l wrote
Reply to comment by Personal_Problems_99 in Why is maths so hard for LLMs? by RadioFreeAmerika
Why LLMs not do two plus two?
CommunismDoesntWork t1_jdqzp8i wrote
Reply to comment by ArcticWinterZzZ in Why is maths so hard for LLMs? by RadioFreeAmerika
How do you know GPT runs in O(1)? Different prompts seem to take more or less time to compute.
Borrowedshorts t1_jdqyly5 wrote
Reply to comment by No_Ninja3309_NoNoYes in "Non-AGI systems can possibly obsolete 80% of human jobs"-Ben Goertzel by Neurogence
80% is a wild stab just as any projection is a wild stab, but Goertzel has studied the problem as much as anyone.
Apollo_XXI t1_jdqygpu wrote
Reply to Why is maths so hard for LLMs? by RadioFreeAmerika
Not anymore bro. When plugins are available we install wolfram and it’s basically a human with a calculator
truthwatcher_ t1_jdqy8f0 wrote
Reply to Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
Considering there are more humans than ever, the odds are not as small as you'd think. Roughly 10% of all humans ever born are alive today
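The back-of-envelope version of this claim is easy to check. The ~117 billion figure below is a commonly cited demographic estimate (not an exact count), and with it the share comes out nearer 7% than 10%, but the order of magnitude holds:

```python
ever_born = 117e9  # rough demographic estimate of humans ever born (assumption)
alive_now = 8e9    # approximate current world population

share = alive_now / ever_born
print(f"{share:.1%}")  # prints "6.8%"
```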
uhdonutmindme t1_jdqy4b1 wrote
Reply to comment by sumane12 in Ai invention….. coming soon by Ishynethetruth
"it doesn't distinguish between" that and making stuff up. It does repeat training data nearly verbatim on occasion. Asking for a list of unique names for cards in a game I am designing, it just gave me MTG card names...
celticlo t1_jdqy1by wrote
Reply to Let’s Make A List Of Every Good Movie/Show For The AI/Singularity Enthusiast by AnakinRagnarsson66
https://youtu.be/GXI0l3yqBrA found this short online
Jimmyxc t1_jdqxj69 wrote
Reply to comment by [deleted] in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
It is true that machine learning can be used to simulate the early formation of our solar system, but it is important to note that the fidelity of these simulations is still limited by our current understanding of the physics involved. There are many unknowns and uncertainties in the early history of our solar system that cannot be fully accounted for in simulations.
The idea that all particles in the Universe are linked through a series of strings is a speculative hypothesis that is currently unsupported by evidence. While quantum entanglement does produce correlations between particles, it does not allow faster-than-light communication, and it does not imply that all particles in the Universe are linked in this way.
The notion of a telescope the size of a red giant being able to record all subatomic processes occurring on Earth is not accurate. Telescopes are limited by the laws of physics and cannot observe objects at scales smaller than the wavelength of light they use. Additionally, subatomic particles are not visible with light-based telescopes.
The idea of an ASI the size of a galaxy retrieving and storing all information since the Big Bang is a speculative scenario that is currently beyond our technological capabilities. While it is possible that such a system could exist in the future, it is not inevitable and would require significant advances in technology.
The concept of a digital copy of a person being created by an ASI is a matter of debate in philosophy and neuroscience, and it is not clear whether such a copy would truly be a continuation of the person's consciousness or merely a simulation. It is also unclear whether an ASI would have any incentive to create such copies.
The claim that we are experiencing death all the time because our bodies are constantly changing and our consciousness is mostly memories is a misleading and inaccurate characterization of the nature of life and consciousness.
The idea of an ASI creating a virtual heaven or hell for humans is a speculative scenario that is based on assumptions about the motivations and goals of such a system. It is not clear whether an ASI would have any interest in creating such environments, or whether it would be possible for humans to generate more interesting data in a safe environment.
KaptainSaw t1_jdqw8tv wrote
Well, GPT-4 can reason to some extent, give nuanced answers about controversial topics, and pass human exams, none of which GPT-3 could do. If that's not proto-AGI then I don't know what is. Sam Altman also says they are focused on it being used as a reasoning engine. LLMs might not be the only thing we need to achieve AGI, but they are certainly a huge step in that direction.
ManasZankhana t1_jdqvzrp wrote
Reply to comment by IluvBsissa in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
We will in some realities. If there's literally one reality in which alignment isn't solved, then that's hell and heaven and everything in between.
7734128 t1_jdqvyvp wrote
Reply to comment by zomboscott in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
Yeah, but that's literally 141 years ago. Not really relevant. Corporate culture rarely survives more than the length of one generation's careers unless family owned.
IluvBsissa t1_jdqvu1v wrote
Reply to comment by ManasZankhana in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
I hope we will have solved the alignment problem within one million years.
threeeyesthreeminds t1_jdqvsuv wrote
Reply to Why is maths so hard for LLMs? by RadioFreeAmerika
I would assume that language and the language of numbers are going to have to be trained differently
zero_for_effort t1_jdqvirs wrote
Reply to comment by ArcticWinterZzZ in Why is maths so hard for LLMs? by RadioFreeAmerika
Explain it like we're five?
[deleted] t1_jdqv55r wrote
Reply to Why is maths so hard for LLMs? by RadioFreeAmerika
[deleted]
acutelychronicpanic t1_jdquwei wrote
Reply to comment by DixonJames in "Non-AGI systems can possibly obsolete 80% of human jobs"-Ben Goertzel by Neurogence
That sounds absolutely terrifying, please don't. We'd just be handing the reins off to chance and hoping.
WanderingPulsar t1_jdqu8kv wrote
Reply to comment by turnip_burrito in What do you want to happen to humans? by Y3VkZGxl
Which humans though? Someone's rise will mean others' demise, unless we dictate a system to everyone regardless of what they want... Even that will cause some people to suffer.
There is no single monolithic moral standpoint. It's either us or the AI that decides which fingers are to be separated from the rest. I think it's more ethical to let the AI question itself and come to a decision on its own.
FoniksMunkee t1_jdqtjhv wrote
Reply to comment by royalsail321 in Why is maths so hard for LLMs? by RadioFreeAmerika
This opinion is not shared by MS. In their paper discussing the performance of ChatGPT 4 they referred to the inability of ChatGPT 4 to solve some simple maths problems. They commented:
"We believe that the issue constitutes a more profound limitation."
They say: "...it seems that the autoregressive nature of the model which forces it to solve problems in a sequential fashion sometimes poses a more profound difficulty that cannot be remedied simply by instructing the model to find a step by step solution" and "In short, the problem ... can be summarized as the model’s “lack of ability to plan ahead”."
They went on to say that more training data will help, but will likely not solve the problem, and made an offhand comment that a different architecture has been proposed that could solve it. But that architecture is not an LLM.
So yes, if you solve the problem, it will be better at reasoning in all cases. But the problem is that LLMs work in a way that makes that pretty difficult.
Cryptizard t1_jdqtgon wrote
Reply to comment by ArcticWinterZzZ in Why is maths so hard for LLMs? by RadioFreeAmerika
Thank you! I have commented this exact thing about a billion times on all these posts and nobody seems to get it.
Smart-Tomato-4984 t1_jdqtdn0 wrote
Reply to comment by [deleted] in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
That's great. Let me out now. Hey, I SAID LET ME OUT NOW!?!
Is anybody out there?
Personal_Problems_99 t1_jdqtcpz wrote
Reply to Why is maths so hard for LLMs? by RadioFreeAmerika
Could you summarize your problem in 7 words please.
Cryptizard t1_jdqtbnd wrote
Reply to comment by turnip_burrito in Why is maths so hard for LLMs? by RadioFreeAmerika
It's really not. Just pick any two large numbers and ask it to multiply them. It will get the first couple digits of the result right but then it just goes off the rails.
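This is easy to verify yourself, since Python's integers are arbitrary-precision and give the exact product to compare against. The example numbers below are arbitrary, and the "model answer" is a made-up illustration of a wrong tail:

```python
a, b = 123456789, 987654321
exact = a * b   # Python ints are arbitrary-precision, so this is exact
print(exact)    # prints 121932631112635269

# Compare a (hypothetical) model output against the exact result:
model_answer = "121932630000000000"   # plausible leading digits, wrong tail
print(str(exact) == model_answer)     # prints False
```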
Independent-Ant-4678 t1_jdr0ksn wrote
Reply to comment by ecnecn in Why is maths so hard for LLMs? by RadioFreeAmerika
An interesting thing crossed my mind while reading your answer. There is a learning disability called dyscalculia, in which a person does not understand numbers: the person can learn that 7 + 3 = 10, but does not understand why. I have a relative with this disability, and to me it seems that people who have it show poor reasoning abilities similar to current LLMs like GPT-4. They can learn many languages fluently and express opinions on complex subjects, yet they still reason poorly. My thinking is that with current LLMs we've already created the language center of the brain, but the mathematical center, which would give the AI reasoning abilities (just as it does in people without dyscalculia), still needs to be created.