Recent comments in /f/singularity
RadioFreeAmerika OP t1_jdr6zub wrote
Reply to comment by No_Ninja3309_NoNoYes in Why is maths so hard for LLMs? by RadioFreeAmerika
Looking forward to neurosymbolic AI then.
GoldenRain t1_jdr6z8w wrote
Reply to comment by Kolinnor in Why is maths so hard for LLMs? by RadioFreeAmerika
Ah great, that's impressive!
alexiuss t1_jdr6xs7 wrote
Reply to Why is maths so hard for LLMs? by RadioFreeAmerika
Answer from my GPT-3.5 assistant:
GPT-3's mathematical abilities are limited by its programming and training data. It may struggle with more complex mathematical concepts or equations that require advanced problem-solving skills.
Furthermore, GPT-3's mathematical output may be affected by the quality and accuracy of the input data. If the input data is incomplete or inaccurate, the output may also be incorrect.
In addition, GPT-3's mathematical output may also be affected by its training data. If the training data is biased or incomplete, the output may be skewed or incorrect.
Therefore, to answer your question, GPT-3 may not be the best tool for performing complex mathematical computations due to its limited programming and training data. However, it can still perform simple calculations and solve basic equations.
MassiveIndependence8 t1_jdr6u2t wrote
Reply to comment by zero_for_effort in Why is maths so hard for LLMs? by RadioFreeAmerika
It takes GPT the same amount of time to generate each token no matter what. Since it's impossible to multiply, say, "18837678995747 x 29747778847678877" in the same amount of time as "2 x 2" (the first is simply more work), we can confidently say that GPT will never be able to do math this way: it would mean every hard problem out there is as easy as the easy ones.
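To put rough numbers on that (a toy illustration of my own; real transformer costs are more subtle than this, but the asymmetry is real):

```python
# Grade-school long multiplication needs one single-digit multiply per
# pair of digits, so the work grows with input length -- while a
# transformer spends a fixed amount of compute per generated token.
def digit_multiplications(a, b):
    return len(str(a)) * len(str(b))

print(digit_multiplications(2, 2))                               # 1
print(digit_multiplications(18837678995747, 29747778847678877))  # 238
```

238 times the work for the big product, but the model's per-token compute budget never changes.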
0382815 t1_jdr6r9p wrote
Reply to comment by zero_for_effort in Why is maths so hard for LLMs? by RadioFreeAmerika
The ELI5 is that it doesn’t actually calculate anything when you ask it to solve an equation. It guesses what the answer looks like—probably some combination of numbers—based on what it saw in the training dataset.
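A toy sketch of "guessing what the answer looks like" (my own illustration, nothing like GPT's actual internals):

```python
from collections import Counter

# The "model" just counts which answer it has seen most often after
# "7 x 8 =" in its training text -- it never multiplies anything.
training_text = ["7 x 8 = 56", "7 x 8 = 56", "7 x 8 = 54"]
answers = Counter(line.split("= ")[1] for line in training_text)
print(answers.most_common(1)[0][0])  # "56": the most *seen* answer, not a computed one
```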
Kolinnor t1_jdr6m19 wrote
Reply to comment by GoldenRain in Why is maths so hard for LLMs? by RadioFreeAmerika
- This is a special puzzle for you. Can you read it ? ! enihcam ,uoy rof drah oot si siht teb I .oot hcnerf ni ,yakO .tset a si sihT .seog ti woh ees dna tpmorp wen a ni ecnetnes regnol euqinu erom a htiw yrT .ti deirt I nehw krow tndid ,drieW
>Of course! The text appears to be reversed. Here's the corrected version:
>
>Weird, didn't work when I tried it. Try with a more unique longer sentence in a new prompt and see how it goes. This is a test. Okay, in french too. I bet this is too hard for you, machine! Can you read it?
It kinda fucked up at the end because it repeated "can you read it", but it got the reversed text correct
No_Ninja3309_NoNoYes t1_jdr6b85 wrote
Reply to Why is maths so hard for LLMs? by RadioFreeAmerika
LLMs are statistical models, whereas maths works with symbols. It's a different approach altogether. If we write an add function, it needs two inputs, a and b:
def add(a, b): return a + b
We see two symbols, or variables, a and b, plus the function name, the definition syntax, and the plus operation. An LLM, by contrast, sees a stream of tokens, a dozen perhaps. That's completely different from what a compiler/interpreter sees, too. There is neurosymbolic AI, which combines deep learning as in current LLMs with symbolic AI, but AFAIK it's not that good yet, I guess because it's hard to mix the two approaches.
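Roughly like this (a hypothetical regex tokenizer just for illustration; real LLMs use BPE and split text differently, but the point is the same):

```python
import re

# The LLM sees a flat stream of tokens; a compiler sees a parse tree.
def toy_tokenize(text):
    return re.findall(r"\w+|[^\w\s]", text)

print(toy_tokenize("def add(a, b): return a + b"))
# ['def', 'add', '(', 'a', ',', 'b', ')', ':', 'return', 'a', '+', 'b']
```

Twelve tokens, with no notion that a and b are variables or that + is an operation.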
scarlettforever t1_jdr5fyn wrote
Reply to comment by KingsleyZissou in The whole reality is just so bizzare when you really think about it. by aalluubbaa
Exactly. The NEED to colonize or expand is a DNA-level survival strategy that humans unreasonably project onto AI. It's especially weird to project it onto an ASI that will be smarter than all of humanity.
GoldenRain t1_jdr57k1 wrote
Reply to comment by Kolinnor in Why is maths so hard for LLMs? by RadioFreeAmerika
Weird, didn't work when I tried it. Try with a more unique longer sentence in a new prompt and see how it goes.
dronegoblin t1_jdr4bat wrote
Reply to Why is maths so hard for LLMs? by RadioFreeAmerika
LLMs don't actually have any logical capacity; they've just seen large amounts of text and can predict what logically comes next.
There is a lot of randomness to this, and even at a temperature of 0 (the most consistent output) it will still say things differently sometimes. That's OK for language, though: you can answer the same question in many different ways with words.
Math is not like writing. In math there is only one correct option for what comes next. But between not being able to logically reason and only having the internet as its example of math, an LLM is going to treat math like language. Not all the math on the internet is done the same way, or even correctly, so it's just combining whatever it has seen in a way that seems plausible. It can't count, only guess.
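Here's a minimal sketch of temperature sampling (my own illustration, not OpenAI's implementation): at temperature 0 you always get the single most likely token, and above 0 the alternatives stay in play.

```python
import math
import random

def sample(logits, temperature):
    # logits: token -> score; higher means the model finds it more likely
    if temperature == 0:
        return max(logits, key=logits.get)  # greedy: the single best token
    scaled = [(tok, score / temperature) for tok, score in logits.items()]
    z = sum(math.exp(s) for _, s in scaled)
    r, acc = random.random(), 0.0
    for tok, s in scaled:
        acc += math.exp(s) / z  # softmax probability of this token
        if r < acc:
            return tok
    return tok  # guard against floating-point rounding

logits = {"4": 2.0, "5": 1.5, "22": 0.5}  # made-up scores for "2 + 2 ="
print(sample(logits, 0))  # always "4"
```

With language a "5" or "22" slipping through now and then is survivable; with math it's simply wrong.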
RadioFreeAmerika OP t1_jdr46f0 wrote
Reply to comment by throwawaydthrowawayd in Why is maths so hard for LLMs? by RadioFreeAmerika
Very insightful! Seems like even without groundbreaking stuff, more efficient hardware will likely make the solutions you mentioned more feasible in the future.
Kolinnor t1_jdr412t wrote
Reply to comment by GoldenRain in Why is maths so hard for LLMs? by RadioFreeAmerika
I just copy-pasted this on GPT-4 and here's the answer:
>I am doing well, thank you! As an AI language model, I don't have feelings or emotions like humans do, but I'm here to help you with any questions or information you need. Please let me know how I can assist you.
- What was my sentence saying?
>Your sentence appears to be written backwards. When reversed, it says, "How are you?" You were asking about my well-being.
_Sesire t1_jdr3f5h wrote
Another way to phrase this: @LEVIS is using AI-generated people as a way to avoid hiring and paying models who aren't white
RadioFreeAmerika OP t1_jdr3b6j wrote
Reply to comment by ArcticWinterZzZ in Why is maths so hard for LLMs? by RadioFreeAmerika
Thank you very much for the clarification! Do you know whether it is possible to build an LLM with more space and greater complexity than O(1), or how that could be added to GPT-4, with or without plug-ins?
EvilKatta t1_jdr3atm wrote
Reply to Why is maths so hard for LLMs? by RadioFreeAmerika
Humans process language multi-modally. We don't just predict the next word (although we do that as well), we also visualize. We decode language into images projected onto an internal screen that we're not consciously aware of (read Louder Than Words by B. Bergen on that). We can imagine 2 as two objects and 3 as three, imagine all kinds of transformations and rotations of those objects, and use all kinds of internal shortcuts to do arithmetic.
Or we can take a calculator and use that. That's another thing language models lack, even though they run on a "computer".
I believe that when AIs are given these capabilities, they will do math "out of the box", no problem.
GoldenRain t1_jdr38ub wrote
Reply to comment by maskedpaki in "Non-AGI systems can possibly obsolete 80% of human jobs"-Ben Goertzel by Neurogence
Even OpenAI says LLMs are unlikely to be the path to AGI.
RadioFreeAmerika OP t1_jdr2woq wrote
Reply to comment by inigid in Why is maths so hard for LLMs? by RadioFreeAmerika
On the one hand, while we read one Wikipedia page, the AI could train on all information on multiplication. On the other hand, yes, we might need a dataset for maths.
GoldenRain t1_jdr2unm wrote
Reply to comment by Kolinnor in Why is maths so hard for LLMs? by RadioFreeAmerika
>Also, can you name one specific simple task that GPT-4 cannot do, and let's see next month...
It cannot read even simple stuff in reverse, like "?uoy era woH"
Artanthos t1_jdr2a7b wrote
Reply to comment by DankestMage99 in How will you spend your time if/when AGI means you no longer have to work for a living (but you still have your basic needs met such as housing, food etc..)? by DreaminDemon177
The more feasible parts of Star Trek are already here, only better.
The Hollywood Magic parts will remain Hollywood magic.
No amount of AI is going to change physics and start creating something from nothing.
RadioFreeAmerika OP t1_jdr25uz wrote
Reply to comment by FoniksMunkee in Why is maths so hard for LLMs? by RadioFreeAmerika
So plugins, I guess? Or completely integrating another model?
AGI_69 t1_jdr20u8 wrote
Reply to Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
There is another take:
We are the last people, because AGI will kill us all.
The greatest number of people ever alive at one time is alive right now. Statistically, you are most likely to exist at the peak of human civilization.
Redditing-Dutchman t1_jdr1rsl wrote
Reply to Why is maths so hard for LLMs? by RadioFreeAmerika
These models also sample with some randomness, which is why they give a slightly different answer even if you ask the same question again. With text this is OK-ish, but with math you need to be precise.
Even then it might get common stuff right, but I can easily think of a sum that has never been written anywhere in the history of the internet: 568753334668864468000 + 7654778875 + 433367886554.
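For what it's worth, any plain calculator (or one line of Python, with its exact big-integer arithmetic) settles that sum instantly:

```python
# Exact integer arithmetic: no guessing involved.
total = 568753334668864468000 + 7654778875 + 433367886554
print(total)  # 568753335109887133429
```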
AGI_69 t1_jdr1n6a wrote
Reply to comment by [deleted] in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
Okay, with this post, the sub finally reached the bottom.
[deleted] t1_jdr0ux6 wrote
Reply to Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
[removed]
DerTaeter t1_jdr712i wrote
Reply to Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
Someone has to be the one to experience it, simple as that.
And how lucky should we consider a person who was born just after WWII in the West and will die before the singularity, but who still had a FAR better life on average than 99% of the humans before him?