Recent comments in /f/singularity

alexiuss t1_jdr6xs7 wrote

Answer from my GPT-3.5 assistant:

GPT-3's mathematical abilities are limited by its programming and training data. It may struggle with more complex mathematical concepts or equations that require advanced problem-solving skills.

Furthermore, GPT-3's mathematical output may be affected by the quality and accuracy of its input and training data. If either is incomplete, inaccurate, or biased, the output may be skewed or incorrect.

Therefore, to answer your question, GPT-3 may not be the best tool for performing complex mathematical computations due to its limited programming and training data. However, it can still perform simple calculations and solve basic equations.

1

MassiveIndependence8 t1_jdr6u2t wrote

It takes GPT the same amount of time to produce any token, and since it's impossible to multiply, say, 18837678995747 x 29747778847678877 in the same amount of time as 2 x 2 (the former is simply more work), we can confidently say that GPT will never be able to do math this way: it would mean every hard problem out there is as easy as the easy ones.
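A back-of-the-envelope way to see the asymmetry (my own sketch, not anything GPT actually computes): schoolbook multiplication costs one elementary step per digit pair, so the work grows with the length of the operands, while a transformer spends the same fixed compute per generated token regardless.

```python
def digit_mul_steps(a: int, b: int) -> int:
    """Count the elementary digit-by-digit multiplications
    that schoolbook long multiplication would perform."""
    return len(str(a)) * len(str(b))

print(digit_mul_steps(2, 2))                               # 1 step
print(digit_mul_steps(18837678995747, 29747778847678877))  # 238 steps
```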

10

Kolinnor t1_jdr6m19 wrote

- This is a special puzzle for you. Can you read it ? ! enihcam ,uoy rof drah oot si siht teb I .oot hcnerf ni ,yakO .tset a si sihT .seog ti woh ees dna tpmorp wen a ni ecnetnes regnol euqinu erom a htiw yrT .ti deirt I nehw krow tndid ,drieW

>Of course! The text appears to be reversed. Here's the corrected version:
>
>Weird, didn't work when I tried it. Try with a more unique longer sentence in a new prompt and see how it goes. This is a test. Okay, in french too. I bet this is too hard for you, machine! Can you read it?

It kinda fucked up at the end because it repeated "can you read it", but it got the reversed text correct.

15

No_Ninja3309_NoNoYes t1_jdr6b85 wrote

LLMs are statistical models, whereas maths uses symbols. It's a different approach altogether. If we write an add function, we need two inputs, a and b.

`def add(a, b): return a + b`

We see two symbols or variables, a and b, plus the function name, the function definition, and the plus operation. An LLM, by contrast, sees many tokens; a dozen, perhaps. That's also completely different from what a compiler/interpreter sees. There's neurosymbolic AI, which combines deep learning (as in current LLMs) with symbolic AI, but AFAIK it's not that good yet, I guess because it's hard to mix both approaches.
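To make the contrast concrete, here's a sketch using Python's own stdlib (the whitespace split is a crude stand-in for a real LLM tokenizer, not an actual one): the interpreter builds a symbolic tree from the one-liner, while an LLM only ever sees a flat token sequence.

```python
import ast

src = "def add(a, b): return a + b"

# What the interpreter sees: a symbolic tree with a FunctionDef,
# two named arguments, and a BinOp(Add) over the symbols a and b.
tree = ast.parse(src)
print(ast.dump(tree.body[0], indent=2))

# Crude stand-in for an LLM's view: a flat sequence of tokens, with no
# built-in notion that `a` in the arguments and `a` in the body are the
# same symbol. This one-liner comes out to exactly a dozen tokens.
tokens = (src.replace("(", " ( ").replace(")", " ) ")
             .replace(":", " : ").replace(",", " , ").split())
print(tokens)
```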

2

dronegoblin t1_jdr4bat wrote

LLMs don't actually have any logical capacity; they've just seen large amounts of text and can predict what logically comes next.

There is a lot of randomness to this, and even at a temperature of 0 (the most consistent output) it will still say things differently sometimes. That's ok though; you can answer the same question in many different ways with language.
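A minimal sketch of what temperature does during sampling (my own illustration, not OpenAI's actual implementation): logits are divided by the temperature before the softmax, so low temperature sharpens the distribution toward the top token, but any temperature above zero still draws at random.

```python
import math
import random

def sample(logits, temperature=1.0, rng=random):
    """Sample an index from temperature-scaled softmax probabilities."""
    if temperature == 0:  # degenerate case: greedy argmax, fully deterministic
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max before exp for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()  # random draw: nonzero temperature is never deterministic
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5]
print(sample(logits, temperature=0))  # always index 0 (greedy)
```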

Math is not like writing. In math there is only one correct option for what comes next. But between not actually being able to reason logically and only having the internet as its source of examples, it's going to treat math like language. Not all the math on the internet is done the same way, or even correctly, so it's just combining whatever it's seen in a way that seems plausible. It can't count; it can only guess.

1

Kolinnor t1_jdr412t wrote

I just copy-pasted this on GPT-4 and here's the answer:

>I am doing well, thank you! As an AI language model, I don't have feelings or emotions like humans do, but I'm here to help you with any questions or information you need. Please let me know how I can assist you.

- What was my sentence saying?

>Your sentence appears to be written backwards. When reversed, it says, "How are you?" You were asking about my well-being.

18

EvilKatta t1_jdr3atm wrote

Humans process language multi-modally. We don't just predict the next word (although we do that as well); we also visualize. We decode language as images projected onto an internal screen that we're not consciously aware of (read Louder Than Words by B. Bergen on that). We can imagine 2 as two objects and 3 as three, imagine all kinds of transformations and rotations of those objects, and use all kinds of internal shortcuts to do arithmetic.

Or we can take a calculator and use that. It's another thing that language models lack, even though they're run on a "computer".

I believe that when AIs are given these capabilities, they will do math "out of the box", no problem.
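What "giving the model a calculator" could look like in the simplest case (a toy sketch, not any real tool-use API; the `calc` helper is hypothetical): the model emits an arithmetic expression as text, and a trusted evaluator computes the result exactly instead of the model guessing digits.

```python
import ast
import operator

# Safe evaluator for arithmetic-only expressions: a toy "calculator tool".
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr: str):
    """Evaluate a plain arithmetic expression exactly, rejecting anything else."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("not a plain arithmetic expression")
    return ev(ast.parse(expr, mode="eval"))

print(calc("18837678995747 * 29747778847678877"))  # exact, no guessing
```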

6

Redditing-Dutchman t1_jdr1rsl wrote

These models also have a random element; that's why they give a slightly different answer even if you ask the same question again. With text this is ok-ish, but with math you need to be precise.

Even then it might get common stuff right, but I can easily think of a sum that has never been written anywhere in the history of the internet: 568753334668864468000 + 7654778875 + 433367886554.
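Any ordinary programming language handles that never-seen-before sum exactly, which is the gap being pointed at. A minimal illustration in plain Python (arbitrary-precision integers, nothing model-specific):

```python
# Python ints are arbitrary precision, so a novel sum is computed
# exactly rather than pattern-matched against seen examples.
total = 568753334668864468000 + 7654778875 + 433367886554
print(total)  # 568753335109887133429
```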

2