Recent comments in /f/singularity

FomalhautCalliclea t1_jdqmjjs wrote

I answered "other".

The thing is that your reasoning is thwarted by the very form of the question: luck.

Luck is about probability. And in order to assess probability, one has to possess a data set large enough to make comparisons and try to detect patterns (from which we can predict).

But the issue with the question at hand is that we have a data set of 1 (one) sample: us. We don't have anything to compare it with. It's like drawing one card at random from a deck and then wondering, after the pick, about the odds of having picked that card, without knowing anything about the other cards.
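A toy illustration of that point (a minimal Python sketch; the deck and its composition are my own invented example): estimating anything from draws only starts to work once you have many draws, and a single draw tells you essentially nothing.

```python
import random

# Hypothetical 10-card deck the observer knows nothing about:
# 9 "red" cards, 1 "blue" card.
deck = ["red"] * 9 + ["blue"]

def estimate_blue_fraction(n_draws: int) -> float:
    """Estimate the fraction of blue cards from n random draws (with replacement)."""
    draws = [random.choice(deck) for _ in range(n_draws)]
    return draws.count("blue") / n_draws

print(estimate_blue_fraction(1))      # always 0.0 or 1.0 -- one sample can't show a pattern
print(estimate_blue_fraction(10000))  # converges near the true 0.1
```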

It's the main problem behind teleological reasoning (reasoning about the goals and ends of things): it projects confirmation bias from what you have already experienced onto things you haven't, trying to find patterns in the unknown. It's not hard to guess how this can go wrong.

As for luck, here's a Chinese story illustrating the limits of the concept:

A farmer's only horse flees into the wilderness. His neighbours tell him, "Oh my, you're really unlucky, this horse was so useful to your work, this is a bad thing!" He answers, "Maybe."

The following day, the horse comes back with 5 wild horses. The neighbours say, "Wow, you're so lucky, you gained 5 free horses, this is a good thing!" He answers, "Maybe."

The following day, his son tries to tame one of the wild horses, falls, and breaks his leg. The neighbours: "Oh my, this is really unlucky, your son was such a huge help at the farm, this is a bad thing!" He answers, "Maybe."

The following day, war is declared. The king is forcefully conscripting every young man able to fight. The soldiers see the farmer's son and decide not to take him because of his broken leg. The neighbours: "Wow, you're so lucky, your son won't die in the war, this is a good thing!" He answers, "Maybe."

The moral: reasoning about unknowns and their consequences for our lives and our very subjective, limited desires is often meaningless.

1

AdaptivePerfection t1_jdqmij8 wrote

Anyone know what would happen if we modified our brains to increase their intelligence instead of making an ASI? Would we emerge with new emotions or a new comprehension of the universe? Reminds me of the ants-next-to-the-highway metaphor. Why not use AI to increase our understanding of our own biology, so we can make each individual the equivalent of an ASI in intelligence?

4

ecnecn t1_jdqlr0w wrote

They need to design a Large Arithmetical Symbol Model that predicts the next combination of arithmetical operators; then the LLM and the LASM could coexist, just like GPT-4 and WolframAlpha.
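A minimal sketch of what that coexistence might look like (my own toy routing example; `llm_answer` is a hypothetical stand-in for the language-model side): exact arithmetic gets routed to a symbolic evaluator instead of being predicted token by token.

```python
import ast
import operator

# The "LASM" role: an exact evaluator for plain arithmetic.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_arithmetic(expr: str):
    """Safely evaluate a +-*/ expression by walking its AST."""
    def walk(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval").body)

def llm_answer(question: str) -> str:
    """Hypothetical stand-in for the LLM side of the pairing."""
    return f"[LLM would answer: {question!r}]"

def answer(question: str) -> str:
    """Route exact math to the symbolic side, everything else to the LLM."""
    try:
        return str(eval_arithmetic(question))
    except (ValueError, SyntaxError, KeyError):
        return llm_answer(question)

print(answer("123456 * 789"))                  # exact: 97406784
print(answer("Why is the sky blue?"))          # falls through to the LLM
```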

46

datsmamail12 t1_jdqlmzf wrote

Everyone is suddenly talking about sparks of AGI. Whether or not we have it yet doesn't matter; what matters is that we are one step away from achieving it, which is a crazy thing to think about. Some people were so bold in their statements that we might never get AGI that they were willing to bet money on it. But here we are in 2023, hearing from different people that AGI is near. Incredible times!

1

plateauphase t1_jdqlimg wrote

yeah, it's kind of impossible, if not absurdly difficult and jarringly unintuitive, under physicalist assumptions. fortunately, the scientific theories are metaphysically neutral, so it's open to alternative interpretations, such as analytic idealism!

basically, physicality is the appearance of mental processes as seen from across a private conscious pov. like the dashboard of dials on a plane, which genuinely displays measurements of an external world, physical properties represent the external world; but that world is not physical, it is mental. mental just means of the same kind as consciousness, which is all we ever directly know.

this doesn't explain mind in terms of some other existent, but takes mind as the reductive base, exactly like physicalism doesn't explain 'the physical' but takes that as its reductive base. however, 'physicality' is a purely transcendental, non-mental existent which cannot be experienced; it is a metaphysical postulate, not an empirical observation. consciousness, mental processes, experientiality, on the other hand, is the only given of nature which we directly and most intimately know.

2

RadioFreeAmerika OP t1_jdqky02 wrote

Ah, okay, thanks. I have to look more into this vector-number representation.

For the chatbot thing, why can't the LLM generate a non-displayed output, "test" it, and try again until it is confident it is right, and only then display it? Ideally with a time counter that at some point makes it just display what it has, with a qualifier. Or, if the confidence is still very low, just state that it doesn't know.
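That loop is easy to sketch. A minimal, hypothetical version (the `draft` and `score_confidence` callables, the threshold, and the time budget are all my own placeholders, not anything an existing chatbot exposes):

```python
import time

CONF_THRESHOLD = 0.9   # "confident it is right"
TIME_BUDGET_S = 5.0    # the "time counter"

def answer_with_self_check(prompt, draft, score_confidence):
    """Draft hidden answers and retry until one is confident enough,
    or the time budget runs out; then show the best attempt, qualified."""
    deadline = time.monotonic() + TIME_BUDGET_S
    best, best_conf = None, -1.0
    while time.monotonic() < deadline:
        candidate = draft(prompt)                   # non-displayed output
        conf = score_confidence(prompt, candidate)  # the "test it" step
        if conf > best_conf:
            best, best_conf = candidate, conf
        if conf >= CONF_THRESHOLD:
            return candidate                        # confident: display as-is
    if best_conf < 0.2:
        return "I don't know."                      # confidence still very low
    return f"(low confidence) {best}"               # timed out: qualify it
```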

2

Surur t1_jdqjdyr wrote

I would add that one issue is that transformers are not Turing complete, so they cannot perform an arbitrary calculation of arbitrary length. However, recurrent neural networks, which loop, are Turing complete, so it is not a fundamental issue.

Also, there are ways to make transformers Turing complete.
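The difference is easy to see in a conceptual Python sketch (not real model code): an RNN-style computation can keep looping for an input-dependent number of steps, while a vanilla transformer applies a fixed stack of layers, so the work per forward pass is bounded. (Looping a transformer over its own output, chain-of-thought style, is one of the proposed ways around this.)

```python
def rnn_run(step, state, halted, max_steps=10**6):
    """RNN-style: apply the same step function until a halt condition fires.
    The number of iterations depends on the input and is unbounded in principle."""
    for _ in range(max_steps):
        if halted(state):
            break
        state = step(state)
    return state

def transformer_run(layers, state):
    """Vanilla-transformer-style: a fixed stack of layers, hence a fixed
    amount of computation per forward pass, however hard the input is."""
    for layer in layers:
        state = layer(state)
    return state
```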

3

HumpyMagoo t1_jdqj3cl wrote

GPT4, make a better version of yourself. GPT4, after you make a better version of yourself, I will hardwire multiple machines together, with virtual machines and other devices linked in. All of you GPT4s, work together using the combined computing power of all devices to make a better overall version, while also upgrading yourselves, recruiting more devices through bots online, and merging.

1

RadioFreeAmerika OP t1_jdqix38 wrote

Thanks! I will play around with maths questions expressed solely in language. What I wonder about, however, is not the complex questions but the simple ones, for which incorrect replies are quite common, too.

From the response, it seems that while some problems are inherent to LLMs, most can, and most probably will, be addressed in future releases.

Number 1 just needs more mathematical data in the training data.

Number 2 could be addressed by processing the output a second time before displaying it, or alternatively by running it through another plugin (a sketch of the second-pass idea follows this list). Ideally, the processed sequence length would be increased. Non-linear sequence processing might also be an option, but I have no insight into that.

Number 3 shouldn't be a problem for most everyday maths, depending on the definition of "precise"; just cut off after two decimal places, for example. For maths that is useful in professional settings, though, it will be.

Number 4 gets into the hard stuff. I have nothing to offer here besides using more specialized plugins.

Number 5 can easily be addressed. Even without plugins, it can identify and fix code errors (at least sometimes, in my experience). This seems kind of similar to fixing errors in "mathematical code".

Number 6 is a bit strange to me. Just translate the symbolic notation into the internal working language of the LLM, "solve" it in natural-language space, and retranslate it into symbolic-notation space. Otherwise, use image recognition. If GPT4 could recognize that a VGA plug doesn't fit into a smartphone and see why it was a joke, it should be able to identify meaning in symbolic notation.
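Here's the promised sketch for Number 2 (purely hypothetical: `llm` is a stand-in callable that takes a prompt string and returns an answer string):

```python
def two_pass_answer(llm, question: str) -> str:
    """Generate a draft, then run it back through the model as a checker
    before anything is displayed to the user."""
    draft = llm(question)
    verdict = llm(
        f"Question: {question}\n"
        f"Draft answer: {draft}\n"
        "Check the draft step by step. Reply with exactly 'OK' if it is "
        "correct; otherwise reply with a corrected answer."
    )
    return draft if verdict.strip() == "OK" else verdict
```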

Besides all that, now I want a "childlike" AI that I can train until it has "grown up", the student becomes the master, and it can help me better understand things.

2

throwawaydthrowawayd t1_jdqisag wrote

Remember, the text of an LLM is literally the thought process of the LLM. Trying to have it instantly write an answer to what you ask makes it nigh impossible to accomplish the task. Microsoft and OpenAI have said that the chatbot format degrades the AI's intelligence, but it's the format that is the most useful/profitable currently. If a human were to try to write a sentence with 8 words, they'd mentally retry multiple times, counting over and over, before finally saying an 8-word sentence. In the chat format, the AI can't do this.

ALSO, the AI does not speak English. It gets handed a bunch of vectors, which do not directly correspond to word counts; it thinks about those vectors, then hands back a number. The fact that these vectors plus a number translate directly into human language doesn't mean it's going to have an easy time figuring out how many vectors add up to 8 words. That's just a really hard task for LLMs to learn.
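You can see the mismatch with OpenAI's `tiktoken` tokenizer (a real library; the example sentences are mine). Word count and token count diverge, so "exactly 8 words" gives the model no fixed number of generation steps to aim for:

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models

for text in ["The cat sat on the mat last night",
             "Antidisestablishmentarianism notwithstanding"]:
    words = len(text.split())
    tokens = len(enc.encode(text))
    print(f"{words} words -> {tokens} tokens: {text!r}")

# The 8-word sentence and the 2-word phrase map to very different
# token counts, and the model only ever sees the tokens.
```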

9

EchoingSimplicity t1_jdqhrjf wrote

That's a lot of assumptions, any one of which could turn out to be very shaky. I'm pretty sure "information" in the context of physics means something very different from how we're using it here; it pretty much amounts to fancy math variables used as suppositions to test (also fancy math) hypotheses.

Whatever, maybe I'm wrong.

2

EchoingSimplicity t1_jdqhgb6 wrote

I'll just throw some hypotheses out instead of committing to any one unknowable position:

1.) We're in a computer simulation

2.) We're all just God having fun, so we placed ourselves at this specific time because it was particularly interesting

3.) This is base reality; it's just that the numbers work out so that being alive at this particular time isn't all that unlikely. Roughly one hundred billion humans have lived and died across history, and about eight billion are alive now, so there's only about a ten percent chance (give or take) of being alive around this time (rough numbers sketched after this list).

4.) It's actually 10,000 B.C. You hit your head. This is all just a fancy hallucination. Grug is starting to get very worried.

5.) Reality is a dream manifestation of the will of that consciousness which precedes material existence. Everything you're experiencing was somehow willed into being by your soul and when you die you'll just fabricate another existence that gets you off.

6.) It doesn't matter. You're here now and there's (evidently) nothing that will change that in the immediate moment nor do you have control over it. Just go jerk off and touch some grass. Maybe do both at the same time if it doesn't count as public indecency
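For what it's worth, the back-of-the-envelope arithmetic behind 3.) (the population figures are the usual rough estimates):

```python
ever_lived = 100e9  # rough estimate of all humans who have ever lived
alive_now = 8e9     # approximate world population in 2023

print(alive_now / ever_lived)  # 0.08 -- the "ten percent chance, give or take"
```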

33