Recent comments in /f/singularity
FomalhautCalliclea t1_jdqmtmy wrote
Reply to comment by [deleted] in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
I'm conflicted about your post.
On the one hand, I like your tag and especially its ending point.
On the other hand, I don't like your written conclusion, since I would have expected the apogee of mankind not to be a celestial Kim Jong Un.
EuroCultAV t1_jdqmt16 wrote
Reply to comment by Shiningc in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
Exactly
We cannot assume this will happen in our lifetimes.
ChatGPT is very interesting, though.
FomalhautCalliclea t1_jdqmjjs wrote
Reply to Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
I answered "other".
The thing is that your reasoning is thwarted by the very form of the question: luck.
Luck is about probability. And in order to assess probability, one has to possess a data set large enough to make comparisons and try to detect patterns (from which we can predict).
But the issue with the question at hand is that we have a data set of exactly one sample: us. We don't have anything to compare it with. It's like drawing one card randomly from a deck and then wondering, after the pick and without knowing anything about the other cards, what the odds were of having picked that card.
That's the main problem behind teleological reasoning (reasoning about the goals and ends of things): it projects confirmation bias from what you have already experienced onto things you haven't, trying to find patterns in the unknown. It's not hard to guess why this can go wrong.
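To make the sample-of-one point concrete, here's a toy sketch (the deck and its colors are invented for illustration): with a single draw from an unknown deck, the only frequency you can compute is the trivial 1/1, so "what were the odds?" has no data-driven answer.

```python
import random

# Toy illustration: a hidden "deck" whose composition we never get to see.
hidden_deck = [random.choice(["red", "blue", "green"]) for _ in range(52)]

# We get exactly one observation -- our own existence, in the analogy.
sample = random.choice(hidden_deck)

# With n = 1 there is no real frequency to estimate: the observed rate of
# our one outcome is always the trivial 1/1, whatever the deck looks like.
observed_frequency = 1 / 1
print(f"drew {sample!r}; observed frequency = {observed_frequency} (meaningless)")
```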
As for luck, here's a Chinese story illustrating the limits of the concept:
A farmer has his only horse flee into the wilderness. His neighbours tell him "oh my, you're really unlucky, this horse was so useful for your work, this is a bad thing!". He answers "Maybe".
The following day, the horse comes back with 5 wild horses. Neighbours say "wow, you're so lucky, you won 5 free horses, this is a good thing!". He answers "Maybe".
The following day, his son tries to tame one of the wild horses, falls and breaks his leg. Neighbours: "oh my, this is really unlucky, your son was such a huge help at the farm, this is a bad thing!". He answers "Maybe".
The following day, war is declared. The king is forcibly mobilizing every young man able to fight. The soldiers see the farmer's son and decide not to take him because of his broken leg. Neighbours: "wow, you're so lucky, your son won't die in the war, this is a good thing!". He answers "Maybe".
Moral: reasoning about unknowns and their consequences on our lives and our very subjective, limited desires is often meaningless.
AdaptivePerfection t1_jdqmij8 wrote
Anyone know what would happen if we modified our brains to increase their intelligence instead of making an ASI? Would we emerge with new emotions or a new comprehension of the universe? Reminds me of the ants-next-to-the-highway metaphor. Why not utilize AI to increase our understanding of our biology so we can make each individual the equivalent of an ASI in intelligence?
[deleted] t1_jdqmcqb wrote
Reply to Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
[removed]
plateauphase t1_jdqmcky wrote
Reply to comment by Professional-Let9470 in The whole reality is just so bizzare when you really think about it. by aalluubbaa
mm, sheer logical conceivability doesn't convince me. why is it not the flying spaghetti monster with its noodly appendages, and why not an infinite regression? why not thousands of other logically possible scenarios?
agonypants t1_jdqmaxu wrote
Reply to Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
OP, I highly recommend watching A Trip To Infinity on Netflix. While the subject is not directly related to AI, the questions you raise are similar to those in the documentary.
ecnecn t1_jdqlr0w wrote
Reply to Why is maths so hard for LLMs? by RadioFreeAmerika
They need to design a Large Arithmetical Symbol Model where it predicts the next combination of arithmetical operators; then the LLM and the LASM could coexist. Just like GPT-4 and WolframAlpha.
datsmamail12 t1_jdqlmzf wrote
Everyone is suddenly talking about sparks of AGI. Whether or not we have it yet doesn't matter; what matters is that we are one step away from achieving it, which is a crazy thing to think about. Some people were so bold in their statements that we might never get AGI that they were willing to bet money on it. But here we are in 2023, hearing from different people that AGI is near. Incredible times!
plateauphase t1_jdqlimg wrote
Reply to comment by AnOnlineHandle in The whole reality is just so bizzare when you really think about it. by aalluubbaa
yeah, it's kind of impossible if not absurdly difficult and jarringly unintuitive under physicalist assumptions. fortunately, the scientific theories are metaphysically neutral, so it's open for alternative interpretations, such as analytic idealism!
basically, physicality is the appearance of mental processes from across a private conscious pov. like the dashboard of dials on a plane, which definitely displays measurements of an external world, physical properties represent the external world, which is not itself physical, but mental. mental just means of the same kind as consciousness, which is all we ever directly know.
this doesn't explain mind in terms of an other existent, but takes mind as the reductive base, exactly like physicalism doesn't explain 'the physical', but takes that as the reductive base. however, while 'physicality' is a perfectly transcendental, non-mental existent, which cannot be experienced and is a metaphysical postulate, not an empirical observation, consciousness, mental processes, experientiality is the only given of nature which we directly and most intimately know.
DixonJames t1_jdqlhou wrote
Reply to comment by acutelychronicpanic in "Non-AGI systems can possibly obsolete 80% of human jobs"-Ben Goertzel by Neurogence
yes. key to ChatGPT is that it can't improve itself - the engineers need to release a new version. When we have an evolving AI, that's when we are really running fast.
RadioFreeAmerika OP t1_jdqlcsd wrote
Reply to comment by turnip_burrito in Why is maths so hard for LLMs? by RadioFreeAmerika
I also don't think it is a weakness of the model, just a current limitation I didn't expect from my quite limited knowledge about LLMs. I am trying to gain some more insights.
RadioFreeAmerika OP t1_jdqky02 wrote
Reply to comment by throwawaydthrowawayd in Why is maths so hard for LLMs? by RadioFreeAmerika
Ah, okay, thanks. I have to look more into this vector-number representation.
For the chatbot thing, why can't the LLM generate a non-displayed output, "test" it, and try again until it is confident it is right, and only then display it? Ideally, with a timer that at some point makes it just display what it has, with a qualifier. Or, if the confidence is still very low, just state that it doesn't know.
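A loop like the one described could look roughly like this; everything here is a hypothetical sketch (`generate`, `check`, and the thresholds are made-up stand-ins, not any real API):

```python
import time

def draft_answer_loop(generate, check, timeout_s=10.0, threshold=0.9):
    """Draft hidden answers until one passes a self-check or time runs out.

    `generate` and `check` are hypothetical stand-ins for the model's
    sampling step and its self-test; only the final draft is displayed.
    """
    deadline = time.monotonic() + timeout_s
    best_draft, best_score = None, -1.0
    while time.monotonic() < deadline:
        draft = generate()    # non-displayed candidate output
        score = check(draft)  # the model "tests" its own draft
        if score > best_score:
            best_draft, best_score = draft, score
        if score >= threshold:
            return draft      # confident enough: display it
    if best_score < 0.2:
        return "I don't know."  # confidence still very low
    return f"{best_draft} (low confidence)"  # timed out: qualify the answer

# Toy usage: the second draft passes the self-check.
drafts = iter(["2+2=5", "2+2=4"])
print(draft_answer_loop(lambda: next(drafts),
                        lambda d: 1.0 if d == "2+2=4" else 0.0))  # -> 2+2=4
```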
imlaggingsobad t1_jdqkq50 wrote
this was a very long-winded way of saying 'simulation hypothesis'. No, you're not crazy; it's pretty popular.
Surur t1_jdqjdyr wrote
Reply to comment by RadioFreeAmerika in Why is maths so hard for LLMs? by RadioFreeAmerika
I would add that one issue is that transformers are not Turing complete, so they cannot perform an arbitrary calculation of arbitrary length. However, recurrent neural networks, which loop, are, so it is not a fundamental issue.
Also, there are ways to make transformers Turing complete.
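One loose way to picture the difference (a toy caricature in plain Python, not a claim about real architectures): a fixed-depth model applies a constant number of update steps no matter the input, while a looping model can keep going until a data-dependent stop condition is met.

```python
def collatz_steps_loop(n):
    """Looping-network analogue: iterate until a data-dependent stop condition.
    The number of steps is not known in advance -- this is what a fixed-depth
    (non-recurrent) pass cannot do."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

def collatz_reaches_one_fixed(n, depth=10):
    """Fixed-depth analogue: only `depth` update steps, like a transformer's
    fixed stack of layers. Fails on inputs that need more steps."""
    for _ in range(depth):
        if n == 1:
            break
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return n == 1  # True only if the computation happened to finish in time

print(collatz_steps_loop(27))         # 111 steps -- far more than any fixed depth
print(collatz_reaches_one_fixed(27))  # False: 10 "layers" aren't enough
```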
TheMadGraveWoman t1_jdqjbnb wrote
Reply to comment by RadioFreeAmerika in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
The said ASI must have an ego.
I hope I won't be punished for saying that.
HumpyMagoo t1_jdqj3cl wrote
GPT4, make a better version of yourself. GPT4, after you make a better version of yourself I will hardwire multiple machines together and also have Virtual Machines and other devices linked, All of you GPT4s work together using combined computing power of all devices to make a better overall version while also upgrading yourselves and recruit more devices through bots online and merge.
gay_manta_ray t1_jdqiyum wrote
Reply to Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
i don't think luck has anything to do with it. can't explain, just a very illogical feeling.
RadioFreeAmerika OP t1_jdqix38 wrote
Reply to comment by Surur in Why is maths so hard for LLMs? by RadioFreeAmerika
Thanks! I will play around with maths questions expressed solely in language. What I wonder about, however, is not the complex questions but the simple ones, for which incorrect replies are quite common, too.
From the response, it seems that while some problems are inherent to LLMs, most can and most probably will be addressed in future releases.
Number 1 just needs more mathematical data in the training data.
Number 2 could be addressed by processing the output a second time before prompting, or alternatively running it through another plugin. Ideally, the processed sequence length would be increased. Non-linear sequence processing might also be an option, but I have no insights into that.
Number 3 shouldn't be a problem for most everyday maths problems, depending on the definition of precise. Just cut off after two decimal places, for example. For maths that is useful in professional settings, it will be, though.
Number 4 gets into the hard stuff. I have nothing to offer here besides using more specialized plugins.
Number 5 can easily be addressed. Even without plugins, it can identify and fix code errors (at least sometimes, in my experience). This seems kind of similar to errors in "mathematical code".
Number 6 is a bit strange to me. Just translate the symbolic notation into the internal working language of an LLM, "solve" it in natural language space, and retranslate it into symbolic notation space. Otherwise, use image recognition. If GPT4 could recognize that a VGA plug doesn't fit into a smartphone and regarded this as a joke, it should be able to identify meaning in symbolic notation.
Besides all that, now I want a "childlike" AI that I can train until it has "grown up" and the student becomes the master and can help me to better understand things.
throwawaydthrowawayd t1_jdqisag wrote
Reply to comment by RadioFreeAmerika in Why is maths so hard for LLMs? by RadioFreeAmerika
Remember, the text of an LLM is literally the thought process of the LLM. Trying to have it instantly write an answer to what you ask makes the task nigh impossible. Microsoft and OpenAI have said that the chatbot format degrades the AI's intelligence, but it's the format that is the most useful/profitable currently. If a human were to try to write a sentence with exactly 8 words, they'd mentally retry multiple times, counting over and over, before finally saying an 8-word sentence. In a chat format, the AI can't do this.
ALSO, the AI does not speak English. It gets handed a bunch of vectors, which do not directly correspond to word count, and it thinks about those vectors before handing back a number. The fact that these vectors plus a number translate directly into human language doesn't mean it's going to have an easy time figuring out how many vectors add up to 8 words. That's just a really hard task for LLMs to learn.
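A toy way to see the mismatch (this is a crude made-up splitter, purely for illustration, not a real tokenizer like the BPE vocabularies actual LLMs use): token count and word count simply aren't the same axis.

```python
def toy_tokenize(text, max_piece=4):
    """Chop each word into crude fixed-size "subword" pieces, so that the
    number of tokens and the number of words diverge -- loosely mimicking
    how real LLM vocabularies split text."""
    tokens = []
    for word in text.split():
        for i in range(0, len(word), max_piece):
            tokens.append(word[i:i + max_piece])
    return tokens

sentence = "unbelievable counterexamples happen"
tokens = toy_tokenize(sentence)
print(len(sentence.split()), "words ->", len(tokens), "tokens")  # -> 3 words -> 9 tokens
print(tokens)
```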
phillythompson t1_jdqib4i wrote
Reply to Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
“I wish it need not have happened in my time," said Frodo.
"So do I," said Gandalf, "and so do all who live to see such times. But that is not for them to decide. All we have to decide is what to do with the time that is given us.”
EchoingSimplicity t1_jdqhrjf wrote
Reply to comment by [deleted] in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
That's a lot of assumptions, any one of which could turn out to be very shaky. I'm pretty sure 'information' in the context of physics means something very different from how we're using it, and pretty much amounts to fancy math variables used as suppositions to test (also fancy math) hypotheses.
Whatever, maybe I'm wrong.
hypnomancy t1_jdqhlrw wrote
Reply to comment by dex3r in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
If we're all able to live forever thanks to AGI and ASI making us immortal, or at least letting us live for hundreds or thousands of years, we honestly don't even have to worry about reproducing anymore.
EchoingSimplicity t1_jdqhgb6 wrote
Reply to Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
I'll just throw some hypotheses out instead of committing to any one unknowable position:
1.) We're in a computer simulation
2.) We're all just God having fun, so we placed ourselves at this specific time because it was particularly interesting
3.) This is base reality, it's just that the numbers work out so that being alive at this particular time isn't all that unlikely. One hundred billion humans have lived and died across history, so there's only a ten percent chance (give or take) of being alive around this time.
4.) It's actually 10,000 B.C. You hit your head. This is all just a fancy hallucination. Grug is starting to get very worried.
5.) Reality is a dream manifestation of the will of that consciousness which precedes material existence. Everything you're experiencing was somehow willed into being by your soul and when you die you'll just fabricate another existence that gets you off.
6.) It doesn't matter. You're here now and there's (evidently) nothing that will change that in the immediate moment nor do you have control over it. Just go jerk off and touch some grass. Maybe do both at the same time if it doesn't count as public indecency
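The back-of-the-envelope number in 3.) checks out, using the post's own round figures (~100 billion humans ever, ~8 billion alive now):

```python
# Rough anthropic arithmetic for hypothesis 3: what fraction of all humans
# who have ever been born happen to be alive right now?
ever_lived = 100e9  # the comment's round figure for humans across history
alive_now = 8e9     # current world population, roughly
p = alive_now / ever_lived
print(f"chance of being alive right now: {p:.0%}")  # -> 8%
```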
StopLookListenNow t1_jdqn0it wrote
Reply to Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
Let all models be genderless greys, their faces blurred out, unrecognizable. One size, one alien, fits all. All shall wear flowing robes that blur out our body shape. ~s