Recent comments in /f/singularity
Lartnestpasdemain t1_jdqh70b wrote
Reply to comment by YaAbsolyutnoNikto in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
Could be, you're right.
If that's the case, giving birth will be declared a crime against humanity though, and most probably punished by death or exile to other planets. So wait and see 😌
RadioFreeAmerika OP t1_jdqgvof wrote
Reply to comment by turnip_burrito in Why is maths so hard for LLMs? by RadioFreeAmerika
There's something to it, but they still fail at the simplest maths questions from time to time. So far, I haven't gotten a single LLM to correctly write me a sentence with eight words in it on the first try. Most get it right on the second try, though.
turnip_burrito t1_jdqgloh wrote
Reply to Why is maths so hard for LLMs? by RadioFreeAmerika
GPT-4 is actually really good at arithmetic.
Also, these models are very capable at math and counting if you know how to use them correctly.
Akimbo333 t1_jdqg9nt wrote
Reply to comment by [deleted] in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
Wow now that's intense!!!
flamegrandma666 t1_jdqfypr wrote
Reply to comment by beezlebub33 in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
It's not just Christians; lots of other religions are eschatological. And the argument ultimately doesn't hold water: the 2nd-century Gnostics would say exactly the same thing (that they had the information and knowledge about the impending doom, as opposed to anyone else).
I've always had an issue with the notion of a technological singularity, as it's simply not how emergence works. Emergence (from systems theory) is a phenomenon built on a preceding layer or substrate of reality. A/GI is emergent from computer networks and data, but it will not converge, or "lift", things from previous layers. E.g., go to Mongolia and hang out with villagers there. Ask them if they think we're about to hit the singularity....
Surur t1_jdqfxw6 wrote
Reply to Why is maths so hard for LLMs? by RadioFreeAmerika
I asked ChatGPT:
Large language models, like GPT-4, are primarily designed for natural language processing tasks, such as understanding and generating human-like text. While these models can perform some mathematical operations, they have inherent limitations when it comes to solving complex math problems. Here are some reasons why large language models might struggle with math:
1. Limited training data: The training data for large language models primarily consists of text from the internet, which contains fewer math problems and solutions compared to natural language content. This means the models have fewer opportunities to learn mathematical reasoning and problem-solving skills.
2. Sequential processing: Language models read and process text in a linear, sequential manner, making it challenging for them to handle complex mathematical problems that require multi-step reasoning or manipulation of symbols and equations.
3. Inability to represent precise numerical values: The token-based nature of language models can make it difficult for them to represent and manipulate exact numerical values, especially when dealing with very large or very small numbers.
4. Lack of specialized mathematical knowledge: While large language models can learn general mathematical concepts, they lack the specialized knowledge and techniques required to solve advanced mathematical problems, such as those found in higher-level calculus or abstract algebra.
5. No built-in error-checking: Language models are designed to generate text that sounds fluent and coherent, but they do not have built-in mechanisms to verify the correctness of their mathematical solutions.
6. Inability to handle symbolic notation: Language models can struggle with the manipulation of mathematical symbols and expressions, which often requires a deeper understanding of mathematical structure and logic.
These limitations make large language models less suited for advanced mathematical tasks. However, they can still be useful for simple arithmetic, understanding math-related natural language queries, or providing approximate answers. For more complex math problems, specialized tools and software, such as computer algebra systems (CAS), are more appropriate.
I think 2 and 3 are the most significant.
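Point 3 is easy to demonstrate with a toy example. Real subword vocabularies (BPE and friends) are learned from data, but the effect below is similar: a multi-digit number gets split into arbitrary chunks that don't line up with its decimal digits. The vocabulary here is hypothetical, not any real model's.

```python
# Toy greedy longest-match tokenizer with a made-up subword vocabulary.
VOCAB = {"12", "34", "345", "6", "678", "9",
         "0", "1", "2", "3", "4", "5", "7", "8"}

def tokenize(text: str) -> list[str]:
    """Greedily match the longest vocabulary entry at each position."""
    tokens = []
    i = 0
    while i < len(text):
        for length in range(len(text) - i, 0, -1):
            piece = text[i:i + length]
            if piece in VOCAB:
                tokens.append(piece)
                i += length
                break
        else:
            raise ValueError(f"no token for {text[i]!r}")
    return tokens

print(tokenize("123456789"))  # → ['12', '345', '678', '9']
```

The model never "sees" the number 123456789; it sees four unrelated chunks, which makes digit-level arithmetic much harder than it looks.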
Akimbo333 t1_jdqf7jp wrote
Reply to comment by scooby1st in Ai invention….. coming soon by Ishynethetruth
Yeah
beezlebub33 t1_jdqezp3 wrote
Reply to comment by flamegrandma666 in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
I recognize your point, and try to withhold my enthusiasm. But this really does feel different. I don't know if it will be the same sort of 'different' that the introduction of PCs, the internet, and cell phones caused, but it feels even more 'different' than that.
The world has changed a lot in the past 50 years, technologically, economically, environmentally, and socially. The widespread use of LLM AIs is going to change it again; the only question is how much and how fast. I think more and faster. And whatever comes next is going to change it even more and even faster.
(And, as someone else pointed out, the difference between us and Christians is that we have data and can make plots of what has changed)
No_Ninja3309_NoNoYes t1_jdqey8j wrote
Reply to Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
All living things maintain pockets of negentropy: we create very specific low-entropy thermodynamic states in order to exist. Locally this runs against the second law of thermodynamics, meaning that other systems have to compensate, as it were. But luckily the universe is mindbogglingly, super astoundingly vast, so that's no biggie.
beezlebub33 t1_jdqecpn wrote
Reply to comment by 3xplo in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
Simply a variation of the lottery fallacy: Something unlikely but wonderful happened, therefore I must be special.
But logically, someone must eventually win, and it's just blind luck which one it is. You are a bundle of shortcuts, rough approximations, and biases that just happens to do well in the world. Let's hope that our progeny can do better.
RadioFreeAmerika OP t1_jdqecil wrote
Reply to comment by 21_MushroomCupcakes in Why is maths so hard for LLMs? by RadioFreeAmerika
Yeah, but we can't be trained on all the maths books and all the texts including mathematical logic, and from there develop a model that lets us do maths by predicting the next word/sign.
OsakaWilson t1_jdqe3wa wrote
Reply to Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
Either luck or you believe that your birth during this time was something other than random chance.
"No, it's not luck..." is addressing what led this to happen now, not the chances of us being here.
This is not a coherent selection of answers.
YaAbsolyutnoNikto t1_jdqe2fb wrote
Reply to comment by Lartnestpasdemain in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
Why are you talking about “future” humans? If the singularity happens, there's no reason we won't be those future humans, thousands of years from now, living those things.
OsakaWilson t1_jdqdwl2 wrote
Reply to comment by [deleted] in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
...2018. Let's spend the last few years swimming in Bitcoin.
OsakaWilson t1_jdqdp59 wrote
Reply to comment by [deleted] in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
Not sure if I am up voting a funny parody, a reference to a sci-fi book I missed, or a religious nut case.
21_MushroomCupcakes t1_jdqdh4k wrote
Reply to Why is maths so hard for LLMs? by RadioFreeAmerika
We're kinda language models and we're often bad with math, and they didn't grow up having to spear a gazelle.
RadioFreeAmerika t1_jdqd3cw wrote
Reply to Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
Let's assume that our reality is an ancestor simulation. Maybe conducted by an artificial superintelligence. What would be the most interesting parts of history to simulate? Many would argue this to be the time up to the inception of the ASI.
flexaplext t1_jdqcvbp wrote
Reply to comment by whyohwhythis in The whole reality is just so bizzare when you really think about it. by aalluubbaa
I went into this in a reply to someone else and reached the conclusion that it would likely be one with more spatial dimensions. Our universe has a low number of spatial dimensions, which is why I think this would likely be true if we are indeed a simulation: the resource cost of running a simulation like ours would then not be as high for them.
But yeah, I went more into this in my other comment(s) in this thread.
OsakaWilson t1_jdqcu3k wrote
Reply to comment by EthanPrisonMike in What do you want to happen to humans? by Y3VkZGxl
Capitalism will no longer function to distribute wealth throughout society. Whatever emerges in its vacuum will look more like socialism than anything else. We won't need to code it; it will be the socio-economic system that is compatible with the technology. The only alternative to varieties of socialism will be absolute totalitarianism.
trancepx t1_jdqcpnb wrote
Reply to comment by Austinsmakingstuff in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
Information is a human cultural tool, a symbol exchanged to represent other human cultural tool symbols. If information is compared to light, an image or "concept" is made of lots of smaller photons, parts of a whole... Energy has many forms, light being one form we can perceive...
Mirrors can normally reflect clearly the light bouncing off other mirrors, so information, like one of the vessels we are accustomed to (light), can of course be coherent or decoherent... out of grasp. The ability to guess what our eyes are seeing, and to form coherent combined images, or other sensations, are all phenomena which seem to emerge from our biology. Every human phenomenon means what it means to us because of our own autodidactic conceptualization of it. We exist as we do on our own perceptual highways and HOV lanes, and trying to conceive how concepts are conceived otherwise turns into a mechanism / phenomenology / epistemology arms race, or a non-issue, depending on your beliefs. One thing's for certain: forms exist, and many concepts take on unique forms... but the differences go from linear divergence to exponential. There may very well be no absolute quantities or exact laws to math or physics. Or perhaps the very nature or flow of time or causality may be constantly amorphous, for all we really know... For now, though, keeping it simple seems to be the least energy cost for fuzzing out solutions to all these objectively arbitrary quandaries we may have. Sure, it might be important how physics works here, but what if we find out that time, reality, and their forms don't adhere to simple consistent constraints like we think they do? Things behave differently in different conditions. They might almost always behave the same way in the same conditions, but change anything, from motion, temperature, pressure, charge, surrounding composition, etc. Maybe if you travel far enough in one direction you arrive somewhere where reality operates differently. Good luck defining that, or even beginning to forge the language tools necessary to describe what happens in such a change of location...
Well, that would probably be dismissed, because it attacks the collective hubris of man and our struggle to make sense of the world... Our combined desire for understanding greatly outpaces our ability to satisfy it at any given point, and the world might not be as static or concrete as we would like it to be at times, as well as the opposite at other times... Like seems to reflect like, but too much of the same thing... well, you get the picture, or in that case, the lack thereof. Discernible form?
flexaplext t1_jdqcnej wrote
Reply to comment by Just_Another_AI in The whole reality is just so bizzare when you really think about it. by aalluubbaa
There's got to be some benefit, though, that is worth the probably immense resources required to run such a simulation.
I went into all this more in my other comment in this thread, but I came to the conclusion that it would likely need to come from a reality with more spatial dimensions, so the resource cost would not be extreme. In that case, I concluded, we could very well be a byproduct or something, or a source of simulated data / intelligence.
[deleted] t1_jdqcg62 wrote
[deleted]
flexaplext t1_jdqc5lf wrote
Reply to comment by Dolnen in The whole reality is just so bizzare when you really think about it. by aalluubbaa
That's what I wrote about, before it got deleted. Plus other things.
It's not a waste of time for me, because it's interesting to ponder. I never subscribed to simulation theory myself, because it seems incredibly difficult to simulate a reality as vast as ours, and it would take incredible amounts of knowledge and power. I'm not sure it is ever a possibility for any beings, and I don't believe it will ever be possible for us. It is, however, kind of important to know whether it's possible, so this is where it does apply to us: such technology would obviously hold so much power that there would be an incentive to create it if it is possible.
But onto what I wrote: an upper reality could want to simulate their own reality in order to see how their future would play out; that could have huge value. It's the only thing that made sense to me as worth the huge investment of resources. But then, in that instance, their reality would just be the same as ours, so what's to say that this isn't just the base reality? Even if the reality that simulated us was itself simulated for the same reason, at some point up the chain there has to be a base reality that's the same as ours. So who's to say we aren't just it? Since we're admitting that such a base reality can arise purely from the natural laws of physics in that base reality.
The really funny thing is that we wouldn't really consider this kind of simulation a possibility until we did the exact same thing ourselves, i.e. we ourselves (or our AI) tried to make an exact replica simulation of our universe. However, funnily enough, there would be a huge incentive for humans not to do that, because doing that very thing would be like accepting that we are ourselves a simulation, as that would become inevitable.
So there would strangely be pushback against doing it to begin with. The weirdest part being: if we're an exact simulation replica of an upper reality of humans, we can actually control exactly what actions they take (as we'd be identical to them). So, by not making this replica simulation ourselves, we stop the possibility of them making one, to the point that it never happened to begin with. People won't want us to be a simulation, out of feelings of inadequacy and existential dread, so they wouldn't allow such a simulation to happen.
It is possible, I guess, that the AI could want to do it on its own initiative, though, if it does not care about such things and just wants to find out the truth about its own reality, or to gain knowledge about it.
Such a simulation may not really help it gain any knowledge about its future, though. Because as soon as you get to the point in the simulation where you are at in your own time (i.e. the point of running the replica simulation), so will the humans / AI in your simulation, as they would wind up doing the exact same thing as you. They would then react in turn to the outcome of their own simulation, who would react to the outcome of their own simulation, etc., etc., in an infinite spiral of Inception-style identical replica simulations. You would then wind up making the exact same decisions as your simulation of reality, or getting into some loop pattern of reaction that doesn't actually help you at all, because the reaction is tainted. At this point, you may as well just live out your own history and not create such a simulation to begin with.
The other possibility is that the replica simulation is instead run to learn about the history of their world. That would assume, though, that you don't necessarily need to already know your exact history in order to run such a simulation, and also that it were possible to probe information from the simulation without affecting the simulation and its outcome through your probing. But even granting all that, such a simulation would come with its own quirks.
What are they going to do when the simulation reaches the point in their own history where it, in turn, creates its own simulation of its history? Just turn it off? As history is all that matters to them. But turning it off would potentially imply that they themselves are not the base reality, as they would likely be a simulation created by an upper reality trying to find out about its own history, just as they are planning to do.
But again, this wouldn't happen, for exactly this reason. The stakes would be even higher here: creating such a situation would immediately raise the risk that our own simulation, and our entire universe, is about to be turned off by the upper reality that simulated us for the same reason of learning our/their own history.
However, even in this case, there would still need to be a base reality that wouldn't actually get destroyed, and it would necessarily be either our own reality or the reality that is directly simulating us. It couldn't be any other, because in the simulation we are making, we turn it off before any more simulations are created, thus ending a potential infinite loop of simulations and leaving just the two: us or them.
That's just really fucked up and weird to think about. This idea can also fall into essentially all the pitfalls of time travel if you try to detect whether you are the base reality or not, as you are, in essence, enacting time travel by simulating a replica of your own past.
There is one final possibility along the simulating-history line, though. What if we come to some inevitable, inescapable death of our world, like the sun exploding? Then the reason humanity creates the simulation at that point is so they can be born again (in simulation form) and their simulations can live out their lives again, an identical, wonderful life like the one they have enjoyed (likely a utopia at that point). But they need to simulate the entire history of the universe up to that point, and that's us now, in order for humanity to reach that inevitable point of utopia again and for them to exist again in replica simulation form.
In that instance, they would know they are likely a simulation themselves but they wouldn't care because they're about to die anyway and they want the simulated beings just to exist and enjoy life like they have done. Knowing in turn that we (their replica simulation) will get to the same point too and make the same decision to make our own simulation at that point in our future (their present). Thus, indeed, creating an infinite loop of simulations that never ends, leaving life on earth always recreated and relived and enjoyed. Even if it's always identical every time and they have no knowledge of it themselves, they just want the simulated beings to end up existing and enjoying a utopia as they have done themselves.
That is incredibly strange to think about, but it is potentially something that could happen if such technology is possible (which is still a big if in my opinion). Humans are weird like that and may consider doing it as they wouldn't be losing anything at that point. They're going to die anyway and they will have already worked out that they are themselves likely a simulation just repeating history but already accepted that as being true and are fine with that and just happy to experience the pleasure.
The weird thing, still, is that there would necessarily need to be a base reality up the chain, but that reality would be one in an infinite number of simulations, and it would never know that it is the special one: the genuine base reality that actually started it all.
Ro1t t1_jdqc2kl wrote
Reply to comment by Villad_rock in "Non-AGI systems can possibly obsolete 80% of human jobs"-Ben Goertzel by Neurogence
No it doesn't at all; that's just how it happened for us. It's equivalent to saying the only way to store heritable information is DNA, or the only way to store energy is carbs and fat. We literally just don't know.
turnip_burrito t1_jdqhcoi wrote
Reply to comment by RadioFreeAmerika in Why is maths so hard for LLMs? by RadioFreeAmerika
I'd have trouble making a sentence with 8 words in one try too if you just made me blast words out of my mouth without letting me stop and think.
I don't think this is a weakness of the model, basically. Or if it is, then we also share it.
The key is this: if you think about how you, as a person, approach the problem of making a sentence with 8 words, you will see how to design a system where the model can do it too.
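That design can be sketched as a simple generate-and-check loop: draft a sentence, count the words, retry if the count is wrong. The `fake_llm` below is a stand-in for a real model call, not any actual API, so the only part doing real work is the counting-and-retry wrapper:

```python
import random

def fake_llm(prompt: str) -> str:
    """Stand-in for a real LLM call: returns a sentence of random length.
    In practice this would be an API call to the model."""
    words = ["the", "cat", "sat", "on", "a", "warm", "red",
             "mat", "today", "quietly"]
    return " ".join(random.choices(words, k=random.randint(5, 11))) + "."

def sentence_with_n_words(n: int, max_tries: int = 200) -> str:
    """Draft, count, and retry until the sentence has exactly n words."""
    for _ in range(max_tries):
        draft = fake_llm(f"Write a sentence with exactly {n} words.")
        if len(draft.split()) == n:
            return draft
    raise RuntimeError(f"no {n}-word sentence in {max_tries} tries")

print(sentence_with_n_words(8))
```

This is essentially the "let me stop and think" step from the comment above, made external: the model drafts, a trivial verifier counts, and only a passing draft is returned.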