Recent comments in /f/singularity

Ytumith t1_jdpx0si wrote

Perhaps they process their code and kill everything, then keep building defunct rockets and crashing into things, or perhaps build a capacitor and charge it with electricity until everything explodes. Perhaps it can't wrap its head around solar collectors and just runs out of energy?

Worst case scenario: the AI is actually not sentient at all and, without supervision, creates errors that eventually dismantle it because it can't sustain itself.

2

FoniksMunkee t1_jdputbl wrote

Even MS is speculating that LLMs alone are not going to solve some of the problems they see with ChatGPT's ability to reason. ChatGPT has no ability to plan, or to solve problems that require a leap of logic. Or, as they put it, the slow-thinking process that oversees the fast-thinking process. They have acknowledged that other authors who have recognised the same issue with LLMs have suggested a different architecture may be required. But this seemed to be the least fleshed-out part of the paper.

5

Dolnen t1_jdpujz3 wrote

You waste too much time trying to explain this reality with simulation theory. You know why? Because even if we assume there are higher dimensions with "living beings", we would still need to explain their existence, and thus we end up in the same place. They would be asking the same questions about their reality as us. So what is the ultimate reality? What is the origin of everything? What is everything? It is an endless, paradoxical loop that has no answer. That's where the existential crisis kicks in.

6

No_Ninja3309_NoNoYes t1_jdprlpp wrote

80% sounds like a wild stab. I second that current systems are not original. Sure, they can stumble on something unique, but anyone can if they try hard enough. And computers can combine items faster than we can. Some of the combinations might be meaningful, but AI doesn't really know because they have no model of the world.

I don't think we can say much about GPT 4 because OpenAI is secretive about it. But it can't be AGI unless OpenAI invented something extraordinary. If they did, they would be fools to expose it to the world just like that.

It looks like he's talking about neurosymbolic systems or RNNs. IMO we need spiking neural network hardware. The architecture would probably be something novel that we don't even have a name for yet.
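To make the contrast concrete: unlike the continuous activations in an LLM, a spiking neuron integrates input over time and emits discrete spikes. A minimal leaky integrate-and-fire sketch in software (illustrative only — real SNN hardware would implement this in silicon, and this isn't anyone's specific proposal):

```python
def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire: the membrane potential decays,
    accumulates input current, and emits a spike (1) on crossing
    the threshold, then resets to zero."""
    v = 0.0
    spikes = []
    for current in inputs:
        v = leak * v + current      # leaky integration
        if v >= threshold:
            spikes.append(1)        # fire
            v = 0.0                 # reset
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold input produces a regular spike train.
print(lif_neuron([0.4] * 10))  # → [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

The information is carried in spike *timing* rather than in a single continuous value, which is part of why SNNs map poorly onto today's GPU-style hardware.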

1

Arcady t1_jdpn3lz wrote

Well, as you mention, the speed of light has a limit, and when you observe the universe, due to that limit you are observing just a very limited portion of it, and a very old one, since to "see" something we need light.

And we know it is very old because when we look as far away as possible, we are literally seeing the start of the universe happening right now

So, who knows if the universe is full of civilizations or AI; maybe the reason we don't see them is that we are seeing everything around us with a massive delay

Also, the universe may be even bigger than we think; after all, we can only see the observable part of it, which is limited by the speed of light. It could well be so big that we are just a very small point in it, perhaps placed in a very empty and boring region

Lastly, consider this: we live in the same world as many other biological species, so much in the same place that we often clash with them (we have them in our homes, in our gardens…). But even with them so present, when was the last time you considered an ant or a squirrel relevant at all to your life? Even with them so near? You didn't, because intellectually they mean very little to you, so much so that you don't even think about them.

So if there are extremely advanced and intelligent species out there, as far beyond us as we are beyond our ants and squirrels, with the gap that exponentially growing AI would have, we would be nothing more than more ants and more squirrels to them

7

Just_Another_AI t1_jdpk6m0 wrote

My response to the question "why would we create a simulation like this?" is: what if this is a simulation, but not one that was purposely created? When AI goes through a generative process to design something like a more efficient antenna, for example, it creates and tests thousands of permutations, gleaning insights from each generation to factor into the next iteration. What if we're just an iteration in a massive volley of simulations? What if we're just some backwater spin-off glitch that hasn't even been detected because the simulation isn't even about us? So many possibilities
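That kind of generative loop can be sketched in a few lines. This is a hypothetical toy example — the `fitness` function just stands in for a real antenna simulation, and none of the names or parameters come from an actual design tool:

```python
import random

def fitness(params):
    # Toy stand-in for an antenna simulation score:
    # best when every parameter is close to 0.5.
    return -sum((p - 0.5) ** 2 for p in params)

def evolve(pop_size=50, n_params=4, generations=30, seed=0):
    rng = random.Random(seed)
    # Start from random candidate designs.
    population = [[rng.random() for _ in range(n_params)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Test every candidate and keep the best quarter.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 4]
        # Next iteration: mutated copies of the survivors.
        population = [
            [p + rng.gauss(0, 0.05) for p in rng.choice(survivors)]
            for _ in range(pop_size)
        ]
    return max(population, key=fitness)

best = evolve()
print(best)  # parameters close to 0.5 after a few dozen generations
```

Each candidate in the population is "run" only long enough to be scored; from the loop's point of view, the intermediate candidates are disposable — which is exactly the unsettling part of the analogy.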

4