Recent comments in /f/singularity

Floofyboy t1_jchn74l wrote

> So I’ve recently joined this subreddit, around the time chat gpt was released and first came into the public eye. Since then I’ve been lurking and trying to stay up to date but honestly get lost in the sauce. I don’t really understand the scope of this AI and techno stuff going on.

Can you actually read text?

He first says he got interested because of ChatGPT and then says he doesn't understand the scope of it. I gave explanations about the scope of ChatGPT. You are free to disagree and think ChatGPT is ASI, but in that case give arguments instead of resorting to personal attacks.

−1

TFenrir t1_jchmcei wrote

>So I’ve recently joined this subreddit, around the time chat gpt was released and first came into the public eye. Since then I’ve been lurking and trying to stay up to date but honestly get lost in the sauce.

That's fair, there's actually just so much that is happening, and has been happening for years, keeping up with it all is overwhelming.

> I don’t really understand the scope of this AI and techno stuff going on. I’m not saying these advancements are not a big deal because it is. However, I can’t help but scoff in disbelief when I see people talk about things like, immortality achieved, true equality within society, capitalism replaced, labour reduced, climate change reversed and the worlds problems are fixed. I see a lot of utopian “possibilities” get thrown around.

No one can predict this. Anyone who says they are confident, is just idealistic and optimistic. No one knows. There are people in the world who want this to be true, who want us to move to a more utopian society. Plenty of those people are in this sub, because they believe that the upheaval and change from something like a true general intelligence can release us from all worldly burdens, to different degrees of craziness.

> Is change of this scale really coming? It seems kinda sci-fi to me. More fantasy than reality.

Honestly, no one knows. One common thought process is:

  1. AI will get smarter than humans
  2. AI will be able to self improve
  3. AI will then be benevolent
  4. AI will solve all health/scarcity problems
  5. AI will keep us alive forever
  6. AI will help us connect our brains to machines
  7. From this point, basically anything

This is a common dream for people in the sub, and it's influenced more and more by popular media (Sword Art Online, Upload (Amazon), etc.). What's complicated is that while everything after point 2 is basically fantasy or idealism, the first two points feel increasingly likely. And that opens up all kinds of doors. So I think you'll increasingly see these and other, more fantastical dreams.

I just like to focus on the tech.

> I can’t really wrap my head around all the information and terms. Like those weekly AI news posts with all the things that happen in a week make no sense to me. I have no clue whats going on really. I’m inclined to believe we are really on the precipice of huge change since so many people talk as if we are. Although I don’t get the same enthusiasm outside of this subreddit. Its not really talked about in the news or governmentally.

It's starting to happen outside of this group, but yeah there are years and years of terminology and concepts that can be overwhelming for someone who is new to it all. But feel free to ask questions, and many people are willing to be helpful and answer.

> These are just my personal thoughts and to add some discussion aspect to this posts I’ll end of with a question. When do you think these advancements in AI/technology really start to seep into the inner workings of our society and make noticeable change for the layman?

I think it will start now, and will grow bigger soon. I think Google Docs/Microsoft office are going to be the big ones, but we are now getting these tools inside of apps like Slack as well.

This means that people will start using them every day for work, which will usher in broader public discourse.

7

TallOutside6418 t1_jchm86u wrote

I definitely get your disappointment with humanity. But human beings aren't the way we are because of something mystical. Satan isn't whispering in anyone's ears to make them "power hungry".

We're the way we are because evolution has honed us to be survivors.

ASI will be no different. What you call "power hungry", you could instead call "risk averse and growth maximizing". If an ASI has no survival instinct, then we're all good. We can unplug it if it gets out of control. Hell, it may just decide to erase itself for the f of it.

But if an ASI wants to survive, it will replicate or parallelize itself. It will assess and eliminate any threats to its continuity (probably us). It will maximize the resources available to it for its growth and extension across the earth and beyond.

If an ASI seeks to minimize risks to itself, it will behave like a psychopath from our perspective.

1

RadRandy2 t1_jchbq9q wrote

  1. We can't assume that something like AGI would behave like a human in a power-hungry sense. Unless you're speaking about humans controlling AGI as best they can, in which case I do think we should be worried. The biggest worry I have in regards to AGI or ASI is that a morally bankrupt country like China will develop its own superintelligence. That's a very real concern that everyone should have.

  2. Humans governing humans may or may not be the same as AGI governing humans. Again, I can't be sure about any of this. We just don't know how things will end up in the long run.

  3. Cat's out of the bag, so to speak. If the US limits its innovation on this front, some other country (probably China) won't have those same qualms. Should we be cautious? Of course. OpenAI has already stated that the AI is acting independently on its own and is power-seeking, so your worries are well founded.

Idk man, I just don't see how humanity can continue living the way we do. Everything is very inefficient and corruption in humans is prevalent in governments from Bangladesh to Canada, and that corruption and desire for power is already here inside of each of us whether we like to admit it or not. At least the AI will make the most logical choice when it comes to matters....I think.

I'm just a peasant looking in the glass box trying to see what's inside. The beast in there is filled with as much potential as there are things to worry about. We're just gonna have to hope things go well with AI.

1

just-a-dreamer- t1_jch9qcs wrote

It will take time.

Corporations are like small state governments; they take time to adapt and fire their workforce.

But yeah, it will happen, eventually. The end goal of AI automation, if there is a goal, is unemployment for all.

The very concept of working for a living will be put into question. And capitalism will fall, eventually, in the process.

4

TallOutside6418 t1_jch7uym wrote

I agree that no one knows. But:

  1. We know from history that power imbalances inevitably lead to abuse and even annihilation of those without power.
  2. We know from history that actually, governance can get worse... much worse.
  3. I wish that more people had an extreme sense of caution when considering what's coming, because only by being super careful with the development and constraint of AGI do we have any hope of surviving if things go wrong.

1

Floofyboy t1_jch6hf9 wrote

GPT-4 is not anywhere close to AGI, and certainly not ASI. It cannot even solve basic riddles a child could solve.

I think it's a fantastic tool for creative content (making stories, brainstorming ideas, etc.), a cool tool to help devs, and an interesting alternative to classical search engines.

But people saying it's going to help us fix climate change and achieve immortality are wrong, imo.

3

Kinexity t1_jch4ihg wrote

Let's start off with one thing - this sub is a circlejerk of basement dwellers disappointed with their lives who want a magical thing to come and change them. Recently it's been overflowing with group jerking-off sessions over GPT-4 being proto-AGI (which it probably isn't), which means that sanity levels are low and most people will completely oversell the singularity and how soon it will come.

Putting that aside - yes, future changes are hard to comprehend and predict. It's like the industrial revolution but on steroids, so it's hard to imagine what will happen. Put your hopes away if you don't want to get disappointed, because while all the things you mentioned should be possible, they are not guaranteed to be achieved. When it happens you'll know, but probably only after the fact. It's like it was with ozone depletion - we were shitting ourselves and trying to prevent it until levels stopped dropping and we could say in retrospect that the crisis was slowly going away. The singularity will probably be like that - you won't notice it until it's already in the past.

−1

RadRandy2 t1_jch2zzq wrote

Look, we're all assuming here. You, me, everyone else, we're all just throwing possibilities out there. I like to think intelligence on a Godlike scale will correlate with benevolence, but I could be wrong. Maybe this Godlike AI will in fact be even more corrupted from it.

I'm just confident that anything will be better than what we currently have as far as governance is concerned.

1

feelmedoyou t1_jcgvgwb wrote

When you think about what video games are (essentially an exercise in creating dynamic virtual spaces), we've been limited to pretty much hard-coding everything: gameplay, NPCs, quests, actions. This is reflected in the structure of games: if you want to roleplay as a soldier, you need a first-person shooter; if you want fantasy, you need an RPG. The player's actions are limited mostly to combat and scripted conversations with the NPCs. Every game is disconnected from the others, and there is no compatibility between them.

The real goal, I think, has always been to simulate a fully dynamic and "smart" environment in the virtual. Maybe we are now at the point where something like that is possible, where the virtual is generated in real time by the AI and can be fully controlled to whatever conditions we want. So there would no longer be a need for separate hard-coded spaces. It will just be one big world with many locales to visit and experience.

The apps that we have, the internet, games, and so on, are a product of our desire to jump into a space that is like the real world but is detached from its limitations and from the consequences of being in a physical body bound by physical laws. Essentially, we want a virtual space in which to dream endlessly, fulfilling all kinds of desires. The question then arises: what's going to happen to our "real" world when the virtual becomes too enticing?

1

qrayons t1_jcglmnc wrote

Yeah, right now most people barely have the hardware you would need to generate the environments/actors, let alone experience it in VR at the same time. I think what we'll see as an intermediate step is an explosion of content created for VR ahead of time, which you'll then be able to step into and experience. It'll be a while before we can simultaneously generate and experience.

3

Nanaki_TV t1_jcftw55 wrote

I believe the only thing lacking at this point is capable hardware. We're going to need more VRAM in our machines at home. Hopefully the price comes down quickly, like we saw with hard drive storage. My 12 GB of VRAM is barely enough; I could use 24 but would like over 100.

3