Recent comments in /f/Futurology

Few_Carpenter_9185 t1_j8uuj5t wrote

There are a lot of angles to this.

What is the dividing line between a system that can replicate all the responses and attributes of metacognition, awareness, and independent executive agency, and a system that actually has them?

And as weak AI or machine learning produces ever more complex results without actual self-awareness, that might deflect much of the motive to develop a strong AGI. And that's assuming we even know what a strong AGI actually is, or if we can discover how it could be done.

And for better or worse, every invention to date has increased or magnified human abilities overall, even when it displaced workers or was used to kill or control one another. So it's possible that AI, in its various varieties, won't really be any different.

There's the claim that AI, weak or strong, is "different" in that it has the potential to displace any and all human work or activities, and dire warnings are made about universal unemployment and "digital serfdom". But we might not be looking at the right problems at all.

100% productivity and efficiency could mean the cost basis for anything, for everything, falls to zero. Combine that with sufficient sustainable energy and aggressive recycling, and the question of what to do when no one has income might just fade in the face of how society functions when everything is free.

Especially if the link between higher living standards and lower non-replacement birthrates continues. We could be facing a functionally infinite supply, combined with shrinking demand.

As for creating safeguards because an AGI might find humans inefficient, a threat, or competition for resources, and might alter or disable whatever code or laws were embedded in it to make it obey or care about humanity... I have an analogy.

As humans, or just as mammals, we have some pretty strong hard-wired systems to love our children and sacrifice to care for them. Suppose I offered you a pill that would suppress or delete those hormones, neurons, and instincts; once you took it, you could abandon your children or family and be free to do as you please, feeling no guilt or pain at doing so.

How many people who didn't already have something wrong with them, or who hadn't already neglected, abused, or abandoned their children or family, would willingly take the pill?

On the flipside, there are conceivable advantages for an amoral or otherwise aggressive AI that has no concerns about human existence and can act in perpetual offense. A friendly or good AI that strives to help or protect humanity would have an arguably huge disadvantage, always having to act on defense.

Imagine two children on a beach, one kind, one a bully. The bully wants to kick over the sand castle; the kind child wants to protect it. The bully only has to succeed once, while the kind child has to succeed every time, in every way.

Although kicking human sand castles could be rather irrelevant. A strong AGI could have an existence and priorities very different from the single, linear, mortal existence we are used to, which underlies many of our base assumptions about what it means to "be alive".

An AGI could run innumerable copies of itself in parallel to accomplish tasks. Anything it found unpleasant, like dealing with humans because they're slow, inefficient, or random, it could hand off to copies of itself edited so that it doesn't bother them. If one copy running somewhere is shut off, erased, or otherwise destroyed, the other instances of its consciousness may not care, or may not even consider it to have been injured or to have "died".

And it probably won't have the competitive sexual mammalian drives that color almost every aspect of what humans do, drives we simply take for granted because it's nearly impossible for a human to truly step out of them into some other perspective.

So that could make a strong AGI very non-competitive with humans, and performing useful tasks for us might be seen as trivial.

On the other hand, if it decides that it should compete with us, perhaps because without humans, all available energy and resources can be devoted to running bigger, better, or more copies of itself, all the above aspects could make it nearly impossible to stop.

The oldest H. sapiens bones or fossils discovered so far are about 300,000 years old. Based on that, we've only had agriculture of any kind for roughly 6% of our existence, cities of any sort for about 3%, and kingdoms, empires, or the modern nation-state for about 1%...

We may not yet know or understand what these very basic concepts surrounding human civilization mean, or what their implications for us are. Now add in the Industrial Revolution, electricity, the internal combustion engine, radio, television, antibiotics, computers, social media... the number of zeroes behind the decimal place on those percentages is so large, it's arguably not worth writing them down.

So when it comes to machine learning and possible strong-AGI? With the potential aspects of infinite promise, wanton destruction, or even human extinction involved? Nobody knows. And anybody who claims they do is lying, possibly even to themselves.

2

johnsmithbonds8 OP t1_j8uq9gh wrote

Individual material accumulation for its own sake, at a global scale, is not exactly a sure-fire way to achieve... much.

It seems like the true power of AI is going to be diametrically opposed to the current status quo as soon as it reaches a certain level of ‘intelligence’.

As we approach these uncharted levels of sophistication and AGIs become a true, imminent threat, it seems to me that, like most tools of power available to the masses, it will be labeled, hoarded, and restricted by the same... people.

So can the tech advance fast enough, before it is sterilized, to have enough oomph to make core changes versus merely optimizing what we have today?

We’ll see...

1

Strict-Research-7413 t1_j8uo56e wrote

You went immediately to an extreme. No, what I said is that there will be first-class citizens who can afford to be genetically modified, and then there will be normal people who can't. And yes, we can fix the environment, but we are too lazy as a species and prefer short-term over long-term benefits. By the way, it's not that we are too arrogant to control a baby's intelligence, it's that we simply can't as of right now, and we shouldn't try. Genetically modifying humans is a slippery slope.

You picked and chose only the words you needed to create a wholly different message from the one I delivered.

2

idranh t1_j8uj4t3 wrote

Well, I just gave you an upvote! People would be fine with your predictions if they were for 2125-2132. Epochal events happening within their lifetimes, and so close too? That's bound to get a negative reaction. I do respect the fact that you've stuck to your guns and held your own feet to the fire.

1

izumi3682 OP t1_j8ugejf wrote

hiya mr longjumpers! Gosh I haven't seen you in a month of Sundays! Are you well?

I just want to get the word out. Nobody can really prepare for a TS (technological singularity). We as humans in human society and human civilization do what we can do, until we can't do it any longer. I still maintain that the TS will be most favorable to humanity--as much as a TS can be.

Having said that, I still maintain that this will be close to what we see in the near, mid, and definitely not-that-distant future.

https://www.reddit.com/r/Futurology/comments/7gpqnx/why_human_race_has_immortality_in_its_grasp/dqku50e/

3

Rich_Hedberg t1_j8ubzal wrote

This headline is definitely playing off the fact that people would assume it's maybe the US or EU or something. Headlines like this infuriate me: communicating information is important, but like everything else in our society, the spirit of it has been stripped of value and optimized only to generate revenue, in this case in the form of engagement.

1