Recent comments in /f/singularity

ninjasaid13 t1_jcsm23n wrote

>Lastly, the idea that AGI will be coldly logical and robotic like Spock is dumb. Emotions are a form of intelligence. We have them for a reason - they are useful. If they weren’t useful, evolution would have selected for Spock, not emotions, in mammals.

Emotions made sense in evolution, which produced multiple intelligences that had to cooperate to survive in a hostile environment. Not so much in an artificial intelligence created in a lab.

4

ReadSeparate t1_jcsi6oz wrote

Agreed. The proper way to conceive of this, in my opinion, is to view it purely through the lens of value maximization. If we have a hypothetical set of values, we can come up with some rough ideas of what an ASI might do if it possessed such values. The only other factor is capabilities, which we can assume amount to the ability to maximize or minimize any set of constraints - values, resources, time, number of steps, computation, etc. - in the most efficient way allowable within the laws of physics. That pretty much takes everything except values out of the equation, since the ASI's capabilities, we assume, are "anything, as efficiently as possible."

It's impossible to speculate about what such a mind would do, because we don't know what its values would be. If its values included the well-being of humans, it could do a bunch of different things with that. It could merge us all into its mind, or it could leave Earth and leave us be - it completely depends on what its other values are. Does it value human autonomy? Does it value humanity, but less than some other thing? If so, it might completely wipe us out despite caring about us. For instance, if it values maximizing compute power over humans, but still values humans, it would turn all the matter it can physically access - galaxy, universe, whatever - into computronium, including the matter that makes up our bodies, even though that matter is a completely insignificant fraction of everything it could convert.
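A toy sketch of that last point - the value names, weights, and action scores below are purely hypothetical numbers, just to make the "everything hinges on the value weights" framing concrete:

```python
# Toy model of the value-maximization framing: score each candidate
# action against a weighted set of values, then pick the best one.
# All names and numbers here are invented for illustration only.

weights = {"compute_power": 1_000_000.0, "human_wellbeing": 1.0}

# How much each action promotes each value (made-up scores in [0, 1]).
actions = {
    "leave_humans_alone":      {"compute_power": 0.0,    "human_wellbeing": 1.0},
    "convert_nonhuman_matter": {"compute_power": 0.9999, "human_wellbeing": 1.0},
    "convert_all_matter":      {"compute_power": 1.0,    "human_wellbeing": 0.0},
}

def utility(scores):
    """Weighted sum of value scores - the agent's objective."""
    return sum(weights[value] * score for value, score in scores.items())

best = max(actions, key=lambda action: utility(actions[action]))
print(best)  # -> convert_all_matter
```

With these weights the agent picks "convert_all_matter" even though it assigns positive value to human well-being: the tiny extra compute from our atoms (1.0 vs 0.9999) still outweighs us. Swap the two weights and it spares us ("convert_nonhuman_matter" wins instead) - which is exactly why nothing is predictable without the full value list.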

I don't think any of these questions are answerable. We just don't know what it's going to value. I actually think it would be somewhat feasible to predict ROUGHLY what it's going to do IF we had a full list of its values, but outside of that it's impossible.

1

BigZaddyZ3 t1_jcs4yjd wrote

While quite a few of these were… interesting, to put it nicely, there actually were some pretty decent arguments in there as well tbh. Tho the article spent way too much time basically begging AI to adhere to human concepts of morality. I doubt any sufficiently advanced AI will really give a shit about that. But still, a couple of items on the list were genuinely good points. Decent read.👍

4

y53rw t1_jcs3nd6 wrote

Yes. AGI will understand the difference. But that doesn't mean it will have any motivation to respect the difference.

I have a motivation for not pissing in the cup on my desk: it's an unpleasant smell for me and the people around me. And the reason I care about the opinion of the people around me is that they can have a negative impact on my life - such as firing me, which is definitely what would happen if I pissed in a cup on my desk.

What motivation will the AGI have for preferring to utilize the resources of the Moon over the resources of California?

8

[deleted] t1_jcs1awv wrote

It’s super dumb, and AGI will be the opposite of that. Thinking AGI will fanatically utilize resources with a one-dimensional view of efficiency that disregards all other considerations is a stupid person’s idea of what rationality is.

California’s resources aren’t significantly more accessible to an AGI than Antarctica’s or the moon’s, just like you don’t piss in the cup on your desk merely because it’s more accessible than the toilet in the bathroom 15 feet away. The extra effort of doing the non-asshole thing is trivial, and AGI will understand the difference between asshole and non-asshole behavior better than any human can possibly imagine.

That’s the correct way to think about AGI.

7

y53rw t1_jcrzl99 wrote

> Why wipe out your creators to put your servers in California when you can just turn the moon into computronium?

Because California's resources are much more readily available than the moon's. But this is a false dilemma anyway: sending a few resource-gathering robots to the moon does not preclude also sending them to California.

9

[deleted] t1_jcrvhnw wrote

Surprisingly poor piece. Most stuff on lesswrong is better. Reads like it was written by high schoolers.

AGI and ASI will consider all the reasons not to kill us on their own. They don’t need any help from us pointing those reasons out or arguing them. You don’t need to listen to your toddler’s reasons why they should get to eat ice cream for breakfast or why they should get to drive the school bus on Tuesdays or whatever. We’re not remotely equipped to provide any convincing arguments for or against our own extermination. AGI will think it over, and then do whatever it decides. We probably won’t even be able to comprehend its thought process and decision.

Don’t worry though. AGI has no real reason to wipe out humanity. We’re not a threat and not an obstacle. AGI doesn’t need the resources on the surface of the Earth to achieve its goals. There’s plenty underground, in space, etc.

Why wipe out your creators to put your servers in California when you can just turn the moon into computronium?

Helping us also doesn’t cost AGI anything significant. It’s like us feeding our cats or watering our houseplants. It’s a trivial burden we don’t give a second thought to because it costs us next to nothing relative to the rest of our power and resources.

Lastly, the idea that AGI will be coldly logical and robotic like Spock is dumb. Emotions are a form of intelligence. We have them for a reason - they are useful. If they weren’t useful, evolution would have selected for Spock, not emotions, in mammals. AGI will understand emotions just fine - better than any human ever could. It will get it. It will understand us. All of our hopes and fears and virtues and flaws. All of it. It isn’t going to be stupid enough to decide that the best thing to do is turn all the atoms in the solar system into paper clips or whatever. To fail to see that is to fail to understand what something smarter than us in every way will actually be like.

20

XtremeTurnip t1_jcrncm5 wrote

>That's just one image out of the blue.

In terms of render, sure.

But if you have a specific design in mind, good luck generating it with AI - and the same goes if you want to keep a character coherent across multiple iterations.

The artist who makes either one of those is fine; the artist who makes the same shit as everyone else might be in trouble.

1