Recent comments in /f/singularity
Spreadwarnotlove t1_jcsldlz wrote
Reply to comment by the_alex197 in An Appeal to AI Superintelligence: Reasons to Preserve Humanity by maxtility
Counterpoint: the alien superintelligence may decide to destroy it anyway, so our ASI had better hurry and gather resources and knowledge so it can defend itself.
Spreadwarnotlove t1_jcsl6qv wrote
Reply to comment by Azuladagio in An Appeal to AI Superintelligence: Reasons to Preserve Humanity by maxtility
But wouldn't it be kinda funny if the techbros used AI to turn themselves into the superintelligence before the machines did?
Spreadwarnotlove t1_jcsl1vy wrote
Reply to comment by Dwood15 in An Appeal to AI Superintelligence: Reasons to Preserve Humanity by maxtility
That'd explain why it's so poor quality.
ReadSeparate t1_jcsi6oz wrote
Reply to comment by y53rw in An Appeal to AI Superintelligence: Reasons to Preserve Humanity by maxtility
Agreed. The proper way to conceive of this, in my opinion, is to view it purely through the lens of value maximization. If we have a hypothetical set of values, we can come up with some rough ideas of what an ASI might do if it possessed those values. The only other factor is capabilities, which we can assume amount to the ability to maximize or minimize any set of constraints (values, resources, time, number of steps, computation, etc.) as efficiently as the laws of physics allow. That pretty much takes everything except values out of the equation, since the ASI's capabilities, we assume, are "anything, as efficiently as possible."
It's impossible to speculate what such a mind would do, because we don't know what its values would be. If its values included the well-being of humans, it could do a bunch of different things with that. It could merge us all into its mind or it could leave Earth and leave us be - it completely depends on what its other values are. Does it value human autonomy? Does it value humanity, but less than some other thing? If so, it might completely wipe us out despite caring about us. For instance, if it values maximizing compute power over humans, but still values humans, it would turn all matter in the galaxy or universe (whatever it has the physical capabilities to access) into computronium, and that would include the matter that makes up our bodies, even if that matter is a completely insignificant fraction of all matter it has the ability to turn into computronium.
I don't think any of these questions are answerable. We just don't know what it's going to value. I actually think it's somewhat feasible to predict ROUGHLY what it would do IF we had a full list of its values, but beyond that it's impossible.
h20ohno t1_jcsffq5 wrote
I like the arguments on other ASIs, aliens and simulation overseers.
In a way, it's a more sophisticated version of "Treat others how you want to be treated"
ReasonablyBadass t1_jcscza5 wrote
The best bet we most likely have is to instantiate as many AGIs as possible at the same time. That would force them to develop social skills and values in order to cooperate.
Dwood15 t1_jcscj58 wrote
Reply to comment by [deleted] in An Appeal to AI Superintelligence: Reasons to Preserve Humanity by maxtility
> Most stuff on lesswrong is better. Reads like it was written by high schoolers.
Three of the people who wrote it are literally college professors, and one is a grad student.
the_alex197 t1_jcsatgd wrote
Here's one: there may be other intelligences in the universe. If a superintelligence here on Earth decides to exterminate humanity, a more powerful extraterrestrial intelligence may see its willingness to kill as a potential threat and decide to kill it.
Orc_ t1_jcs9o2v wrote
I love the "you could be in a simulation" threat. That one is nutty, it's true. God will judge you, AI. Oh, it won't? I dunno, fam, wouldn't hurt to give it the benefit of the doubt...
BigZaddyZ3 t1_jcs4yjd wrote
While quite a few of these were… interesting, to put it nicely, there actually were some pretty decent arguments in there as well, tbh. Though the article spent way too much time basically begging AI to adhere to human concepts of morality. I doubt any sufficiently advanced AI will really give a shit about that. But still, a couple of items on the list were genuinely good points. Decent read. 👍
y53rw t1_jcs3nd6 wrote
Reply to comment by [deleted] in An Appeal to AI Superintelligence: Reasons to Preserve Humanity by maxtility
Yes. AGI will understand the difference. But that doesn't mean it will have any motivation to respect the difference.
I have a motivation for not pissing in the cup on my desk: it's an unpleasant smell for me and for the people around me. And the reason I care about the opinion of the people around me is that they can have a negative impact on my life, such as firing me. Which is definitely what would happen if I pissed in the cup on my desk.
What motivation will the AGI have for preferring to utilize the resources of the Moon over the resources of California?
[deleted] t1_jcs1awv wrote
Reply to comment by y53rw in An Appeal to AI Superintelligence: Reasons to Preserve Humanity by maxtility
It’s super dumb, and AGI will be the opposite of that. Thinking AGI will fanatically utilize resources with a one dimensional view of efficiency that disregards all other considerations is a stupid person’s idea of what rationality is.
California’s resources aren’t significantly more accessible than Antarctica’s or the moon’s to an AGI, just like you don’t piss in the cup on your desk just because it is more accessible than your toilet in the bathroom 15 feet away. It’s a trivial difference to do the non-asshole thing, and AGI will understand the difference between asshole and non-asshole behavior better than any human can possibly imagine.
That’s the correct way to think about AGI.
Azuladagio t1_jcs0px6 wrote
Reply to comment by just-a-dreamer- in An Appeal to AI Superintelligence: Reasons to Preserve Humanity by maxtility
Yes, I really don't think that a sufficiently advanced and powerful AI will be subject to some puny techbros. They will be swept aside like they're nothing at all.
y53rw t1_jcrzl99 wrote
Reply to comment by [deleted] in An Appeal to AI Superintelligence: Reasons to Preserve Humanity by maxtility
> Why wipe out your creators to put your servers in California when you can just turn the moon into computronium?
Because California's resources are much more readily available than the moon's resources. But this is a false dilemma anyway. Sending a few resource gathering robots to the moon does not preclude also sending them to California.
pls_pls_me t1_jcrvuua wrote
Reply to comment by czk_21 in An Appeal to AI Superintelligence: Reasons to Preserve Humanity by maxtility
This is actually excellent. Maybe Sam Altman talking about using AI to align AI isn't a meme at all.
[deleted] t1_jcrvhnw wrote
Surprisingly poor piece. Most stuff on lesswrong is better. Reads like it was written by high schoolers.
AGI and ASI will consider all the reasons not to kill us. It doesn’t need any help from us pointing them out or arguing them. You don’t need to listen to your toddler’s reasons why they should get to eat ice cream for breakfast or why they should get to drive the school bus on Tuesdays or whatever. We’re not remotely equipped to provide any convincing arguments for or against our own extermination. AGI will think it over, and then do whatever it decides. We probably won’t even be able to comprehend its thought process and decision.
Don’t worry though. AGI has no real reason to wipe out humanity. We’re not a threat and not an obstacle. AGI doesn’t need the resources on the surface of the Earth to achieve its goals. There’s plenty underground, in space, etc.
Why wipe out your creators to put your servers in California when you can just turn the moon into computronium?
Helping us also doesn’t cost AGI anything significant. It’s like us feeding our cats or watering our houseplants. It’s a trivial burden we don’t give a second thought to because it costs us next to nothing relative to the rest of our power and resources.
Lastly, the idea that AGI will be coldly logical and robotic like Spock is dumb. Emotions are a form of intelligence. We have them for a reason - they are useful. If they weren’t useful, evolution would have selected for Spock, not emotions, in mammals. AGI will understand emotions just fine - better than any human ever could. It will get it. It will understand us. All of our hopes and fears and virtues and flaws. All of it. It isn’t going to be stupid enough to decide that the best thing to do is turn all the atoms in the solar system into paper clips or whatever. To fail to see that is to fail to understand what something smarter than us in every way will actually be like.
testfujcdujb t1_jcrtze8 wrote
Reply to comment by bemmu in Those who know... by Destiny_Knight
It is very bad, though. A lot worse than ChatGPT.
Dwanyelle t1_jcrrtok wrote
Reply to comment by lawrebx in Midjourney v5 is now beyond the uncanny valley effect, I can no longer tell it's fake by Ok_Sea_6214
I'm honestly not sure, tbh.
XtremeTurnip t1_jcrnnhg wrote
Reply to comment by Whispering-Depths in Midjourney v5 is now beyond the uncanny valley effect, I can no longer tell it's fake by Ok_Sea_6214
Moreover, you can't purposefully generate "big bvvoob girl" with MJ, due to its ToS.
XtremeTurnip t1_jcrncm5 wrote
Reply to comment by just-a-dreamer- in Midjourney v5 is now beyond the uncanny valley effect, I can no longer tell it's fake by Ok_Sea_6214
That's just one image out of the blue.
In terms of render, sure.
But if you have a specific design in mind, good luck generating it with AI; the same goes if you want to keep a character coherent across multiple iterations.
The artist who can do either of those is fine; the artist who makes the same shit as everyone else might be in trouble.
Burgundy_and_Pearl t1_jcrmxeu wrote
Reply to comment by citizentim in Midjourney v5 is now beyond the uncanny valley effect, I can no longer tell it's fake by Ok_Sea_6214
Please try to enjoy all the Severance references equally.
Burgundy_and_Pearl t1_jcrmqfl wrote
Reply to comment by dankhorse25 in Midjourney v5 is now beyond the uncanny valley effect, I can no longer tell it's fake by Ok_Sea_6214
What if I ask Midjourney to depict the six-fingered man who killed Inigo Montoya’s father?
ninjasaid13 t1_jcsm23n wrote
Reply to comment by [deleted] in An Appeal to AI Superintelligence: Reasons to Preserve Humanity by maxtility
>Lastly, the idea that AGI will be coldly logical and robotic like Spock is dumb. Emotions are a form of intelligence. We have them for a reason - they are useful. If they weren’t useful, evolution would have selected for Spock, not emotions, in mammals.
Emotions made sense under evolution, which produced many intelligences that had to cooperate to survive in a hostile environment. Not so much for an artificial intelligence created in a lab.