Recent comments in /f/singularity

Eleganos OP t1_jd052wv wrote

A small group trying this plan, while making secrecy easier and increasing the likelihood of functional cooperation between the parties, magnifies the logistical issues to ludicrous levels, and would make it impossible for them to completely infiltrate all world governments to ensure they don't suffer nuclear retaliation.

Moreover, if the time between them getting their AGI and enacting their plan is shorter than the time it would take, say, the American military to crack AGI and then ASI, then they'd be staring down the sights of a beast equally deadly to their own, with far more resources with which to counteract their plans.

If they were to try to speed-run a bioweapon, for example, and the government managed to hole up in the Pentagon and finish this counter-AGI, a cure could be created for their bioweapon and the culprits subsequently nuked out of existence.

7

Surur t1_jd04mt7 wrote

Some of those are intrinsic (like health) but most other things depend on society to give them value.

Say, for example, you are a property tycoon with numerous skyscrapers in New York. When most of Manhattan is dead, your property is worthless.

Or say you have a mega-yacht like Bezos, you sail it to Tahiti, but when you get there the local population and tourist attractions are empty, because everyone is dead.

And who are you impressing with your gigantic yacht when 99% of people are dead, and the other 1% can just get their robots to build a similarly sized boat?

10

Eleganos OP t1_jd03z69 wrote

I've addressed another commenter regarding that scenario.

Basically, a singular actor turns it from 'the rich will kill us all' to 'a madman who happens to be rich/powerful will murder us all'.

While the odds of it succeeding and devastating humanity go up by an order of magnitude compared to most other scenarios, there are eight billion humans, which means there are 8 one-in-a-billion scenarios, 8,000 one-in-a-million scenarios, and 8,000,000 one-in-a-thousand scenarios between us and our potential killer.

And I mean that as in 'between them and the number 0'.

Statistically speaking, something is bound to happen that would see a good chunk of humanity survive their attempt, and more than likely kill them in retribution, or just outlive them.

Still an apocalyptic scenario, but not an extinction scenario, and our species could well rebuild via AGI after all was said and done.

I'm not going to pretend that that outcome is a certainty though. I just side with statistical probability and Murphy's law for it.
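The 'billions of chances' intuition above can be put into rough numbers. A minimal sketch, with purely illustrative figures I'm assuming for the sake of argument (e.g. each person independently having a one-in-a-billion chance of derailing the plan):

```python
import math

# Illustrative assumption: each of 8 billion people independently has a
# one-in-a-billion chance of derailing the extinction attempt.
people = 8_000_000_000
p_derail_each = 1e-9

# P(no one derails) = (1 - p)^N, well approximated by exp(-N * p) for tiny p.
p_no_one = math.exp(-people * p_derail_each)  # about e^-8
p_someone = 1 - p_no_one

print(f"P(at least one derailment) ~ {p_someone:.4%}")
```

Under those assumed odds, the chance that *nobody* interferes works out to roughly e^-8, i.e. a fraction of a percent, which is the sense in which "something is bound to happen."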

1

TinyBurbz t1_jd03evn wrote

Yeah, but schools offer CNC machining and start you on the path with basic metalworking. Likewise, you don't just write essays in school. In other classes, like history, you are assigned an essay to show you grasp at least one small portion of the topic. Essay writing demonstrates a grasp of knowledge on a topic; it is not the purpose of the class.

Unlike a calculator, GPT isn't integral to high-level completion of any task. Whereas primitive math computers, like the abacus, have always been needed in mathematics.

1

D_Ethan_Bones t1_jd034a6 wrote

If ultra elites got killbots they would be turning them on rivals.

"I'll unite the plebs to use them against you" would be at least one super-rich guy's strategy.

If the machine is of superhuman intelligence, then it's not going to be bound by human instructions; it's not a soldier, because it can't be commanded.

9

YoAmoElTacos t1_jczzlr1 wrote

The crux here is how costly the CEO AI is.

If we have OpenAI running every corporation's CEO AI, we have a mega monopoly.

If the CEO AI is just a little cheaper than a human CEO and gives results that are just a little more efficient, we maintain the current system.

If the CEO AI cannot measurably beat human CEOs except over the long term (which is highly likely, given the many intangibles), we won't see this except through forced acquisitions of human-run firms by AI firms.

If the CEO AI is runnable on a laptop...

2

InquisitiveDude t1_jczyfi8 wrote

Writing and making arguments is a great skill for children to learn so I’d hate to see it discarded entirely.

I’d ramp up in-class testing and decrease the amount of homework essays. Any homework would be geared toward learning a concept (i.e., read this chapter on X); how the students actually learn it is up to them. The tests would be in a controlled environment and would make certain that they understand the subject.

1

Eleganos OP t1_jczxr2y wrote

The issue with that is if America gets the military might to kill the world, why wouldn't they just conquer it instead?

Unless you're saying it's just the rich/powerful of America, and the lower classes in America will also be getting it.

In which case, while it might not suffer from the logistics, cooperation, and government issues, it does concentrate the problem into one nation for the most part. Meaning any nation with undisclosed/secretive nukes, like Israel, could impose M.A.D. on America, threatening nuclear war if it just started mass-killing or conquering its way across the globe.

Moreover, when it's specifically America, I can't see them making enough headway into specifically China to subvert it enough to render the threat of its nuclear stockpile null and void. Mostly, and ironically, due to China's authoritarian construction and the improbability that Xi would give up his throne to make his life only slightly better, if not leave it unchanged, or even cost him power by making him one ruler among many rather than a sole authoritarian tyrant with his own kingdom.

Of course, an AGI might develop anti-nuke tech... but I don't see how that could happen without WW3 breaking out shortly after.

Overall, the plan has better odds when concentrated in America, but it runs into an even more immovable roadblock than the rich-illuminati route.

5