Recent comments in /f/singularity

Frone0910 t1_ja8o8hf wrote

Very interesting. I'm not sure about the automation of labor; I think there's a future in which people overseeing these tools manage to continue existing that way for a long period of time. But once the singularity is reached, most people will be checking out. Very difficult to say, tbh, because perhaps we'll be able to augment ourselves to stay relevant.

9

BlueShipman t1_ja8kpvz wrote

> He pointed to the Trump campaign having run "the single best digital ad campaign I've ever seen from any advertiser. Period."

Wow, what a psyop

>"They weren't running misinformation or hoaxes. They weren't micro targeting or saying different things to different people," Bosworth wrote. "They just used the tools we had to show the right creative to each person. The use of custom audiences, video, ecommerce, and fresh creative remains the high water mark of digital ad campaigns in my opinion."

Whoa, this is pretty much a full blown psyop at this point.

1

DadSnare t1_ja8ibe5 wrote

That’s fine, but even in your post I’m seeing some easy-to-claim stuff that has no solid basis. Are you sure that the programmers cannot explain why a chatbot errors out? Really? Also, who said anything about the emotional state of an AI? That’s hardly even possible because it doesn’t have an endocrine system. We may have strong emotions the way we do to help with memory formation and retrieval as much as anything else. That’s not a problem for a machine. What’s a plausible way we get destroyed? Does AI own the corporations too? How do I lose power, internet, food, etc.? The nuclear terminator version seems impossible unless we’re going to talk about hacking brains and adjusting behavior like crazy people think is possible.

1

QuantumPossibilities t1_ja8i0k0 wrote

I agree with your premise but categorizing what the Chinese government is doing as "open science" is a bit of a stretch. Who are they sharing it with exactly besides their own government institutions or government backed companies? It's not like China is sharing their AI with the rest of the global scientific community to promote humanity.

−1

iiioiia t1_ja8hz5j wrote

> If they're only starting now, they're helplessly behind all the companies that took notice with GPT-3 at the latest.

One important detail not to overlook: the manner in which China censors (or doesn't) its model will presumably differ greatly from the manner in which Western governments force Western corporations to censor theirs - and this is one of the biggest flaws in the respective plans of these two superpowers for global dominance, and control of "reality" itself. Or an even bigger threat: what if human beings start to figure out (or even question) what reality (actually) is? Oh my, that would be rather inconvenient!!

Interestingly: I suspect that this state of affairs is far more beneficial to China than The West - it is a risk to both, but it is a much bigger risk to The West because of their hard earned skill, which has turned into a dependence/addiction.

The next 10 years are going to be wild.

13

isthiswhereiputmy t1_ja8hx20 wrote

It becomes more difficult to specify what will happen in which decade or century the further we extrapolate out.

I can imagine consciousnesses in 'the cloud' occurring sooner than 200 years from now. Some things do just take time, though, even at peak development efficiency. Practically building some massive energy transformer or engineering a planet is something that could take centuries.

I really don't think we'll ever have our flesh bodies travelling the stars like in Star Trek.

1

Five_Decades t1_ja8htww wrote

We really can't predict it, because the underlying science that'll make future technology possible probably hasn't been discovered yet. Someone from the 19th century wouldn't be able to fathom atomic bombs, quantum computers, 5nm processors, etc., because the underlying science for these hadn't been discovered yet.

I assume matrioshka brains and Dyson spheres will exist 200 years from now. Faster-than-light travel too, if it's possible. Beyond that, who knows. Maybe we'll know how to change the laws of the universe by then.

4

Enlightened_Neander t1_ja8fzph wrote

This is a great question to ponder; however, I find that 20 years from now is extremely difficult to forecast, never mind 200.

In 20 years, I would parallel some of Kurzweil's sentiments about humans merging with AI/technology at a cellular level.

In 200 years we will/may have vastly new technology that we can't even fathom. AI/Human N.0 may create an increasingly complex being that is able to understand more layers of reality. For all we know, we may currently be arguing over 1% of reality.

2

gantork t1_ja8fj5n wrote

I don't fully agree with that. All the advantages of big studios that you described above have already existed for a long time even without AI, yet the indie game dev market is huge, from solo devs to small teams, because not everyone likes AAA games. Big studios might dominate in market share (I don't know the actual numbers), but there's still room to make a ton of money as an indie dev.

1