Recent comments in /f/singularity

Dolnen t1_jdrmt8y wrote

I think this line of reasoning is pointless, or at least has unnecessary steps. What is the nature of that prompter's reality? Is theirs the ultimate reality? How did that reality come about? The same questions we ask about our reality would still persist. It's an endless, paradoxical loop.

5

NonDescriptfAIth t1_jdrmjox wrote

This is a very age dependent question.

I think to be prudent, one should assume that very little will change in the next 15 years.

Beyond that, I think it would be borderline ludicrous to assume that the economy will function in a way that even slightly resembles what it does today.

Universal basic income seems like a natural path. I won't entertain a reality in which AGI is realized and UBI does not exist in some form, this would be a dystopian place to exist unworthy of any meaningful planning now beyond 'buy land and build a bunker'.

The majority of white-collar jobs will be gone, with only niche or heavily modified roles remaining.

New jobs will emerge to promote human wellness. Things like the government paying people to go walking together or to spend time with the elderly.

Technical labour jobs will probably be the last to go: electricians, plumbers, firefighters, paramedics. Expect these positions to be highly esteemed and generously compensated.

There will be a lot of work to do in rolling out this tech internationally. Eliminating poverty globally and the like will probably be a priority for the newly redundant white collar professionals looking for something meaningful to do with their time and money.

I think the very notion of retirement will become fuzzy, and quickly. What exactly is retirement in a world where there is no work to retire from?

Realistically, 'stopping working' will become equivalent to 'reducing daily activity'. This will probably be heavily discouraged, given that daily engagement in physically and mentally challenging tasks is a huge predictor of health in older age; people often decline soon after retirement. If the nature of 'work' is enjoyable and promotes a healthy lifestyle, why not extend it as long as possible?

Apologies to the Frenchmen reading this.

14

banned_mainaccount t1_jdrk7hh wrote

The delay increases the farther they are from us, so we should at least observe something in the nearer galaxies. 1,000 years is nothing in the evolution of a species, so we should at least have found something within a 1,000-light-year radius by now. I think the occurrence of intelligent life, or life in general, is rarer than people might think. Too many random factors have to fall into precisely the right place and time for it to work at all.

1

shillingsucks t1_jdrjmcc wrote

Not typing with any sort of confidence but just musing.

Couldn't it be said that humans cheat mentally on this type of task as well? I'm not aware of anyone who knows how a sentence they are thinking or speaking will end while they are in the middle of it. For us, it's more like building a mental structure that needs to be filled, and then coming up with the sentence that matches the framework.

If the AI often gets it right on the second try, it makes me wonder if there is a way to frame the question initially so that it has the right framework to get it right on the first guess.

1

banned_mainaccount t1_jdrjcaq wrote

I appreciate the enthusiasm, but correlation ≠ causation. Yes, games and reality are similar, but that doesn't mean reality is a game or a game is reality. A very interesting pattern to notice: in the age of books, people thought the world was just a big story; in this age of games, people think the world is just a game; and in the future, people will think the world is just VR. But it's actually the opposite: the technology tries to replicate reality, not vice versa.

2

banned_mainaccount t1_jdrhosy wrote

Think of intelligence as one protective mechanism among many in animals: some have horns, some have teeth, some have a shell they can hide in, and some have a better understanding of their surroundings and better pattern recognition to survive in a predatory world. When you see it as a qualitative feature rather than a quantitative one, things make more sense.

Statistically it's very possible for millions of living things to exist in this vast universe, but it's quite improbable for them to have intelligence. Just look at the species we coexist with. They're all aliens, and most of them have very low intelligence. The most intelligent of them are our close relatives, the monkeys and apes, and even our closest relatives don't have nearly as high an IQ as ours, which tells us that intelligence is not necessary for living beings.

So there are probably aliens around us, but we can't see them because they can't make a significant enough change to their planet for us to notice. I think scientists should look for planets with continuously changing atmospheres, because intelligence is the only "adaptive" protective mechanism that can get a species through a constantly changing atmosphere.

1

jloverich t1_jdrgd0p wrote

Tbh, I parrot the value and then add 5 three times to double-check. Another thing these chatbots aren't doing is double-checking what they just said; otherwise one of their statements would immediately be followed by another: "oh, that was wrong". Instead, you need to prompt them that it was wrong.

9

EthanPrisonMike t1_jdrg4v0 wrote

Well, considering the almost feudal underpinnings of capitalism's current cycle, I don't see this happening organically. Especially if all that's needed to automatically poison the informational well is training an AI on some stan accounts.

I'm more interested in programming that empowers individual citizens and increases their ability to police the system themselves. Imagine each US citizen with the ability to organize like Obama? Lobby like McConnell? Raise awareness like Sanders?

That's the only hope imo

1

Objective_Fox_6321 t1_jdrfn25 wrote

It's really simple, actually: an LLM isn't doing the math; its only goal is to guess which word/token comes next. Depending on the temperature and other internal factors, LLMs output the most heavily weighted answer.

It's not like an LLM has a built-in calculator, unless the user specifically tells it to use one.

With LangChain, however, you can definitely have an LLM execute a prompt, import code, open a library, etc., and perform non-native tasks.

But you need to realize an LLM is more like a Mad Libs generator, fine-tuned with specific weights for a particular style of language. Its goal is to understand the text and predict the next word/token in accordance with its parameters.
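To make the "guess the next token" point concrete, here is a minimal sketch of temperature-scaled sampling. The toy logits and the `"2 + 2 ="` prompt are made up for illustration; the point is that the model only scores candidate tokens, it never computes the sum.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Draw the next token from a dict of {token: logit}.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more varied output).
    """
    # Temperature-scale, then softmax (shifted by the max for stability)
    scaled = [score / temperature for score in logits.values()]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    # Weighted random draw over the tokens
    r = random.random()
    cumulative = 0.0
    for token, e in zip(logits, exps):
        cumulative += e / total
        if r < cumulative:
            return token
    return token  # guard against floating-point rounding

# Hypothetical logits a model might assign after the prompt "2 + 2 ="
logits = {"4": 5.0, "5": 2.0, "22": 1.0}
print(sample_next_token(logits, temperature=0.1))  # almost certainly "4"
```

At low temperature the highest-logit token dominates; at high temperature the "wrong" continuations get sampled more often, which is one reason arithmetic answers can vary between runs.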

6

lightinitup t1_jdrfkln wrote

I know you are joking, but I just wanted to point out that the difference is that in Schindler's List, you empathize with the Jews. In this, Kyoko is basically objectified and dehumanized. People of color already have issues with representation in media, and it further reinforces stereotypes. I hope you can understand how it can be damaging to the community.

1

WarmSignificance1 t1_jdrfdmj wrote

There is no future world in which having more assets is a bad thing. The worst case scenario, which I find to be extremely unlikely, is that assets no longer have value. In a situation like that, we'll either all be dead or have everything we need.

Just think about the technology we have invented over the last 100 years. Someone born 100 years ago today witnessed massive changes, and yet life is still pretty much the same as it was back then, just a lot better. I think it is quite likely we will experience the same thing.

16

RadioFreeAmerika t1_jdrezc7 wrote

Could be. I asked in another post about LLMs and maths capabilities, and it seems that LLMs would profit greatly from the ability to do internal simulations. LLMs can't do this currently, and people commented that in the Microsoft paper they state that (current?) LLMs are conceptually unable to do more than linear processing of a single sequence. Possible workarounds are plug-ins or neuro-symbolic AI models.

Nevertheless, maybe our reality is just the internal simulation of an ASI's prompt response. Who knows? Wouldn't that be ironic?

Your second question is an eons-long discussion and greatly depends on how you define god.

5

lightinitup t1_jdrehqq wrote

This article goes into more depth:

https://medium.com/science-technoculture-in-film/objectification-and-abjectification-in-ex-machina-and-ghost-in-the-shell-b126b8832a1d

But TL;DR, >!the character of Kyoko needlessly reinforces the negative stereotype of the subservient Asian woman. It would have been a great ending if she got her revenge on Nathan, but she dies in the process, ultimately reinforcing her role, and her identity, as a means to an end.!<

1

GoodAndBluts t1_jdrd8q1 wrote

I am mid 50s, and will retire within the next 5 years (maybe sooner, maybe later, maybe it will be forced upon me since I am an older person in the software world). If I get laid off and have to retire because of GPT, so be it.

But... for the last few years it has haunted me that my savings may have to support me and my 2 children. Even before ChatGPT, I spent time wondering how exactly they will be able to make a good, secure living with things like automation and outsourcing eating away at their ability to earn good money.

I have been processing my thoughts on ChatGPT, and I am not sure it is the risk that everyone thinks it is. But even so, things are changing so rapidly: how can you pick a career, any career, and expect it to still be viable in 10 years?

43