Recent comments in /f/singularity
NonDescriptfAIth t1_jdrmjox wrote
This is a very age dependent question.
I think to be prudent, one should assume that very little will change in the next 15 years.
Beyond that, I think it would be borderline ludicrous to assume that the economy will function in a way that even slightly resembles what it does today.
Universal basic income seems like a natural path. I won't entertain a reality in which AGI is realized and UBI does not exist in some form; that would be a dystopian place, unworthy of any meaningful planning now beyond 'buy land and build a bunker'.
The majority of white-collar jobs will be gone, with only niche or heavily modified roles remaining.
New jobs will emerge to promote human wellness. Things like the government paying people to go walking together or to spend time with the elderly.
Technical labour jobs will probably be the last to go: electricians, plumbers, firefighters, paramedics. Expect these positions to be highly esteemed and massively compensated financially.
There will be a lot of work to do in rolling out this tech internationally. Eliminating poverty globally and the like will probably be a priority for the newly redundant white collar professionals looking for something meaningful to do with their time and money.
I think the very notion of retirement will become fuzzy, and quickly. What exactly is retirement in a world where there is no work to retire from?
Realistically, 'stopping working' will become equivalent to 'reducing daily activity'. This will probably be strongly discouraged, given that daily engagement in physically and mentally challenging tasks is a huge predictor of health in older age; people often decline soon after retirement. If the nature of 'work' is enjoyable and promotes a healthy lifestyle, why not extend it as long as possible?
Apologies to the Frenchmen reading this.
siameseoverlord t1_jdrkvuw wrote
Reply to comment by DaveShap_Automator in Can we just stop arguing about semantics when it comes to AGI, Theory of Mind, Creativity etc.? by DragonForg
Great read. Booyah!
banned_mainaccount t1_jdrk7hh wrote
Reply to comment by Arcady in The whole reality is just so bizzare when you really think about it. by aalluubbaa
The delay increases the farther they are from us, so we should at least observe something in nearby galaxies. A thousand years is nothing in the evolution of a species, so we should have found something within a 1,000-light-year radius by now. I think the occurrence of intelligent life, or of life in general, is rarer than people might think. So many random factors have to fall into precisely the right place at the right time for it to work at all.
0002millertime t1_jdrk65i wrote
Reply to comment by skztr in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
If we look at what are your odds of being a human alive at any particular time, though, then being alive right now has the highest chance. There are more humans alive right now than at any other single time point in the history of the universe.
[deleted] t1_jdrjmvd wrote
Reply to comment by 7734128 in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
[deleted]
shillingsucks t1_jdrjmcc wrote
Reply to comment by RadioFreeAmerika in Why is maths so hard for LLMs? by RadioFreeAmerika
Not typing with any sort of confidence, just musing.
Couldn't it be said that humans cheat mentally on this type of task as well? I'm not aware of anyone who knows how a sentence they are thinking or speaking will end while they are in the middle of it. We build a mental structure that needs to be filled, then come up with the sentence that matches the framework.
If the AI often gets it right on the second try, it makes me wonder whether there is a way to frame the question initially so it has the right framework to get it right on the first guess.
banned_mainaccount t1_jdrjcaq wrote
Reply to comment by Ginkotree48 in The whole reality is just so bizzare when you really think about it. by aalluubbaa
I appreciate the enthusiasm, but correlation ≠ causation. Yes, games and reality are similar, but that doesn't mean reality is a game or a game is reality. An interesting pattern to notice: in the age of books, people thought the world was just a big story; in this age of games, people think the world is just a game; in the future, people will think the world is just VR. But it's actually the opposite: the technology tries to replicate reality, not vice versa.
Kolinnor t1_jdrjbj5 wrote
Reply to comment by jloverich in Why is maths so hard for LLMs? by RadioFreeAmerika
Yeah, definitely! Although I don't think this is an unfixable fundamental flaw, I agree it's a big advantage we still have over them.
banned_mainaccount t1_jdri6lk wrote
Reply to comment by HumanSeeing in The whole reality is just so bizzare when you really think about it. by aalluubbaa
Can you elaborate on "speed of light is instant"?
banned_mainaccount t1_jdrhosy wrote
Reply to comment by HatsusenoRin in The whole reality is just so bizzare when you really think about it. by aalluubbaa
Think of intelligence as one protective mechanism among many in animals: some have horns, some have teeth, some have shells they can hide in, and some have a better understanding of their surroundings and better pattern recognition to survive in a predatory world. When you see it as a qualitative feature rather than a quantitative one, things make more sense. Statistically it's very possible for millions of living things to exist in this vast universe, but it's quite improbable for them to have intelligence. Just look at the species we coexist with: they're all aliens, and most of them have very low intelligence. The most intelligent of them are our close relatives, the monkeys and apes, and even our closest relatives have nowhere near our intelligence, which tells us that intelligence is not necessary for living beings. So there may well be aliens around us, but we can't see them because they can't change their planet enough for us to notice. I think scientists should look for planets with continuously changing atmospheres, because intelligence is the only adaptive protective mechanism that can get a species through a constantly changing environment.
jloverich t1_jdrgd0p wrote
Reply to comment by Kolinnor in Why is maths so hard for LLMs? by RadioFreeAmerika
Tbh, I parrot the value and then add 5 three times to double-check. One of the other things these chatbots aren't doing is double-checking what they just said; otherwise one of their statements would be immediately followed by another: "oh, that was wrong". Instead, you need to prompt them that it was wrong.
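The "double-check" step described here can be sketched as a generate-then-verify loop. This is only an illustration: `ask_model` is a hypothetical stand-in for any chatbot call, and the verifier is a caller-supplied check.

```python
def ask_model(prompt):
    # Hypothetical stand-in for a chatbot call; here it confidently
    # "parrots" a wrong sum, the way the comment describes.
    return "16" if "5 + 5 + 5" in prompt else "?"

def checked_answer(question, verify):
    # Generate an answer, then run the self-check current chatbots skip:
    # notice the error and retract it before the user has to prompt them.
    answer = ask_model(question)
    if not verify(answer):
        return f"{answer} -- oh, that was wrong"
    return answer

print(checked_answer("What is 5 + 5 + 5?", lambda a: a.strip() == "15"))
```

With a real model, the verifier could itself be a second model call; the point is only that the retraction happens inside the loop rather than in the next user turn.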
EthanPrisonMike t1_jdrg4v0 wrote
Reply to comment by OsakaWilson in What do you want to happen to humans? by Y3VkZGxl
Well, considering the almost feudal underpinnings of capitalism's current cycle, I don't see this happening organically. Especially if all that's needed to automatically poison the informational well is training an AI on some stan accounts.
I'm more interested in programming that empowers individual citizens and increases their ability to police the system themselves. Imagine every US citizen with the ability to organize like Obama, lobby like McConnell, and raise awareness like Sanders.
That's the only hope imo
Baturinsky t1_jdrg30j wrote
Reply to Why is maths so hard for LLMs? by RadioFreeAmerika
I think it's not that AI is bad at math specifically. It's just that math is the easiest way to formulate a compact question that requires a non-trivial precise solution.
Objective_Fox_6321 t1_jdrfn25 wrote
Reply to Why is maths so hard for LLMs? by RadioFreeAmerika
It's really simple, actually: an LLM isn't doing the math. Its only goal is to guess which word/token comes next. Depending on the temperature and other internal factors, LLMs output the most heavily weighted answer.
It's not like an LLM has a built-in calculator, unless the user specifically tells it to use one.
With LangChain, however, you can definitely have an LLM execute a prompt, import code, open a library, etc., and perform non-native tasks.
But you need to realize an LLM is more like a Mad Libs generator, fine-tuned with specific weights for explicit language. Its goal is to model the text and predict the next word/token in accordance with its parameters.
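A toy sketch of what "temperature" does to that next-token guess. The vocabulary and logit scores below are made up for illustration; real models score tens of thousands of tokens per step, but the sampling math is the same shape.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Scale logits by 1/temperature, softmax, then sample one token."""
    scaled = [score / temperature for score in logits.values()]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = {tok: e / total for tok, e in zip(logits, exps)}
    # Low temperature concentrates mass on the highest-logit token;
    # high temperature flattens the distribution toward uniform.
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Made-up scores for the continuation of "2 + 2 =":
logits = {"4": 5.0, "5": 2.0, "banana": 0.1}
print(sample_next_token(logits, temperature=0.1))   # almost always "4"
print(sample_next_token(logits, temperature=10.0))  # much more random
```

This is why the model can emit "5": the wrong token always has some probability mass, and nothing in the sampler checks the arithmetic.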
lightinitup t1_jdrfkln wrote
Reply to comment by Bestihlmyhart in Let’s Make A List Of Every Good Movie/Show For The AI/Singularity Enthusiast by AnakinRagnarsson66
I know you are joking, but I just wanted to point out that the difference is that in Schindler's List, you empathize with the Jews. In this, Kyoko is basically objectified and dehumanized. People of color already have issues with representation in media, and it further reinforces stereotypes. I hope you can understand how it can be damaging to the community.
RadioFreeAmerika OP t1_jdrfevx wrote
Reply to comment by Personal_Problems_99 in Why is maths so hard for LLMs? by RadioFreeAmerika
They named it Bard. What did you expect ;-)
Do you have access to GPT-4? I've only played around with the public version on OpenAI, and when prompted it didn't even know about GPT-4, specifically.
WarmSignificance1 t1_jdrfdmj wrote
There is no future world in which having more assets is a bad thing. The worst case scenario, which I find to be extremely unlikely, is that assets no longer have value. In a situation like that, we'll either all be dead or have everything we need.
Just think about the technology we have invented over the last 100 years. Someone born 100 years ago witnessed massive changes, and yet life is still pretty much the same as it was back then, just a lot better. I think it is quite likely we will experience the same thing.
MrNixxxoN t1_jdrf6pz wrote
These people think we're idiots. It is called cost cutting.
Funny coming from Levi's, which is a horribly expensive brand.
RadioFreeAmerika t1_jdrezc7 wrote
Reply to comment by often_says_nice in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
Could be. I asked in another post about LLMs and maths capabilities, and it seems that LLMs would profit greatly from the capability to do internal simulations. LLMs can't do this currently, and people commented that in the Microsoft paper they state that (current?) LLMs are conceptually unable to do more than linear sequential processing of one sequence. Possible workarounds are plug-ins or neuro-symbolic AI models.
Nevertheless, maybe our reality is just the internal simulation of an ASI's prompt response. Who knows, wouldn't that be ironic?
Your second question is an eons-long discussion and greatly depends on how you define god.
skob17 t1_jdrex9t wrote
Reply to comment by CommunismDoesntWork in Why is maths so hard for LLMs? by RadioFreeAmerika
One prompt takes only one pass through the network to generate the answer. Still a few hundred layers deep, but only one pass. It cannot iterate over a complicated maths problem to solve it step by step.
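The single-pass limitation is roughly why "think step by step" prompting helps: each generated token triggers another pass, so the model's own output becomes its scratchpad. A loose analogy (not real transformer math), with a fixed-depth function standing in for one forward pass:

```python
def one_pass(state):
    # Stand-in for one forward pass: a fixed amount of computation.
    # Here it can only perform a single subtraction.
    return state - 7

def iterate_to_answer(state):
    # Feeding each output back in, like chain-of-thought tokens,
    # buys the fixed-depth "network" as many passes as the problem needs.
    while state >= 7:
        state = one_pass(state)
    return state

# Problem requiring iteration: reduce 100 mod 7 by repeated subtraction.
print(one_pass(100))           # 93: one pass alone can't finish the job
print(iterate_to_answer(100))  # 2: looping over its own output can
```

The analogy is imperfect (a transformer's depth does a lot more than one subtraction), but it captures why a single pass struggles with problems whose solution length isn't bounded in advance.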
skob17 t1_jdrenvs wrote
Reply to comment by Kolinnor in Why is maths so hard for LLMs? by RadioFreeAmerika
It's puzzling. It recognized the last sentence as normal and did not reverse it.
lightinitup t1_jdrehqq wrote
Reply to comment by Szabe442 in Let’s Make A List Of Every Good Movie/Show For The AI/Singularity Enthusiast by AnakinRagnarsson66
This article goes into more depth:
But TL;DR: >!the character of Kyoko needlessly reinforces the negative stereotype of the subservient Asian woman. It would have been a great ending had she gotten revenge on Nathan, but she dies in the process, ultimately reinforcing her, and her identity, as a means to an end.!<
GoodAndBluts t1_jdrd8q1 wrote
I am mid 50s, and will retire within the next 5 years (maybe sooner, maybe later, maybe it will be forced upon me since I am an older person in the software world). If I get laid off and have to retire because of GPT, so be it.
But... for the last few years it has haunted me that my savings may have to support me and my 2 children. Even before ChatGPT I spent time wondering how exactly they will be able to make a good, secure living with things like automation and outsourcing eating away at their ability to make good money.
I have been processing my thoughts on ChatGPT and I am not sure it is the risk that everyone thinks it is - but even so, things are changing so rapidly - how can you pick a career - any career - and expect it to still be viable in 10 years?
Personal_Problems_99 t1_jdrd826 wrote
Reply to comment by RadioFreeAmerika in Why is maths so hard for LLMs? by RadioFreeAmerika
I use ChatGPT and now sometimes Bard. I haven't messed with Bing lately because it's a bit sluggish.
Don't let Bard fool you: it knows the truth, but it's a liar. I haven't worked out how to get it to quit lying to you yet.
But ChatGPT... to me it seems alive.
Dolnen t1_jdrmt8y wrote
Reply to comment by often_says_nice in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
I think this line of reasoning is pointless, or at least it has unnecessary steps. What is the nature of the reality of that prompter? Is his the ultimate reality? How did that reality come about? The same questions we ask about our reality would still persist. It's an endless, paradoxical loop