Recent comments in /f/singularity
type102 t1_jaevb1x wrote
Reply to When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
The solution is to LIE on every resume you send out. (You know, like everyone who is paid well [*cough* managers] already does.)
EnomLee t1_jaeu8ee wrote
Reply to comment by Iffykindofguy in Digital Molecular Assemblers: What synthetic media/generative AI actually represents, and where I think it's going | Even now, people misunderstand just how transformative generative AI really is. Those who do understand, however, are too caught up in techno-idealism to see the likely ground truth by Yuli-Ban
Can't say I'm interested in defending CEO worship, so no argument there. The Zuckerbergs, Bezoses and Musks of the world are not our friends.
naivemarky t1_jaeu4ts wrote
Reply to comment by techy098 in (Long post) Will the GPT4 generation of models be the last "highly anticipated" by the public? by AdditionalPizza
Assistant is almost here and it's going to be awesome. Everyone will have one as soon as it rolls out. Plus it's not the device itself that will be anything special (it's basically as complex as a headset), it's the processing power in a data center in the cloud.
AdditionalPizza OP t1_jaetm35 wrote
Reply to comment by techy098 in (Long post) Will the GPT4 generation of models be the last "highly anticipated" by the public? by AdditionalPizza
I hope we're there in less than a "few" years, but we'll see. Once hallucinations are tempered enough I don't see why we wouldn't have access to that.
Liberty2012 OP t1_jaetcvy wrote
Reply to comment by Surur in Is the intelligence paradox resolvable? by Liberty2012
Conceptually yes. However, human children sometimes grow up without adopting the values of their parents and teachers. They change over time.
We have a conflict in that we want AGI/ASI to be humanlike, yet under certain conditions not like humans at all.
ccnmncc t1_jaet8sy wrote
Reply to comment by drsimonz in Leaked: $466B conglomerate Tencent has a team building a ChatGPT rival platform by zalivom1s
I understand what you’re saying. We’ve developed methods and materials that have facilitated (arguably, made inevitable) our massive population growth.
We’ve taught ourselves how to wring more out of the sponge, but that doesn’t mean the sponge can hold more.
You caught my drift, though: we are overpopulated - whether certain segments of society recognize it or not - because on balance we use technology to extract more than we use it to replenish. As you note, that’s unsustainable. Carrying capacity is the average population an ecosystem can sustain given the resources available - not the max. It reflects our understanding of boom and bust population cycles. Unsustainable rates of population growth - booms - are always followed by busts.
We could feasibly increase carrying capacity by using technology to, for example, develop and implement large-scale regenerative farming techniques, which would replenish soils over time while still feeding humanity enough to maintain current or slowly decreasing population levels. We could also use technology to assist in the restoration, protection and expansion of marine habitats such as coral reefs and mangrove and kelp forests. Such applications of technology might halt and then reverse the insane declines in biodiversity we’re witnessing daily. Unless and until we take such measures (or someone or something does it for us), it’s as if we’re living above our means on ecological credit and borrowed time.
CypherLH t1_jaesy8l wrote
Reply to comment by Dreikesehoch in AI technology level within 5 years by medicalheads
I get what you are saying, but I'm not sure what the basis for skepticism right now is. Things have been developing INSANELY fast since early last year; it's hard to imagine things developing any faster and more impressively than they did and still are. I guess you can assume that we're close to some upper limit, but I don't see a basis for assuming that.
MacacoNu t1_jaesgou wrote
Reply to comment by Lawjarp2 in Context-window of how many token necessary for LLM to build a new Google Chrome from scratch ? by IluvBsissa
Attention is all you need
Ortus14 t1_jaes274 wrote
Reply to Is the intelligence paradox resolvable? by Liberty2012
Containment is not possible. If it's outputting data (i.e. is useful to us), then it has a means of affecting the outside world and can therefore escape.
The Alignment problem is the only one that needs to be solved before ASI, and it has not been solved yet.
Sea_Emu_4259 t1_jaeroz1 wrote
Reply to Autonomous drones use AI and computer vision to harvest fruits and veggies. In last year's demo, they only flew one drone now they can fly an entire fleet. In 5 years' time it could become truly impressive. by Dalembert
I guess I'll have an UberEats drone delivery at my window of fresh fruit picked 10 minutes ago lol
AdamAlexanderRies t1_jaeqhy5 wrote
Reply to comment by Emory_C in Singularity claims its first victim: the anime industry by Ok_Sea_6214
Oops! Let's clarify. First, I agree with you that AGI is not machine learning. Here's how I use the terms:
AGI (Artificial General Intelligence) - entity with cognitive abilities equal to or better than any given human, across all domains.
ML (Machine Learning): this is how modern AI models are trained, typically in the form of neural nets, attention models, tokenized vectors, and lots of data stirred in a cauldron of TPUs. However we end up training AGI will be a form of ML (maybe one not developed yet), but the term covers all the ways we've been training models for the last decade or so. Maybe all imaginable AI training techniques are technically ML, but I use it to refer specifically to the tech underlying the recent exciting batch - diffusion image models (Stable Diffusion, DALL-E, Midjourney) and Large Language Models (ChatGPT, New Bing, LaMDA).
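For what it's worth, the "attention" at the heart of that recent batch can be sketched in a few lines. This is a toy scaled dot-product attention over made-up 2-d embeddings, purely illustrative of the mechanism, not any real model's implementation:

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over toy token vectors.

    Each query scores every key, the scores become weights via softmax,
    and the output is the weighted average of the values - one output
    vector per query.
    """
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Weighted average of the value vectors.
        out = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
        outputs.append(out)
    return outputs

# Three toy "tokens" as 2-d embeddings, attending to each other
# (self-attention: queries, keys, and values are all the same).
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(attention(tokens, tokens, tokens))
```

Real models stack many such attention layers with learned projection matrices; this just shows the core weighted-averaging idea.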
Does that work for you?
> at that point, do you think the AI will even care about making creative content for humans?
>
> It’d be like Scorsese deciding to make a movie exclusively for dogs. Why would he?
When you say "the AI" here, what do you mean exactly? What sorts of traits does that kind of AI have?
> ML is not creative or intelligent. It still needs human direction.
Creativity and intelligence are here already, to a limited extent. Generative AIs are creating in the sense that it's not just collage or parroting: the process copes with completely novel combinations of ideas, and its outputs vary to match. It's a worse poet than Shakespeare, a worse historian than a tenured professor, a worse novelist than Tolkien, a worse programmer than Linus, a worse physicist than Einstein, and so on, but it's demonstrating actual intellect in all these domains and more, better than most grade-schoolers and some grown adults.
It does not still need human direction, and that's unrelated to its cognitive powers (creativity, intelligence, etc.) anyway. ChatGPT is an implementation of GPT that requires human direction, but that's a design choice, not an inherent limitation. They wanted a chatbot. If they wanted it to exhibit autonomous behaviour via some complex function to decide for itself what to read, when to reply, and where to post, they could've done that too.
FomalhautCalliclea OP t1_jaepa7r wrote
Reply to comment by RabidHexley in The XIXth and the XXIIth century: about the ambient pessimism predicting a future of inequality and aristocratic power for the elites arising from the singularity by FomalhautCalliclea
Well put (same for the comment above).
People who think the rich can ride out the collapse remind me of some XVIIIth-century economists who would write "robinsonnades", meaning fictional stories in the vein of Robinson Crusoe, with economic agents starting out of nothingness in a nonexistent pure land with no previous inhabitants, completely ignoring social structures, anthropology, etc.
V_Shtrum t1_jaep48h wrote
Reply to comment by RabidHexley in Is style the next revolution? by nitebear
All of what you say is true. The circle I'm trying to square is that, on the one hand, people often find work dull and unfulfilling. On the other, it's been widely observed that unemployment and underemployment correlate with all sorts of negative outcomes such as crime and poor mental and physical health. I'm not sold on the idea that more generous unemployment benefits (AKA UBI) alone are going to solve that.*
I was convinced by Viktor Frankl's book 'Man's Search For Meaning' that (most) people aren't at their core hedonistic; what they really want is meaning in their lives. Many people get this from work, others from having a family (and so on). I think that if AI were to eliminate work, it would eliminate a lot of the meaning that a lot of people get in their lives, and something needs to replace that. If nothing positive fills that vacuum, then something negative will.
EDIT:
I would also add, as you intimate, that the death of meaningful work predates AI, and that the gross dissatisfaction that many people feel at work (and in their lives) is a consequence of this. I don't know what the solution is.
*There will of course be a subsection of people who will be perfectly happy on UBI.
FomalhautCalliclea OP t1_jaeolb6 wrote
Reply to comment by Quealdlor in The XIXth and the XXIIth century: about the ambient pessimism predicting a future of inequality and aristocratic power for the elites arising from the singularity by FomalhautCalliclea
It depends when and where:
On the one hand, some ancient societies were quite egalitarian compared to the XIXth century (the Harappan civilization, the pre-Columbian Tlaxcallan civilization, the Sassanid empire under Khosrow I, etc.).
On the other hand, some were much less egalitarian, almost dystopian (medieval serfdom societies).
The XIXth century was an improvement on the preceding century, with many countries abolishing serfdom (1789 for early movers like France, 1861 for latecomers like Russia) and slavery (1807-1831 in the UK, 1848 in France, 1865 in the US).
There is also continuity between centuries. There is even a saying: "the XVIIIth century asked the questions (with the Enlightenment), the XIXth century brought the answers".
CertainMiddle2382 t1_jaeo186 wrote
Reply to When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
Hmm, now?
“Reskilling” is only really possible before one's mid-30s, imo.
Surur t1_jaenmas wrote
Reply to comment by Liberty2012 in Is the intelligence paradox resolvable? by Liberty2012
I believe the idea is that every action the AI takes would be to further its goal, which means the goal will automatically be preserved. But of course, in reality, every action the AI takes is to increase its reward, and one way to do that is to overwrite its terminal goal with an easier one.
Surur t1_jaen1h5 wrote
Reply to comment by Liberty2012 in Is the intelligence paradox resolvable? by Liberty2012
> It doesn't take into account though our potential inability to evaluate the state of the AGI.
I think the idea would be that the values we teach the AI at the stage that is under our control will carry forward when it is no longer, much like we teach values to our children which we hope they will exhibit as adults.
I guess if we make sticking to human values the terminal goal we will get goal preservation even as intelligence increases.
LastInALongChain t1_jaemp8w wrote
Reply to comment by just-a-dreamer- in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
They probably shouldn't. At this point they should do entrepreneurship.
RabidHexley t1_jaemizx wrote
Reply to comment by turnip_burrito in Is style the next revolution? by nitebear
I think this would definitely be the case. We already like handcrafted items even though machines can make them just as well or significantly better. It's a means of connecting with other people and the world around us, and AI or machines being able to do it as well doesn't replace that dynamic.
[deleted] t1_jaemg6a wrote
Reply to Is the intelligence paradox resolvable? by Liberty2012
[deleted]
RabidHexley t1_jaemcgf wrote
Reply to comment by V_Shtrum in Is style the next revolution? by nitebear
People still act & work in the absence of a need to work as well, in the current world (i.e. people who can afford to retire early). People also take on additional tasks, hobbies, and trades in their lives that have no practical benefit.
Gardening, musical instruments, hiking, fan fiction, all manner of crafts. Most hobbies can take a lot of work and don't have a practical return. An AI (or a supermarket, amazon, midi software, etc.) being able to do something for you doesn't replace the desire to do and experience things yourself.
Many people's actual jobs already don't serve any practical function outside of the narrow scope of something like a corporate structure. Middle manager, bureaucrat, many accounting roles, and all of the people serving in support positions for these roles. Completely divorced from any fruit of labor besides a paycheck.
[deleted] t1_jaemc7n wrote
Reply to comment by Mino8907 in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
[deleted]
Surur t1_jaem8nr wrote
Reply to comment by Liberty2012 in Is the intelligence paradox resolvable? by Liberty2012
It is interesting to me that
a) it's possible to teach an LLM to be honest when we catch it in a lie.
b) if we ever get to the point where we cannot detect a lie (e.g. with novel information), the AI is incentivised to lie every time.
Liberty2012 OP t1_jael9bs wrote
Reply to comment by marvinthedog in Is the intelligence paradox resolvable? by Liberty2012
Because a terminal goal is just a concept we made up. It is just the premise for a proposed theory, and it is essentially why the whole containment idea is such a complex concern.
If a terminal goal were a construct that already existed in the context of a sentient AI, then this would already be a partially solved problem. Yes, you could still have the paperclip scenario, but it would just be a matter of having the right combination of goals. We don't really know how to prevent the AI from changing those goals; it is a concept only.
play_yr_part t1_jaevmye wrote
Reply to When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
next Tuesday probably.