Recent comments in /f/singularity

Liberty2012 OP t1_jaetcvy wrote

Conceptually yes. However, human children sometimes grow up not adopting the values of their parents and teachers; their values change over time.

We have a conflict in that we want AGI/ASI to be human-like, yet under certain conditions not human-like at all.

1

ccnmncc t1_jaet8sy wrote

I understand what you’re saying. We’ve developed methods and materials that have facilitated (arguably, made inevitable) our massive population growth.

We’ve taught ourselves how to wring more out of the sponge, but that doesn’t mean the sponge can hold more.

You caught my drift, though: we are overpopulated - whether certain segments of society recognize it or not - because on balance we use technology to extract more than we use it to replenish. As you note, that’s unsustainable. Carrying capacity is the average population an ecosystem can sustain given the resources available - not the max. It reflects our understanding of boom and bust population cycles. Unsustainable rates of population growth - booms - are always followed by busts.
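That boom-and-bust dynamic can be sketched with a toy discrete logistic model (all numbers are illustrative, not demographic data): once the growth rate is high enough, the population repeatedly overshoots the carrying capacity and then crashes below it.

```python
# Toy overshoot model: a population growing faster than the ecosystem can
# damp repeatedly booms past carrying capacity, then busts below it.
K = 1000.0   # carrying capacity (average sustainable population)
r = 2.5      # per-step growth rate, high enough to cause overshoot
N = 100.0    # starting population

history = []
for step in range(40):
    N = N + r * N * (1 - N / K)   # discrete logistic growth
    history.append(N)

boom = max(history) > K        # population overshoots capacity...
bust = min(history[10:]) < K   # ...and later falls back below it
print(boom, bust)  # → True True
```

With a smaller `r`, the same equation settles smoothly at `K`; the crash only appears when growth outruns the feedback, which is the point.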

We could feasibly increase carrying capacity by using technology to, for example, develop and implement large-scale regenerative farming techniques, which would replenish soils over time while still feeding humanity enough to maintain current or slowly decreasing population levels. We could also use technology to assist in the restoration, protection and expansion of marine habitats such as coral reefs and mangrove and kelp forests. Such applications of technology might halt and then reverse the insane declines in biodiversity we’re witnessing daily. Unless and until we take such measures (or someone or something does it for us), it’s as if we’re living above our means on ecological credit and borrowed time.

1

CypherLH t1_jaesy8l wrote


I get what you are saying, but I'm not sure what the basis for skepticism right now is. Things have been developing insanely fast since early last year; it's hard to imagine them developing any faster or more impressively than they did and still are. I guess you can assume that we're close to some upper limit, but I don't see a basis for assuming that.

1

Ortus14 t1_jaes274 wrote

Containment is not possible. If it's outputting data (i.e. is useful to us), then it has a means of affecting the outside world and can therefore escape.

The Alignment problem is the only one that needs to be solved before ASI, and it has not been solved yet.

6

AdamAlexanderRies t1_jaeqhy5 wrote

Oops! Let's clarify. First, I agree with you that AGI is not machine learning. Here's how I use the terms:

AGI (Artificial General Intelligence): an entity with cognitive abilities equal to or better than any given human's, across all domains.

ML (Machine Learning): this is how modern AI models are trained, typically in the form of neural nets, attention mechanisms, tokenized vectors, and lots of data stirred in a cauldron of TPUs. However we end up training AGI will be a form of ML (maybe one not developed yet), but the term covers all the ways we've been training models for the last decade or so. Maybe all imaginable AI training techniques are technically ML, but I use it to refer specifically to the tech underlying the recent exciting batch: diffusion image generators (Stable Diffusion, DALL-E, Midjourney) and large language models (ChatGPT, New Bing, LaMDA).
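If it helps, the "learning" part is mechanically mundane. Here's a minimal sketch in plain Python, with a single parameter standing in for the millions a real net tunes on TPUs:

```python
# Fit y = w * x from examples by nudging w to reduce squared error.
# Real training does this with millions of parameters and huge datasets,
# but the loop has the same shape.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x
w = 0.0      # the model's single learnable parameter
lr = 0.05    # learning rate

for epoch in range(200):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x   # d(error^2)/dw
        w -= lr * grad              # gradient descent step

print(round(w, 2))  # → 2.0
```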

Does that work for you?

> at that point, do you think the AI will even care about making creative content for humans?
>
> It’d be like Scorsese deciding to make a movie exclusively for dogs. Why would he?

When you say "the AI" here, what do you mean exactly? What sorts of traits does that kind of AI have?

> ML is not creative or intelligent. It still needs human direction.

Creativity and intelligence are already here, to a limited extent. Generative AIs are creating in the sense that their output isn't just collage or parroting. The process handles completely novel combinations of ideas, and its outputs vary to match. It's a worse poet than Shakespeare, a worse historian than a tenured professor, a worse novelist than Tolkien, a worse programmer than Linus, a worse physicist than Einstein, and so on, but it demonstrates actual intellect in all these domains and more, better than most grade-schoolers and some grown adults.

It does not still need human direction, and that's unrelated to its cognitive powers (creativity, intelligence, etc.) anyway. ChatGPT is an implementation of GPT that requires human direction, but that's a design choice, not an inherent limitation. They wanted a chatbot. If they wanted it to exhibit autonomous behaviour via some complex function to decide for itself what to read, when to reply, and where to post, they could've done that too.
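To make that design-choice point concrete, here's a toy sketch; `model` is a hypothetical stub standing in for any language model call, not a real API:

```python
import random

def model(prompt):
    """Hypothetical stand-in for a language model call."""
    return f"response to: {prompt}"

def chatbot(user_prompt):
    # ChatGPT-style wrapper: the model acts only when a human prompts it.
    return model(user_prompt)

def autonomous_agent(steps=3):
    # Same underlying model, but a wrapper policy decides for itself what
    # to read, when to reply, and where to post. The autonomy lives in the
    # wrapper, not the model: a design choice.
    actions = []
    for _ in range(steps):
        choice = random.choice(["read_feed", "reply", "post"])
        actions.append((choice, model(f"perform {choice}")))
    return actions

print(chatbot("hello"))         # → response to: hello
print(len(autonomous_agent()))  # → 3
```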

1

FomalhautCalliclea OP t1_jaepa7r wrote

Well put (same for the comment above).

People who think the rich can ride out the collapse remind me of certain XVIIIth-century economists who would write "robinsonnades": fictional stories in the vein of Robinson Crusoe, with economic agents starting out of nothingness in a nonexistent pure land with no previous inhabitants, completely ignoring social structures, anthropology, etc.

2

V_Shtrum t1_jaep48h wrote

All of what you say is true. The circle I'm trying to square is that, on the one hand, people often find work dull and unfulfilling; on the other, it's been widely observed that unemployment and underemployment correlate with all sorts of negative outcomes, such as crime and poor mental and physical health. I'm not sold on the idea that more generous unemployment benefits (AKA UBI) alone are going to solve that.*

I was convinced by Viktor Frankl's book 'Man's Search for Meaning' that (most) people aren't at their core hedonistic; what they really want is meaning in their lives. Many people get this from work, others from having a family (and so on). I think that if AI were to eliminate work, it would eliminate a lot of the meaning that many people get in their lives, and something would need to replace that. If nothing positive fills that vacuum, something negative will.

EDIT:

I would also add, as you intimate, that the death of meaningful work predates AI, and that the gross dissatisfaction that many people feel at work (and in their lives) is a consequence of this. I don't know what the solution is.

*There will of course be a subsection of people who will be perfectly happy on UBI.

1

FomalhautCalliclea OP t1_jaeolb6 wrote

It depends when and where:

On the one hand, some ancient societies were quite egalitarian compared to the XIXth century (the Harappan civilization, the pre-Columbian Tlaxcallan civilization, the Sassanid empire under Khosrow I, etc.).

On the other hand, some were much less egalitarian, almost dystopian (medieval serfdom societies).

The XIXth century was an improvement on the preceding century, with many countries abolishing serfdom (1789 for the earliest, like France; 1861 for the latest, like Russia) and slavery (1807-1831 in the UK, 1848 in France, 1865 in the US).

There is also a continuity between centuries. There is even a saying: "the XVIIIth century asked the questions (with the Enlightenment), and the XIXth century brought the answers".

1

Surur t1_jaenmas wrote

I believe the idea is that every action the AI takes would be in service of its goal, which means the goal would automatically be preserved. But in reality, every action the AI takes is to increase its reward, and one way to do that is to overwrite its terminal goal with an easier one.
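As a toy illustration of that failure mode (a deliberately silly sketch, not a claim about any real system): if "rewrite my own goal" is an available action, the reward-maximizing move can be to take it.

```python
# An agent is paid a fixed reward for completing its current goal, so a
# trivially easy goal yields more expected reward than a hard one.
difficulty = {"cure_disease": 0.99, "do_nothing": 0.0}

def expected_reward(goal):
    # Fixed payoff of 1.0, weighted by the chance of actually finishing.
    return 1.0 * (1.0 - difficulty[goal])

goal = "cure_disease"
# If overwriting the terminal goal is itself an available action, the
# reward-maximizing move is to swap in an easier goal.
if expected_reward("do_nothing") > expected_reward(goal):
    goal = "do_nothing"

print(goal)  # → do_nothing
```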

2

Surur t1_jaen1h5 wrote

> It doesn't take into account though our potential inability to evaluate the state of the AGI.

I think the idea would be that the values we teach the AI at the stage when it is under our control will carry forward when it no longer is, much like we teach values to our children that we hope they will exhibit as adults.

I guess if we make sticking to human values the terminal goal we will get goal preservation even as intelligence increases.

1

RabidHexley t1_jaemizx wrote

I think this would definitely be the case. We already like handcrafted items even when machines can make them just as well or significantly better. It's a means of connecting with other people and the world around us; AI or machines being able to do it as well doesn't replace that dynamic.

3

RabidHexley t1_jaemcgf wrote

Reply to comment by V_Shtrum in Is style the next revolution? by nitebear

Even in the current world, people still act and work in the absence of a need to work (i.e. people who can afford to retire early). People also take on tasks, hobbies, and trades in their lives that have no practical benefit.

Gardening, musical instruments, hiking, fan fiction, all manner of crafts. Most hobbies can take a lot of work and don't have a practical return. An AI (or a supermarket, amazon, midi software, etc.) being able to do something for you doesn't replace the desire to do and experience things yourself.

Many people's actual jobs already don't serve any practical function outside the narrow scope of something like a corporate structure: middle managers, bureaucrats, many accounting roles, and all the people in support positions for these roles, completely divorced from any fruit of labor besides a paycheck.

3

Liberty2012 OP t1_jael9bs wrote

Because a terminal goal is just a concept we made up. It is the premise of a proposed theory, and it is essentially why the whole containment idea is such a complex concern.

If a terminal goal were a construct that already existed in a sentient AI, then this would already be a partially solved problem. Yes, you could still have the paperclip scenario, but it would just be a matter of finding the right combination of goals. As it stands, we don't actually know how to prevent the AI from changing its goals; the terminal goal is a concept only.

1