Recent comments in /f/singularity

turnip_burrito t1_jd5ud7f wrote

Here's what we'll do imo:

Just give it some set of morals (Western democratic egalitarian, most likely). The philosophical considerations will eventually all conclude "well, we have to do something," and then they'll just give it morals that seem "good enough". Given the people developing the AI, it makes sense that it will adhere to their views.

4

Last_Jury5098 t1_jd5qxyn wrote

Nice blog, thx for posting it!

"Do you need to have a sense of morality in order to see inequity, or would a purely rational AI also see it?"

The AI will see it. The question is whether it will see it as a problem.

A rational AI could see it as a problem, but this depends on the main goals that the system tries to achieve.

For example, it could conclude that the world would reach higher economic output if inequity were lower (or higher).

And then you get into the alignment problem. Maximizing economic output can't be the only objective; we have to make sure it won't kill us in the process, and so on.

And then you get into the situation where the AI is given a set of goals and a set of restrictions: a set of different parameters reflecting a wide range of issues that are important to humans, with the system instructed not to cross those boundaries. What a rational AI will conclude about inequality, based on those goals and restrictions, is impossible to predict. The only way to find out is to run it and see what it tells us.

A sense of morality could maybe be coded into the AI as part of this set of restrictions. We can feed it human morals, but those morals are in the end arbitrary. And what the AI will do when one moral consideration conflicts with another is again difficult to predict.

This isn't really what we want from AI either, I think. We want it to come to the "right" conclusion by itself, without being led to the "right" conclusion artificially and arbitrarily.

In an ideal situation we want to feed it as few rules as possible, because every additional rule makes the system more complicated and unpredictable by creating tension between different rules and objectives. We then have to feed it priorities, or create a system that allows it to determine priority, which in the end is arbitrary again.
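Just to make that "goals plus restrictions plus priorities" idea concrete, here is a toy sketch; every rule, priority number, weight, and action in it is invented for illustration and isn't taken from any real alignment system. It mostly shows where the arbitrariness lives: in the priorities someone has to pick.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    priority: int                       # lower number = more important restriction (arbitrary choice)
    violated: Callable[[dict], bool]    # returns True if a candidate action breaks the rule

def choose_action(candidates, rules, objective):
    """Pick the candidate with the best objective score minus a penalty for
    broken rules; the penalty weights come straight from the priorities we
    fed in, which is exactly where the arbitrariness creeps in."""
    def penalty(action):
        return sum(1.0 / r.priority for r in rules if r.violated(action))
    return max(candidates, key=lambda a: objective(a) - penalty(a))

# Hypothetical rules and actions, purely for illustration.
rules = [
    Rule("do_not_harm_humans", priority=1, violated=lambda a: a["harm"] > 0),
    Rule("reduce_inequity",    priority=2, violated=lambda a: a["gini_delta"] > 0),
]
actions = [
    {"name": "maximize_output", "output": 10, "gini_delta": 0.10, "harm": 0},
    {"name": "balanced_growth", "output": 7,  "gini_delta": -0.05, "harm": 0},
]
best = choose_action(actions, rules, objective=lambda a: a["output"])
print(best["name"])  # "maximize_output" wins despite breaking the inequity rule
```

Change the priorities or the penalty formula and the "rational" conclusion flips, which is the point.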

There is one hypothetical example I thought of that is very hard for AI to solve. It gets down to the core of the problem.

We have a self-driving car. The car recognizes that a crash is inevitable and it has two options: option one leads to severe harm for the single driver of the car, and option two leads to severe harm for two bystanders. How do we get AI to ever choose between those two options?

And those two options are what the alignment problem comes down to in the end. Even an AI that has nothing but the benefit of humanity as a goal will have to make choices between the interests of individual humans, or groups of humans.

This is an arbitrary choice for humans, but how can AI make such an arbitrary choice? The only way for AI to solve this by itself is to give it certain goals, which brings me back to the start of this post.
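For what it's worth, here is a toy sketch of that crash dilemma; the scenario, the harm numbers, and the weights are all made up. Whatever the code ends up doing, somebody has already hard-coded the valuation that decides between the driver and the bystanders.

```python
def pick_crash_option(options, occupant_weight=1.0, bystander_weight=1.0):
    """Return the option with the lowest weighted expected harm.
    Whoever chose occupant_weight and bystander_weight has already
    made the moral decision for the machine."""
    def cost(opt):
        return (occupant_weight * opt["occupants_harmed"]
                + bystander_weight * opt["bystanders_harmed"])
    return min(options, key=cost)

# Invented scenario matching the example above.
options = [
    {"name": "swerve",         "occupants_harmed": 1, "bystanders_harmed": 0},
    {"name": "stay_on_course", "occupants_harmed": 0, "bystanders_harmed": 2},
]
print(pick_crash_option(options)["name"])                       # swerve
print(pick_crash_option(options, occupant_weight=3.0)["name"])  # stay_on_course
```

The answer flips with the weights, and nothing inside the system can tell you which weights are "right".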

14

ActuatorMaterial2846 t1_jd5oap8 wrote

I think fast, adaptive, unaligned. I think OpenAI's choice to go for-profit shows a level of hubris among the creators in the sector.

It just seems so arrogant to close their research off and then spout some pseudo-intellectual drivel about alignment and the human condition in order to justify it, as if only they can solve the mystery.

If it is to be human-aligned, it needs to be open, where academics, intellectuals, and the general public can see the direction it's heading in, not a small group of technocrats who think they know best for society.

5

NoidoDev t1_jd5k8qw wrote

Anime:

Combined tags female+robot: https://www.anime-planet.com/characters/all?gender_id=2&include_tags=212

Thread with a list of shows: https://alogs.space/robowaifu/res/18711.html

Most relevant, popular, and high quality, if you're not just into the good-robot-girl trope: "Blame!" and "Vivy: Fluorite Eye's Song"

Non-Anime:

  • Terminator SCC
  • Raised by Wolves

0

A_Human_Rambler t1_jd5ikt2 wrote

I really like your approach.

I think it will land in the middle range of each, leaning towards slow, adaptive, and aligned.

The biggest issue I see is an antagonistic arms race between nations. As long as governments can create enough collaboration for adaptive policies, the AI should remain aligned.

3

bobbib14 t1_jd5h5m2 wrote

I think none of the billionaires in the United States actually run their companies anymore. Leave the CEOs in charge. Leave capitalism. Leave them all a few billion, fine. But redistribute the excess: invest in infrastructure, climate, education. Don't need to go full commie. I am sure “good” AI could find a balance better than me.

18

Spreadwarnotlove t1_jd5gemh wrote

Not in this instance. I mean, seriously. You know what happens when you take companies from the people who built them and give them to randos? The same thing that happened every time it was tried before: the randos run them into the ground since they don't have a clue how to run them.

−17