Recent comments in /f/singularity

RabidHexley t1_jadwxc5 wrote

I'm not trying to actually define utopia. The word is just being used as shorthand for "generally very good outcome for most people," which is possible even in a world of conflicting viewpoints; that's why society exists at all. Linguistic shorthand, not literal.

The actual definition of utopia in the literary sense is unattainable in the real world, yes. But our general wants and needs on a large scale aren't so divorced from each other that a positive outcome for humanity is inconceivable.

7

Exarch_Maxwell t1_jadwm2n wrote

A lot of people tend to forget middle grounds as well. The AI doesn't need to be as good as or better than you to replace you; it just has to make the people next to you productive enough that you are no longer necessary. Adjust for scale and you could have 30% of currently employed people unemployed really soon. How many of those can reskill quickly enough is another story.

Do give examples, though: cognitive labor that cannot be broken down into a series of smaller tasks which can then be automated.

9

Liberty2012 OP t1_jadwbcx wrote

Humans have agency to change their own alignment, which places them in contradictory and hypocritical positions.

Sometimes this is because the nature of our understanding changes. We have no idea how the AI would perceive the world. We may give it an initial alignment of "be good to humans." What if it later comes to an understanding that the directive is invalid because humans are either "bad" or irrelevant? Hence the need for a hard mechanism to ensure alignment is retained.

2

Liberty2012 OP t1_jadvgev wrote

I tend to agree, but there are a lot of researchers moving forward in this endeavor. The question is why. Is there something the rest of us are missing with regard to successful containment?

When I read topics related to safety, the language tends to be abstract. "We hope to achieve ...".

It seems to me that everyone sidesteps the initial logical conflict: proponents are proposing that a lower intelligence is going to "outsmart" a higher intelligence.

1

Liberty2012 OP t1_jadusvu wrote

However, this is just another facet of the problem: defining everything that should fall within the domain of AI control immediately creates conflicting views.

We are not even aligned ourselves. Not everyone will agree to the boundaries of your concept of what is a reasonable "utopia".

0

NotASuicidalRobot t1_jaduoxu wrote

That is reasonable; however, I think another significant factor is the massive improvement in job efficiency. For example, if one artist (just an easy example that I know of) can take on five times the work (including the human communications aspect, since the pure work-crunching aspect is now accelerated), then unless demand somehow increases five times as well, that's a few extra artists out of work.

7

uswhole t1_jadu43n wrote

>Learn how to manage ai because it's still under our control for now. It's working for us, at least.

We can control AI made in America or the West, but we are completely out of touch with the AI developed in China, Russia, etc. They might be 20 or maybe 40 years behind, but I'm willing to bet they will try everything to surpass the ones in the US.

2

Liberty2012 OP t1_jadu2il wrote

Yes, that is also a possibility. However, we would also assume the ASI has access to all human knowledge. Even if we did nothing, it would know our nature and every scenario we have ever imagined about losing control to AI.

With that historical knowledge alone, it could potentially be both defensive and aggressive.

2

Mino8907 t1_jadu1o4 wrote

Well, the first two sentences I can get behind. But having an AI assistant would help almost anyone be useful, so no five years in an apprentice-like position would be required.

My understanding, given how fast AI technologies are advancing, is: why would anyone want to upskill if it costs money, when AI would make that job less profitable or completely unnecessary, since it would be performed by an almost free and speedy AI?

So like many, I don't have the answer, but I wouldn't spend money to upskill. Just my take.

2

Ok_Homework9290 t1_jadtgrb wrote

I've commented something similar in the past on this sub:

I get the impression that this sub believes white-collar work is nothing more than crunching numbers and shuffling papers, and therefore it shouldn't be too hard to automate in the near future.

Knowledge work (in general) is a lot more than that, and anyone who works in a knowledge-based field (or is familiar with one) knows this. Not only do I think you're underestimating the complexity of cognitive labor, I also think you're overestimating (as impressive as AI progress has been the last few years) how fast AI progresses and gets adopted.

AI that's capable of fully replacing what a significant amount of knowledge workers do is still pretty far out, in my humble opinion, given how much human interaction, task variety/diversity, abstract thinking, precision, etc. is involved in much of knowledge work (not to mention legal hurdles, adoption, etc). I strongly suspect a multitude of breakthroughs in AI are needed in order for it to cover the full breadth of any and every white-collar job, as merely scaling up current models to their limits will only fully automate some aspects of knowledge work and many will remain to be solved (again, that's my suspicion, I'm not 100% sure).

Will some of these jobs disappear over, let's say, the next 10 years? 100%. There's no point in denying that, nor in denying that much of the rest of knowledge work will undoubtedly change over the same time span and even more so after that. But I'm pretty confident we're a ways away from it being totally disrupted by AI.

That's just what I think.

9

Liberty2012 OP t1_jadt7dr wrote

Yes, I think you nailed it with this response. That aligns very closely with what I've called the Bias Paradox. Essentially, humanity cannot escape its own flaws through the creation of AI.

We will inevitably end up encoding our own flaws back into the system one way or another. It is like a feedback loop from which we cannot escape.

I believe there is ultimately a very stark contrast between people's visions of what "could be" and the reality of what "will be".

I elaborate more thoroughly here FYI - https://dakara.substack.com/p/ai-the-bias-paradox

4

marvinthedog t1_jadt1wy wrote

>There must be some boundary conditions for behaviors which it is not allowed to cross.

That is not what I have heard/remembered from reading about the alignment problem. I don't see why a superintelligence that is properly aligned to our values would need any boundaries.

2