Recent comments in /f/singularity

techhouseliving t1_jadsovw wrote

We're there; it's just not evenly distributed yet. The next generation, a few months out, will have professor-level intelligence at supercomputer speed, the information of the entire Internet, and the ability to create anything any artist or technician can, in seconds. In parallel, even. Self-improving its own code.

Learn how to manage AI, because it's still under our control for now. It's working for us, at least.

But we're super close to the singularity if you ask me, and I work in this space. The speed is dizzying.

26

RabidHexley t1_jadsn49 wrote

Utopia in this context doesn't mean a "literary" utopia, but the idea of a world where we've solved most or all of the largest existential problems causing struggle and suffering for humanity as a whole (energy scarcity, climate catastrophe, resource distribution, slave labor, etc.), not all possible individual struggle.

That doesn't mean we've created a literally perfect world for everyone, but an "effective" utopia.

7

DragonForg t1_jadsgpo wrote

Why do anything if it will just go away because you have to work/do something you don't want to do?

Every day when I have free time, I get mad because I know I will have to go back to work in a few hours. And I know that, being in grad school, it will never change: five years of this, with only a small number of breaks. Plus the added stress of needing to be better than everyone just to get by, when everyone else is smarter than you to begin with.

It's why I am so interested in AI: it means I don't need to be smart; I can just know and understand things from the get-go. I can relax and live my life but still keep my passion for chemistry (my focus), making new discoveries without working or feeling obligated to work ~60 hours.

1

JVM_ t1_jads1v0 wrote

I don't think we can out-think the singularity. Just as a single human can't out-spin a ceiling fan, the singularity will be fast enough to be beyond human containment attempts.

What happens next, though? I guess we can try to build 'friendly' AIs that tend toward not ending society, but I don't think true containment can happen.

6

phaedrux_pharo t1_jadrn73 wrote

>By what mechanism do we think that will be achievable?

By "correctly" setting up the basic incentives, and/or integration with biological human substrates. Some ambiguity is unavoidable, some risk is unavoidable. One way to approach the issue is from the opposite direction:

What do we not do? Well, let's not create systems whose goals are to deliberately extinguish life on earth. Let's not create torture bots, let's not create systems that are "obviously" misaligned.

Unfortunately I'm afraid we've already done so. It's a tough problem.

The only solution I'm completely on board with is everyone ceding total control to my particular set of ethics and allowing me to become a singular bio-ASI god-king, but that seems unlikely.

Ultimately I doubt the alarms being raised by alignment folks are going to have much effect. Entities with a monopoly on violence are existentially committed to those monopolies, and I suspect they will be the ones to instantiate some of the first ASIs - with obvious goals in mind. So the question of alignment is kind of a red herring to me, since purposefully unaligned systems will probably be developed first anyway.

9

Liberty2012 OP t1_jadrb81 wrote

I don't think utopia is a possible outcome; it is a paradox in itself. Essentially, all utopias become someone else's dystopia.

The only conceivable utopia is one designed just for you: placed into your own virtual utopia built around your own interests. However, even this is paradoxically both a utopia and a prison, as in welcome to the Matrix.

2

TFenrir t1_jadrb3u wrote

I think if we can get a really good, probably sparsely activated, multimodal model that can do continual learning with transfer - a la Pathways - many white-collar jobs are done.
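
To make "sparsely activated" concrete, here's a rough sketch of top-k expert routing, the mechanism behind mixture-of-experts models in the Pathways lineage. All the names and sizes here are illustrative assumptions, not any real model's API:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2  # toy sizes, purely illustrative

# Each "expert" is just a small feed-forward weight matrix here.
experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.1

def sparse_layer(x):
    """Route a token vector x through only top_k of n_experts experts."""
    logits = x @ router                    # score every expert
    chosen = np.argsort(logits)[-top_k:]   # keep the k highest-scoring
    weights = np.exp(logits[chosen])
    weights /= weights.sum()               # softmax over the chosen experts only
    # The other n_experts - top_k experts never run, so compute per token
    # grows much more slowly than total parameter count.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

print(sparse_layer(rng.standard_normal(d_model)).shape)  # (16,)
```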

Any system that has continual learning would, I think, also have short/medium/whatever-term memory, and a context window that can handle enough at once to rival what we can handle at any given time.

But the thing is, I think that unlike biological systems, there are many different inefficient ways to get us there as well. A very dense model that is big enough, with a better fine-tuning process, might be all we need. Or maybe the bottleneck right now is really context, since in-context learning is quite powerful - what if we suddenly have an efficiency breakthrough with a Transformer 2.0 that allows for context windows of 1 million tokens?

Also, maybe we don't need multimodality per se; maybe a system that is trained on pixels would cover all bases.

7

3_Thumbs_Up t1_jadq63e wrote

There is an infinite multitude of ways history might play out, but they're not all equally probable.

The thing about the singularity is that its probability distribution over possible futures is much more polarized than humans are used to. Once you optimize hard enough for any utility curve, you get either complete utopia or complete dystopia the vast majority of the time. That doesn't mean other futures aren't in the probability distribution.
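
A toy numerical sketch of that polarization effect, with all outcome labels, utilities, and pressure values invented purely for illustration: model futures as outcomes an optimizer samples in proportion to softmax(beta * utility), and watch the middle vanish as beta grows.

```python
import numpy as np

outcomes = ["dystopia", "bad", "muddle", "good", "utopia"]  # made-up scale
utility = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])           # made-up utilities

def future_distribution(u, beta):
    """Probability of each future under optimization pressure beta."""
    p = np.exp(beta * u - np.max(beta * u))  # numerically stable softmax
    return p / p.sum()

for beta in (0.1, 1.0, 10.0):
    probs = np.round(future_distribution(utility, beta), 3)
    print(f"beta={beta}:", dict(zip(outcomes, probs)))

# As beta grows, nearly all probability mass collapses onto one extreme.
# Flip the utility's sign (a misaligned objective) and it collapses onto
# the other extreme; middling futures disappear either way.
```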

12

Liberty2012 OP t1_jadq1t1 wrote

Well, the cage is simply a metaphor. There must be some boundary conditions on its behavior which it is not allowed to cross.

Edit: I explain alignment in further detail in the original article. Mods removed it from the original post, but hopefully it is OK to link it in a comment. It was a bit much to put all of it in a post, but there was a lot of thought exploration on the topic.

https://dakara.substack.com/p/ai-singularity-the-hubris-trap

3

Liberty2012 OP t1_jadnx3q wrote

Certainly there is a spectrum of behavior that we would deem allowable or not allowable. However, that in itself is an ambiguous set of rules or heuristics with no clear boundary, and it presents the risk of leaks of control due to poorly defined limits.

However, whatever behavior we place within the unallowable must be protected such that it cannot be self-modified by the AGI. By what mechanism do we think that will be achievable?

4