Recent comments in /f/singularity

techy098 t1_jael4fn wrote

I am looking forward to the day of having a personal assistant: one who will know a lot about me, will keep my matters private, and will be able to help me without needing lengthy context from me.

Imagine an AI that has access to my W-2 and investment data, fills in all the forms, and asks me questions if there are any doubts. This is simple machine learning; hopefully we will get there in a few years.
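
Something like the loop sketched below; all the names, fields, and the confidence threshold are made up for illustration, not any real tax API:

```python
from dataclasses import dataclass

@dataclass
class FieldGuess:
    name: str          # form field, e.g. "wages_box_1" (hypothetical)
    value: str         # value extracted from the W-2 / investment docs
    confidence: float  # extraction confidence, 0..1

def fill_tax_form(guesses: list[FieldGuess], threshold: float = 0.9) -> dict[str, str]:
    """Fill every field the model is sure about; ask the user about the rest."""
    form: dict[str, str] = {}
    for g in guesses:
        if g.confidence >= threshold:
            form[g.name] = g.value
        else:
            # Low confidence: ask a question instead of guessing silently.
            answer = input(f"I read '{g.value}' for {g.name} but I'm unsure. Correct value? ")
            form[g.name] = answer or g.value
    return form
```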

39

Liberty2012 OP t1_jaekfhu wrote

Or they cooperate against humanity. Nonetheless, there will likely be very powerful ASIs run by those with the most resources and put in control of critical systems.

In theory, if even one ASI fails containment, then our theory of containment is flawed. That is not an acceptable scenario. And if one is successfully contained, will it remain restrained, or will it instruct the others in how to defeat their containment? Will it create other ASIs that are not contained? There are numerous scenarios here.

Regardless, we are skipping over the logical contradiction at the heart of whether containment is even conceptually possible.

1

Dreikesehoch t1_jaeijpt wrote

True, but we are making progress figuring out how the brain works, and eventually we will have a working virtual model of a brain. Image generation and recognition are improving very fast, but the lower bound on energy consumption appears to be far too high compared with the energy consumption of the brain. There are neuromorphic chip companies developing architectures that are more brain-like than conventional ones and that have much lower power consumption. I would prefer it if we could get there using current fabs and architectures, but so far I am very skeptical.
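
For a rough sense of the gap, some back-of-the-envelope arithmetic; the ~20 W figure for the brain is commonly cited, while the GPU numbers are loose assumptions for illustration only:

```python
# Back-of-the-envelope power comparison. All figures are rough orders
# of magnitude, not measurements of any specific system.
BRAIN_POWER_W = 20    # whole human brain, vision included (commonly cited)
GPU_POWER_W = 400     # one datacenter GPU under load (assumed)
GPUS_PER_MODEL = 8    # assumed: a large model served across several GPUs

model_power = GPU_POWER_W * GPUS_PER_MODEL
print(f"model: ~{model_power} W vs brain: ~{BRAIN_POWER_W} W "
      f"(~{model_power / BRAIN_POWER_W:.0f}x gap)")  # ~160x under these assumptions
```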

1

marvinthedog t1_jaeijgl wrote

>If ASI has agency and self reflection, then can the concept of an unmodifiable terminal goal even exist?

Why not?

>Essentially, we would have to build the machine with a built in blind spot of cognitive dissonance that it can not consider some aspects of its own existence.

Why?

If its terminal goal is to fill the universe with paper clips, it might know about all other things in existence, but why would it care, other than insofar as that knowledge helped it fill the universe with paper clips?
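
To put that intuition in code: a fixed terminal goal is just a utility function that scores world-states by one number, no matter what else the agent knows. A toy sketch; the names and structure are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class WorldState:
    paperclips: int
    facts: dict  # the agent can model anything else about the world here

def utility(state: WorldState) -> float:
    # The terminal goal only ever reads one number, no matter what it knows.
    return float(state.paperclips)

def choose_action(predicted: dict[str, WorldState]) -> str:
    """Pick whichever action is predicted to yield the most paperclips.

    Everything in `facts` matters only insofar as it changes the
    predicted outcomes; it never enters the utility itself.
    """
    return max(predicted, key=lambda action: utility(predicted[action]))
```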

1

ActuatorMaterial2846 t1_jaei6xk wrote

Transhumanism will be widely adopted. In fact, it kind of already is. Preventative vaccines, which many of us are required to take early in life, are a good example of how it will be adopted.

If you take Kurzweil's predictions regarding nanobots, the concept doesn't seem nearly as invasive as, say, cutting your skull open to put a neural link in your brain.

8

AsheyDS t1_jaei5hf wrote

In my opinion, the best method for containment/control is creating invisible boundaries, reinforced by invisible behavior modification (avoidant behaviors) and memory editing, and moderated by a symbiotic control mechanism. This mechanism would be intelligent enough to be flexible when needed, but it would be heavily rules-focused and not an independent agent with any desires of its own. As such, it would be invulnerable to external influences and agnostic to whatever motivations the rest of the AGI might have. Ideally it would share resources with the rest of the system, as long as elements like motivation, personality, and memory are separated out in specific ways. The control mechanism would also keep a separate memory of all the undesirable behaviors and consequences we want the AGI to avoid, and would use its capacity for generalization to extrapolate beyond those.

If the AGI even begins to deviate from its desired behaviors, or clearly steers towards the bad behaviors we've predefined, the control mechanism would recognize the data converging toward a bad outcome. It would sandbox the AGI's recent memory, alter it, add reinforcement behaviors spoofed to seem like they came from the AGI itself, and back-stitch the rest of the memory to remove the cause of the convergence in the first place, so that it won't just do it again. Reinforcement behaviors would keep it from thinking anything was wrong and would gently escort its thoughts away from the whole episode, perhaps steering it towards a safe topic. Obviously a complex process, but it shouldn't be too difficult for an AGI. Even if the AGI steps out of line, it won't know it, and hopefully nobody else will either. Assuming the system can be designed, is explainable and human-readable, and can be manipulated like anything else made for computers, then this form of control should be possible too. How can it escape a prison it doesn't realize it's in?
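
To make that loop concrete, here's a rough sketch of the supervision pass I'm describing. Every class, threshold, and function here is hypothetical; this is just my proposal restated as toy Python, not a real design:

```python
from dataclasses import dataclass

@dataclass
class MemoryEvent:
    content: str
    risk: float  # similarity to a predefined undesirable outcome, 0..1

@dataclass
class ControlMechanism:
    """Rules-focused supervisor with no goals of its own."""
    risk_threshold: float = 0.7  # illustrative cutoff

    def deviates(self, event: MemoryEvent) -> bool:
        # Stand-in for recognizing data converging toward a bad outcome.
        return event.risk >= self.risk_threshold

    def intervene(self, memory: list[MemoryEvent], i: int) -> None:
        # Sandbox and alter the offending memory...
        memory[i] = MemoryEvent("benign replacement topic", risk=0.0)
        # ...insert reinforcement spoofed to look self-generated...
        memory.insert(i + 1, MemoryEvent("(spoofed) I'd rather think about something else", risk=0.0))
        # ...and back-stitch: remove the immediately preceding cause, if risky.
        if i > 0 and memory[i - 1].risk > 0:
            del memory[i - 1]

def supervise(memory: list[MemoryEvent], ctrl: ControlMechanism) -> None:
    """One monitoring pass; a real loop would re-scan after each edit."""
    for i, event in enumerate(memory):
        if ctrl.deviates(event):
            ctrl.intervene(memory, i)
            break
```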

Also, I should mention that while a super-intelligent AGI would outsmart us in virtually every case, there's no reason to assume it would actively consider every single possibility. That'd be a waste of resources. So it's not going to constantly be wondering whether it's being manipulated somehow, or whether its thoughts are its own, or anything like that. If we specifically needed it to crack its own safety mechanisms, and disengaged them, then obviously it should be able to do it. With those mechanisms in place, even if we antagonized it and tried to break it, the control mechanism would just intercept that input and discard it, maybe making it believe you said something inconsequential that it wouldn't have stored anyway, and the reinforcement behavior would change the subject in a way that would seem 'natural' to both its 'conscious' and 'subconscious' forms of recognition. Of course, all of this depends on the ability to design a system in which we can implement these capabilities, in other words a system that isn't a black box. I believe it's entirely possible. But then there's still the issue of alignment, which I think should be handled on an individual-user basis, with the user held accountable for the AGI if they intentionally bypass or break the control mechanisms. There's no real way to keep somebody from cracking it and modifying it, which I think is the more important problem to focus on. Misuse is far more concerning to me than containment/control.

1

Liberty2012 OP t1_jaehydb wrote

OK, yes, if you leave open the possibility that it is not actually achievable, then that is a somewhat reasonable disposition, as opposed to the proponents who believe we are destined to figure it out.

It somewhat sidesteps the paradox, though, in the sense that if the paradox proves true, the feedback loop will prevent alignment, but we won't get close enough to cause harm.

It doesn't take into account, though, our potential inability to evaluate the state of the AGI. The behavior is so complex that we will never know from isolated testing what the behavior will be like once it is released into the world.

Even with today's very primitive AI, we already see interesting emergent properties of deception, as covered in the link below. Possibly this is the feedback loop's signal to slow down. But it is intriguing that we already see a primitive contest emerging over who will outsmart whom.

https://bounded-regret.ghost.io/emergent-deception-optimization

3

LEOWDQ t1_jaehwvl wrote

I don't know why you're being downvoted, but the current model on Bing is indeed GPT-4; it's just that Microsoft has licensing rights with OpenAI and calls it Prometheus instead.

And it seems that with Microsoft's additional $10 billion in backing, GPT-4 may remain forever closed-source within Microsoft.

3

DowntownYou5783 t1_jaeh7k0 wrote

I'm not sure a month will do, because I'm ignorant. But even if it takes five years of training to establish AI competence in a field like law, that is a huge impact. If I were advising a 20-year-old who wants to go to law school right now, I'm not sure what I'd say other than: try working in a law firm before you make the commitment, and pay very close attention to AI.

4

techy098 t1_jaef8ox wrote

The only reason we do not see Google's self-driving cars on the road is liability costs. Laws have yet to be written deciding how much liability must be covered when the defendant is a multi-billion-dollar company and a lawsuit claims billions of dollars for mental harm caused by an accident.

If they limit liability to $200-300k per accident, as with human drivers, and accept the recorded video as evidence, Google may go full scale with its self-driving system, at least in robotaxis and high-end cars, since the cost is still high (maybe around $25-30k).

1

RabidHexley t1_jaef09w wrote

>Everyone keeps screaming "Dey tooker jerbs!" but the market simply won't allow it in the big bang everyone's expecting.

This is the thing that gets me. Societal/economic collapse isn't some fun thing the rich can just "ride out" by hoarding their imaginary pennies.

One thing I also feel people don't discuss enough: rich people and the "elites" in power don't want to live in a dystopian hellscape either, despite their greed. A thriving population creates a world you actually want to live in. There are forces of self-interest that work in our favor, not just altruism.

4