Recent comments in /f/singularity

Surur t1_jaeezsj wrote

I think RLHF worked really well because the AI is basing its judgement not on a list of rules, but on the nuanced rules it learned itself from human feedback.

As with most AI problems, we can never explicitly encode all the elements that guide our decisions, but using neural networks we are able to black-box it and get a workable system that has in some way captured the essence of our decision-making process.
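To make the point concrete, here is a toy sketch of the idea behind the RLHF reward model: instead of hand-writing rules, a scalar "reward" is learned from pairwise human preferences (Bradley-Terry style). All feature names and data here are hypothetical illustrations, not anything from a real system.

```python
import math

def reward(w, x):
    """Linear reward over hand-made features of a response."""
    return sum(wi * xi for wi, xi in zip(w, x))

def train_reward_model(pairs, dim, lr=0.5, epochs=200):
    """pairs: list of (preferred_features, rejected_features) from human raters."""
    w = [0.0] * dim
    for _ in range(epochs):
        for good, bad in pairs:
            # probability the model currently prefers `good` over `bad`
            p = 1.0 / (1.0 + math.exp(reward(w, bad) - reward(w, good)))
            # gradient step on the log-likelihood of the human label
            for i in range(dim):
                w[i] += lr * (1.0 - p) * (good[i] - bad[i])
    return w

# Hypothetical features: [helpfulness, rudeness].
# Humans preferred helpful, polite answers in these comparisons.
comparisons = [
    ([0.9, 0.1], [0.2, 0.8]),
    ([0.7, 0.0], [0.6, 0.9]),
    ([0.8, 0.2], [0.1, 0.1]),
]
w = train_reward_model(comparisons, dim=2)
polite = reward(w, [0.9, 0.0])
rude = reward(w, [0.9, 0.9])
print(polite > rude)  # the learned reward penalizes rudeness
```

Nobody wrote a "don't be rude" rule; the preference data black-boxes it, which is the point of the comment above.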

2

Liberty2012 OP t1_jaeezez wrote

Thanks, some good points to reason about!

Yes, this is somewhat the concept of evolving AGI in some competitive manner where we play AGIs against each other to compete for better containment.

There are several challenges: we don't really understand intelligence, or at what point an AI becomes self-aware. A self-aware AI could potentially realize that the warden is playing the prisoners against each other, and they could coordinate to deceive the guards, so to speak.

And yes, the complexity of the rules, however they are created, can be very problematic. Containment is really an abstract concept. It is so difficult to define the boundaries and turn them into rules that will not have vulnerabilities.

Then ultimately, if the ASI has agency and is capable of self-reflection, how can we ever know it will not eventually figure out how to jailbreak itself?

2

techy098 t1_jaeehsd wrote

I strongly disagree with this idea. Just because mass layoffs from AI taking over white-collar jobs are not happening now does not mean they won't be a reality in 5 years.

Even if AI only starts replacing workers in 7-10 years, a college graduate has to worry about it; otherwise 4-5 years of college and a ton of debt are not going to serve them well.

4

Surur t1_jaedwk5 wrote

Sure, but you are missing the self-correcting element of the statement.

Progress will stall without alignment, so we will automatically not get AGI without alignment.

An AGI with a 1% chance of killing its user is just not a useful AGI, and will never be released.

We have seen this echoed by OpenAI's recent announcement that as they get closer to AGI they will become more careful about their releases.

To put it another way, if we have another AI winter, it will be because we could not figure out alignment.

2

AdditionalPizza OP t1_jaedsf6 wrote

I do think that segment missed a lot of crucial points, and focused on very near term issues that will no doubt be overcome relatively easily.

But the hallucination problem has to be solved, and it needs to happen very soon. Once that is tackled, the train won't stop. I think hallucination will be reduced over the coming months to the degree that it becomes a non-issue in most cases. Google has a lot riding on that.

We also shouldn't underestimate how much more useful a model with internet access will be compared to the current ChatGPT. Awareness of recent events will prove very useful.

6

Liberty2012 OP t1_jaedhb1 wrote

> So I don't know how alignment will take place, but I am pretty sure that it will be a priority.

This is my frustration and concern. Most arguments for how we will achieve success come down to this premise of simply hoping for the best, which doesn't seem an adequate disposition when the cost of getting it wrong is so high.

2

AdditionalPizza OP t1_jaecxy9 wrote

Oh ok, I thought maybe I made a slip in my post somewhere implying that.

But yeah, although we will all adapt very quickly to this upcoming shift in how we access the internet, I think in hindsight it will be one of the big moments we remember for the rest of our lives.

5

datsmamail12 t1_jaecgzf wrote

If LLMs right now can already handle multiple tasks and can pretty much pass the Turing test, I'm guessing it happens by GPT-5. GPT-4 will be announced soon enough, either late 2023 or 2024, and it will be game-changing; by that time Bing AI and Bard will be great additions to the industry. So by 2026-7 we will have GPT-5, and I guess that's when the curve will start to happen. These language models will prove so good at multitasking. Man, the 2020s are going to be WILD! We are witnessing the biggest technological innovations humanity will ever get to see, right now, in front of our eyes.

3

Unfocusedbrain t1_jaebvqz wrote

I agree with you and I apologize if it seemed like I was implying you were giving a deadline for AGI. That was not my intention. I just liked your realistic perspective on AI progress, instead of the “AGI is < 10 years away! Can’t wait!” hype that some people have.

And yes, there will be a huge change on the web soon, similar to the iPhone and social media revolution in 2008. It’s not only Google and Microsoft - many other companies are working on LLM-enhanced search engines. We don’t know how that will affect the world, but I think it will speed up AGI research and make the world even more different than before and after social media & smartphones.

9

DowntownYou5783 t1_jaebugj wrote

What a great and insightful post. I think you are largely on point. Our smart devices are about to get a whole lot smarter. It's not unreasonable to think we could all have something approaching a JARVIS-level intelligence (see Iron Man) in our homes by 2030.

ChatGPT is just the beginning. It tends to hallucinate quite a bit with difficult questions, but it can maintain a conversation better than many humans. And it's willing to be educated and admit mistakes. Later iterations from OpenAI and similar iterations from other sources (i.e. within the next 18 months) are likely to take substantial steps forward.

It's crazy that the larger public is largely unaware of what appears to be happening (although John Oliver's segment on Last Week Tonight will no doubt raise awareness).

11

hapliniste t1_jaebp9r wrote

Alignment will likely be a political issue, not a technological one.

We don't know how an AGI system would work, so we don't know how to solve alignment yet, but it could very well be super simple technologically. A good plan would be to have two versions of the model, with one tasked to validate the actions of the other. This way we could enforce complex rules that we couldn't code ourselves. If the first model thinks the second model's output is not aligned with the values we fed it, it attributes a low score (or high loss) to that training example (and refuses the output if it is in production).

The problem will be the 200-page list of rules we would need to feed the scoring model, and making it fit most people's interests. Also, what if it is good for 90% of humanity but totally screws over the other 10%? Those are the questions we will encounter, and standard democracy might fail to solve them well.
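A minimal sketch of that two-model setup: one model produces an action, a second "scoring" model validates it against a set of values, assigning a penalty usable as a training loss and refusing the output in production. Both models are stubbed with trivial functions here; the names, forbidden list, and threshold are illustrative assumptions, not a real design.

```python
# Hypothetical stand-in for the values fed to the scoring model.
FORBIDDEN = {"delete_backups", "disable_oversight"}

def policy_model(request: str) -> str:
    # Stand-in for the acting model: just echoes the requested action.
    return request

def scoring_model(action: str) -> float:
    # Stand-in for the validator: low score for misaligned actions.
    return 0.0 if action in FORBIDDEN else 1.0

def guarded_step(request: str, threshold: float = 0.5):
    """In production: refuse outputs the scorer rates below threshold.
    In training: the (1 - score) term could be added to the loss."""
    action = policy_model(request)
    score = scoring_model(action)
    loss_penalty = 1.0 - score
    if score < threshold:
        return ("refused", loss_penalty)
    return (action, loss_penalty)

print(guarded_step("summarize_report"))  # -> ('summarize_report', 0.0)
print(guarded_step("delete_backups"))    # -> ('refused', 1.0)
```

The hard part, as noted above, is not this plumbing but deciding what goes in the scorer's value set and who gets to decide it.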

7

just-a-dreamer- OP t1_jaebcg9 wrote

That's the problem with the concept of a post-scarcity society. Who decides who gets to live in a house and who gets the four-wall apartment?

Right now, money determines where and how you live. And money is tied to employment. Money is what makes people show up at work and do their job.

It will be interesting to see how we allocate scarce resources in the future. Not everybody can have a house, fewer can have a house at the beach, and even fewer a mansion.

4

claushauler t1_jaea5ku wrote

You forgot firearms and ammunition. Lots and lots of ammunition. But as always the truly rich - the ones who are creating this scenario - are already several steps ahead of that.

https://www.theguardian.com/news/2022/sep/04/super-rich-prepper-bunkers-apocalypse-survival-richest-rushkoff

3

Surur t1_jaea43q wrote

I have a naive position that AGI is only useful when aligned, and that alignment will happen automatically as part of the development process.

So even China won't build an AGI that will destroy the world, as such an AGI can't be trusted to follow their orders or not turn against them.

So I don't know how alignment will take place, but I am pretty sure that it will be a priority.

1

phriot t1_jae9xnr wrote

Not exactly what you asked, but as I sit here today, I feel like my ideal life would look something like: 2 days a week doing science of some kind, either academia or industry; 2 days a week working for a charity, likely either based around homelessness, nutrition, or education; no more than a 10 minute commute for either thing; 3 days a week, plus all the time gained from not commuting for spending time with my family, exercising, and doing hobbies.

(FWIW, I have a spouse and a house. One day we'll have kids. I'm not really in a place where I'd be satisfied with 4 walls, a UBI, and a subscription to Nature anymore.)

3

AdditionalPizza OP t1_jae9nsz wrote

Today it's just an LLM. When the next generation drops, and it's widely implemented across several products and industries, I think we will have a very different definition of "cool and useful." I can't say what all of that will be, but I do believe it starts very soon. Sooner than anyone is comfortable saying out loud. A month, maybe 2? Then from there it's like dominoes, companies adopting an ultra useful AI into their products.

8

SFTExP t1_jae9kq9 wrote

There’s no point in competing with the evolution of AI. What people should focus their energy on is making government and society adapt by giving every individual a healthy, fruitful, and satisfying life experience, whether through UBI or some basic form of giving everyone the baseline of necessities. Something needs to be done to prevent a social and economic collapse. The same ole politics and economics aren’t going to bail us out of a post-singularity transformation.

18

AdditionalPizza OP t1_jae8yaq wrote

>I would wager within half a decade a multi-model proto-agi will be available that could do all the cognitive tasks a human can do at least at acceptable (but not necessarily extraordinary) levels. Not within a year, thats bonkers.

Did I imply that in my post somewhere? I don't mean anything that capable within the year, I'm saying a drastic change in the average person's life caused directly by the impact AI will have this year when the "next gen" is in-your-face on search engines and widely used instead of our primitive search today.

10