Recent comments

AdditionalPizza OP t1_jegl6jr wrote

Well, we know for a fact that the version of LaMDA Bard uses is not based on the best model they have. Which is why I'm asking the question: what's the point of releasing Bard as it is? Pichai recently even reiterated that Bard is weak and not even close to their better models.

It just doesn't make sense. Google is definitely not further behind in general; every preview they have given has been exceptional except Bard. There's no way Google shows off PaLM-E and then winds up like Blockbuster.

Besides, Google is so fucking massive, I don't think companies that large can plummet.

0

MassiveWasabi t1_jegl538 wrote

Just for reference, this paper showed why the safety testing was actually pretty important. The original GPT-4 would literally answer any question with very useful solutions.

People would definitely be able to do some heinous shit if they just released GPT-4 without any safety training. Not just political/ethical stuff, but literally asking how to kill the most people for cheap and getting a good answer, or where to get black market guns and explosives and being given the exact dark web sites to buy from. Sure, you could technically figure these things out yourself, but this makes it so much more accessible for the people who might actually want to commit atrocities.

Also consider that OpenAI would actually be forced to pause AI advancement if people started freaking out because some terrible crime was linked to GPT-4's instructions. Look at the most high-profile crimes in America (like 9/11) and how our entire legislation changed because of them. I'm not saying you could literally do that kind of thing with GPT-4, but you can see what I'm getting at. So we would actually be waiting longer for more advanced AI like GPT-5.

I definitely don’t want a “pause” on anything and I’m sure it won’t happen. But the alignment thing will make or break OpenAI’s ability to do this work unhindered, and they know it.

10