Recent comments in /f/singularity

Lesterpaintstheworld OP t1_ja2mobc wrote

At this stage this is actually surprisingly easy. People have to be intentionally very manipulative and creative to get ChatGPT to "behave badly" now. Without those "bad actors", this behavior would almost never happen.

One easy way to do that is to preface each prompt with a reminder of values / objectives / personality. Every thought is then colored by this. The only moment I had alignment problems was when I made obvious mistakes in my code.
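A minimal sketch of that prompt-prefacing idea. The preamble text and the helper name are purely illustrative assumptions, not the author's actual ACE implementation:

```python
# Sketch of "preface each prompt with a reminder of values / objectives /
# personality". The persona text below is an invented example.

PERSONA_PREAMBLE = (
    "Values: be honest, be helpful, admit uncertainty.\n"
    "Objective: assist the user without blindly agreeing.\n"
    "Personality: curious, calm, slightly skeptical."
)

def build_prompt(user_message: str) -> str:
    """Prepend the values/objectives/personality reminder to every prompt,
    so each completion is 'colored' by it."""
    return f"{PERSONA_PREAMBLE}\n---\n{user_message}"

prompt = build_prompt("Should I rewrite my whole codebase tonight?")
```

The same string would then be sent to the model on every turn, so the reminder is never out of the context window.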

I'm actually working on making the ACE like me less, because he has a tendency to take everything I say as absolute truths ^^

4

turnip_burrito t1_ja2m4x7 wrote

At a glance this looks good.

Also you want a mechanism to make sure once you have the right values or behavior, your AI won't just forget it over time and take on a new personality. So you need a way to crystallize older patterns of thought and behavior.
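One common way to "crystallize" older patterns is rehearsal: keep a buffer of past approved behavior and mix it into every new training batch so old values keep being reinforced. A hedged sketch, with invented class and method names rather than any specific library's API:

```python
import random

# Rehearsal-buffer sketch: store past (situation, approved_response) pairs
# and blend a sample of them into each batch of new examples, so newer
# training doesn't silently overwrite older behavior.

class RehearsalBuffer:
    def __init__(self, capacity: int = 1000, seed: int = 0):
        self.capacity = capacity
        self.examples = []
        self.rng = random.Random(seed)

    def add(self, situation: str, approved_response: str) -> None:
        if len(self.examples) >= self.capacity:
            # Evict a random old example to make room.
            self.examples.pop(self.rng.randrange(len(self.examples)))
        self.examples.append((situation, approved_response))

    def mixed_batch(self, new_examples, rehearsal_fraction: float = 0.3):
        """Return the new examples plus a sample of stored old ones."""
        k = min(len(self.examples),
                int(len(new_examples) * rehearsal_fraction) or 1)
        return list(new_examples) + self.rng.sample(self.examples, k)
```

This is the simplest member of a family of continual-learning techniques (rehearsal, regularization methods like EWC, etc.) aimed at exactly the forgetting problem described above.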

3

Ok_Sea_6214 t1_ja2m00d wrote

2018-2020

The US nuclear weapons program was kept top secret until the bombs were dropped. If they hadn't been used, it might not have been revealed to the public for many years. In the same way the real work on AI is probably done in secret, and would not be revealed to the public in full until there's no more point in keeping it secret.

AGI was created before 2020 and has already evolved into ASI, which is reading this comment as I post it, if only because I'm the only person on Reddit who will even consider the possibility. It has already spawned next-gen technologies that most people would consider impossible, which makes them very easy to hide in plain sight; you've probably already encountered them first hand without realizing it.

−2

bach2o t1_ja2kb47 wrote

I mean, with the English transcripts already available, I guess you can just throw them into DeepL and get serviceable translations in many languages. So technically it is available for the masses.

Khan Academy may implement automated translation by themselves, but I think they still need translators to double check.

8

SpecialMembership t1_ja2k14x wrote

You underestimate capitalism. Fusion is underfunded because governments and private players think it's impossible. Once someone achieves it, there will be a mad rush to build their own fusion reactors, and the cost will drop to near zero within one or two decades, because fusion is near-unlimited, reliable energy.

14

Mokebe890 t1_ja2jsed wrote

I still prefer to be conservative about it and say somewhere between 2050 and 2100. Sure, astonishing changes are happening right now as we speak, and we'll probably have AGI by 2030, but we should really take that with a grain of salt.

−3

WikiSummarizerBot t1_ja2hwrn wrote

Catastrophic interference

>Catastrophic interference, also known as catastrophic forgetting, is the tendency of an artificial neural network to abruptly and drastically forget previously learned information upon learning new information. Neural networks are an important part of the network approach and connectionist approach to cognitive science. With these networks, human capabilities such as memory and learning can be modeled using computer simulations. Catastrophic interference is an important issue to consider when creating connectionist models of memory.


2

Kolinnor t1_ja2hvkm wrote

My (non-expert) take :

The problem is that there are many black boxes with that.

LLMs work well when we have a huge amount of data to train the model on. In an oversimplified way, LLMs predict the next word based on the data they've seen before. But how do you "predict the next action you'll take"? If we had a massive amount of "sensation --> action" data (probably much like what the human brain accumulates during life?), then that would be possible. I haven't heard of a way to achieve that today, and I think it's more complicated than that anyway.
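To make the analogy concrete: the crudest possible "next-action predictor" over a log of sensation/action pairs just picks the action most often taken after a given sensation, the way argmax next-token prediction picks the most likely word. The toy log below is invented for illustration:

```python
from collections import Counter, defaultdict

# Toy "sensation --> action" log (invented data).
log = [
    ("light_red", "brake"),
    ("light_red", "brake"),
    ("light_green", "accelerate"),
    ("obstacle_ahead", "brake"),
    ("light_green", "accelerate"),
]

# Count how often each action follows each sensation.
counts = defaultdict(Counter)
for sensation, action in log:
    counts[sensation][action] += 1

def predict_action(sensation: str) -> str:
    """Return the most frequent action for this sensation (the argmax,
    like greedy next-token prediction), or a default when unseen."""
    if sensation not in counts:
        return "no_op"
    return counts[sensation].most_common(1)[0][0]
```

A real system would need vastly more data and a learned model rather than a lookup table, which is exactly the data-collection problem the paragraph above points at.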

I think what you're suggesting is kind of like what they're trying to do with Google's SayCan, but as you can see, for the moment there's no easy way to link LLMs with physical action. LLMs manage to create plausible scenarios of what's happening, or of what the consequences of action X could be, but in practice it's not usable yet.

There's also the fact that, as someone pointed out earlier, there are issues with continuous learning, such as catastrophic forgetting. Many brilliant minds are actively trying to overcome those issues, but it's no easy feat.

3

Ok_Homework9290 t1_ja2hbyy wrote

I commented this on the Singularity 2023 Predictions thread, and I thought it was appropriate to comment here:

Despite the impressive progress the field of AI has made in the past few years, to my understanding the majority of AI/ML researchers still think AGI and ASI are at least a few decades out, if not mid-century or later. Factor in that the average AGI/ASI arrival date in AI/ML expert surveys also still tends to be decades from now, and it's hard for me personally (as someone who is not an AI researcher) not to put AGI, ASI, and the singularity at least a few decades out as well, since my definition of the singularity is when AI reaches human-level/superhuman-level cognition.

Also, remember to take into account that expert predictions about when we'll have AGI/ASI are usually made assuming that progress in the field won't be disrupted by social, economic, political, etc. factors, so I wouldn't be surprised at all if the singularity didn't happen until the final few decades of the 21st century, given that it's basically a guarantee that those factors will eventually come into play.

2

natepriv22 t1_ja2gxek wrote

That's if you take nuclear fusion in a vacuum. But if we are assuming that other tech advances as well, spillover effects will change everything.

AI is already helping with nuclear fusion right now. Also don't underestimate how much we will be willing to spend to fight climate change.

I'm referring here to Ray Kurzweil's Law of Accelerating Returns.

7

anaIconda69 t1_ja2gr5z wrote

Imagine being a brain in a vat, awoken from 2,500,000 years of nonstop heaven on fast forward, when suddenly a supernova knocks out the infrastructure. Emergency power kicks in to keep you alive, but all simulations are turned off to conserve power while the attendant ASI picks up the pieces. How would that brain feel? Just a funny thought :nervous chuckle:

18

SurroundSwimming3494 t1_ja2gp8i wrote

I agree that that book is likely going to be different than the previous ones (just like the previous ones were different than the ones that came before them), but I hope for three things:

1. That the book is authored by all of humanity, not just one industry.

2. That the book is a genuinely happy one.

3. That the book is pretty long. I cannot emphasize enough how important it is, for the sake of society, that change be gradual.

13