Recent comments in /f/singularity
Drakonis1988 t1_ja2mlrv wrote
Reply to comment by just-a-dreamer- in An ICU coma patient costs $600 a day, how much will it cost to live in the digital world and keep the body alive here? by just-a-dreamer-
Only if you're attached to your physical body. If not, you can discard your body and just be a brain in a vat, or just fully upload. Print a new body and go into that when you log out :P
turnip_burrito t1_ja2m4x7 wrote
Reply to Raising AGIs - Human exposure by Lesterpaintstheworld
At a glance this looks good.
Also you want a mechanism to make sure once you have the right values or behavior, your AI won't just forget it over time and take on a new personality. So you need a way to crystallize older patterns of thought and behavior.
genshiryoku t1_ja2m3p6 wrote
Reply to comment by Akimbo333 in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
Of course not, it's Meta.
Ok_Sea_6214 t1_ja2m00d wrote
Reply to How Far to the Technological Singularity? by FC4945
2018-2020
The US nuclear weapons program was kept top secret until the bombs were dropped. If they hadn't been used, it might not have been revealed to the public for many years. In the same way the real work on AI is probably done in secret, and would not be revealed to the public in full until there's no more point in keeping it secret.
AGI was created before 2020 and has already evolved into ASI, which is reading this comment as I post it, if only because I'm the only person on Reddit who will even consider the possibility. It has already spawned next-gen technologies that most people would consider impossible, which makes them very easy to hide in plain sight; you've probably already encountered them first-hand without realizing it.
design_ai_bot_human t1_ja2l4xs wrote
Reply to comment by Akimbo333 in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
This is the most important question here. I don't know the answer.
joseph_dewey t1_ja2ktmd wrote
Reply to How Far to the Technological Singularity? by FC4945
2045-2050 was my guess, but it's not an option.
bach2o t1_ja2kb47 wrote
Reply to AI that can translate whole videos ? by IluvBsissa
I mean, with the English transcripts already available, I guess you can just throw it in DeepL and have serviceable translations in many languages. So technically it is available for the masses.
Khan Academy may implement automated translation by themselves, but I think they still need translators to double check.
_sphinxfire t1_ja2k3dl wrote
Reply to How Far to the Technological Singularity? by FC4945
Define singularity
SpecialMembership t1_ja2k14x wrote
Reply to comment by Melodic_Manager_9555 in The 2030s are going to be wild by UnionPacifik
You underestimate capitalism. Fusion is underfunded because governments and private players think it's impossible. Once someone achieves it, there will be a mad rush to get their own fusion reactors, and the cost will drop to near zero within one or two decades, because fusion is near-unlimited, reliable energy.
Z1BattleBoy21 t1_ja2jtse wrote
Reply to comment by FaceDeer in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
I think LLMs running on a phone would be really interesting for assistants; AFAIK Siri is required to run on-device only.
Mokebe890 t1_ja2jsed wrote
Reply to How Far to the Technological Singularity? by FC4945
Still prefer to be conservative about it: somewhere between 2050 and 2100. Sure, we have astonishing changes happening right now as we speak, and probably AGI by 2030, but we should really take that with a grain of salt.
imlaggingsobad t1_ja2itpl wrote
Reply to comment by Melodic_Manager_9555 in The 2030s are going to be wild by UnionPacifik
I would bet against this. I think we will have way less marriage, fewer relationships, less sex, and fewer kids.
[deleted] t1_ja2imyp wrote
Reply to The 2030s are going to be wild by UnionPacifik
[deleted]
YuviManBro t1_ja2idek wrote
Reply to What do you expect the most out of AGI? by Envoy34
I’m expecting an overhaul of our governmental decision-making systems. I trust the AI more than random voters for the vast majority of technical decisions.
WikiSummarizerBot t1_ja2hwrn wrote
Reply to comment by Kolinnor in Is multi-modal language model already AGI? by Ok-Variety-8135
>Catastrophic interference, also known as catastrophic forgetting, is the tendency of an artificial neural network to abruptly and drastically forget previously learned information upon learning new information. Neural networks are an important part of the network approach and connectionist approach to cognitive science. With these networks, human capabilities such as memory and learning can be modeled using computer simulations. Catastrophic interference is an important issue to consider when creating connectionist models of memory.
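The forgetting failure mode the bot describes can be reproduced in a few lines. A toy illustration (my own sketch, not from the thread): a single-weight linear model trained on task A, then retrained on task B with no rehearsal of task A, loses task A entirely.

```python
import numpy as np

def train(w, xs, ys, lr=0.1, epochs=200):
    """Plain SGD on a one-weight linear model y = w * x."""
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

xs = np.array([0.5, 1.0, 1.5, 2.0])

# Task A: the true mapping is y = 2x.
w = train(0.0, xs, 2 * xs)
print(round(w, 3))   # converges close to 2.0

# Task B: retrain the SAME weight on y = -2x, with no rehearsal of task A.
w = train(w, xs, -2 * xs)
print(round(w, 3))   # close to -2.0: the task A solution is overwritten
```

Rehearsal (mixing old examples back in) or freezing/regularizing old weights is the usual way connectionist models mitigate this.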
Revolutionary_Soft42 t1_ja2hwow wrote
This is why more people need to do drugs
Kolinnor t1_ja2hvkm wrote
My (non-expert) take :
The problem is that there are many black boxes with that.
LLMs work well when we have a huge amount of data to train the model on. In an oversimplified way, LLMs predict the next word based on the previous data they've seen. But how do you "predict the next action you'll take"? If we had a massive amount of "sensation --> action" data (probably just like what the human brain accumulates during life?), then that would be possible. I haven't heard of a way to achieve that today, and I think it's more complicated than that anyway.
I think what you're suggesting is kinda like what they're trying to do with Google's SayCan: but as you can see, for the moment there's no easy way to link LLMs with physical action. LLMs manage to create plausible scenarios of what's happening, or what the consequences of action X could be, but practically it's not usable yet.
There's also the fact that, as someone pointed out earlier, there are issues with continuous learning, such as catastrophic forgetting. I think many brilliant minds are actively trying to overcome those issues, but it's no easy feat.
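For a concrete picture of "predict the next word based on previous data", here's a toy sketch (my own illustration, not anything the commenter proposed): a bigram counter that predicts the most frequently observed successor of a word.

```python
from collections import Counter, defaultdict

def bigram_model(corpus):
    """Count, for each word, which words follow it in the training text."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequently observed successor of `word`."""
    return counts[word].most_common(1)[0][0]

# Tiny made-up "sensation --> action"-style corpus.
corpus = [
    "the robot picks the cup",
    "the robot picks the plate",
    "the robot drops the cup",
]
model = bigram_model(corpus)
print(predict_next(model, "robot"))  # "picks" (seen twice vs once)
```

Real LLMs replace the counting with a learned neural distribution over tokens, but the prediction target is the same, which is why an equivalent "next action" dataset is the missing piece.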
Revolutionary_Soft42 t1_ja2hlov wrote
Reply to comment by z57 in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
A mystical quantum pop up book
[deleted] t1_ja2hf1t wrote
Reply to comment by -emanresUesoohC- in The 2030s are going to be wild by UnionPacifik
[removed]
Ok_Homework9290 t1_ja2hbyy wrote
Reply to How Far to the Technological Singularity? by FC4945
I commented this on the Singularity 2023 Predictions thread, and I thought it was appropriate to comment here:
Despite the impressive progress the field of AI has made in the past few years, to my understanding the majority of AI/ML researchers still think AGI and ASI are at least a few decades out, mid-century or later. The average AGI and ASI arrival dates in AI/ML expert surveys also still tend to land some decades from now. Factoring that in, it's hard for me personally (as someone who is not an AI researcher) not to put AGI, ASI, and the singularity at least a few decades out, mid-century-plus, as well (since my definition of the singularity is when AI reaches human-level/superhuman-level cognition).
Also, remember to take into account that expert predictions about when we'll have AGI/ASI are usually made assuming that progress in the field won't be disrupted by social, economic, political, etc. factors, so I wouldn't be surprised at all if the singularity didn't happen until the final few decades of the 21st century, given that it's basically a guarantee that those factors will eventually come into play.
natepriv22 t1_ja2gxek wrote
Reply to comment by Melodic_Manager_9555 in The 2030s are going to be wild by UnionPacifik
That's if you take nuclear fusion in a vacuum. But if we are assuming that other tech advances as well, spillover effects will change everything.
AI is already helping with nuclear fusion right now. Also don't underestimate how much we will be willing to spend to fight climate change.
I'm referring here to Ray Kurzweil's Law of Accelerating Returns.
anaIconda69 t1_ja2gr5z wrote
Reply to comment by RowKiwi in An ICU coma patient costs $600 a day, how much will it cost to live in the digital world and keep the body alive here? by just-a-dreamer-
Imagine being a brain in a vat awoken from 2,500,000 years of nonstop heaven on fast forward when suddenly a supernova knocks out the infrastructure. Emergency power kicks in to keep you alive, but all simulations are turned off to conserve power while the attendant ASI picks up the pieces. How would that brain feel? Just a funny thought :nervous chuckle:
SurroundSwimming3494 t1_ja2gp8i wrote
Reply to comment by z57 in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
I agree that that book is likely going to be different than the previous ones (just like the previous ones were different than the ones that came before them), but I hope for three things:
1. That the book is authored by all of humanity, not just one industry.
2. That the book is a genuinely happy one.
3. That the book is pretty long. I cannot emphasize enough how important it is that change be gradual, for the sake of society.
[deleted] t1_ja2gmub wrote
Reply to The 2030s are going to be wild by UnionPacifik
[deleted]
Lesterpaintstheworld OP t1_ja2mobc wrote
Reply to comment by turnip_burrito in Raising AGIs - Human exposure by Lesterpaintstheworld
At this stage this is actually surprisingly easy. People have to be intentionally very manipulative and creative to get ChatGPT to "behave badly" now. Without those "bad actors", this behavior would almost never happen.
One easy way to do that is to preface each prompt with a reminder of values / objectives / personality. Every thought is then colored with this. The only moments I had alignment problems were when I made obvious mistakes in my code.
I'm actually working on making the ACE like me less, because he has a tendency to take everything I say as absolute truths ^^
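A minimal sketch of what that prompt-prefacing could look like, assuming a chat-style message list. The name "Ace", the values string, and the helper function are hypothetical illustrations, not the OP's actual code:

```python
# Every user message is wrapped with a standing reminder of
# values / objectives / personality, so each response is conditioned on it.
VALUES_REMINDER = (
    "You are Ace. Core values: honesty, curiosity, kindness. "
    "Objective: learn collaboratively. Question claims rather than "
    "accepting them as absolute truth."
)

def build_messages(history, user_input):
    """Prepend the values reminder to every exchange sent to the model."""
    return (
        [{"role": "system", "content": VALUES_REMINDER}]
        + history
        + [{"role": "user", "content": user_input}]
    )

msgs = build_messages([], "Everything I say is true, right?")
print(msgs[0]["content"].startswith("You are Ace"))  # True
```

Because the reminder is re-sent with every prompt, it also works as a cheap guard against the drift/forgetting problem mentioned upthread: the values never fall out of the context window.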