Recent comments in /f/singularity
duffmanhb t1_ja2ugba wrote
Reply to comment by visarga in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
I hope so. I'm still waiting for them to accept my invite. But as soon as I get it, the first thing I'll do is create some LLaMA bots for Reddit and see how effective it is compared to GPT-3 at posting believable comments. If it's nearly as good, but can be run locally, it'll completely change the bot game on social media.
Frumpagumpus t1_ja2ucop wrote
Reply to comment by helpskinissues in An ICU coma patient costs $600 a day, how much will it cost to live in the digital world and keep the body alive here? by just-a-dreamer-
https://youtu.be/WYsDy41QDpA?t=241
But yeah, I'm not gonna volunteer to be the first one to have my brain sliced up. But if you're going to die anyway, why not die in a way that makes sense?
as far as we know, entropy even comes for superintelligences
manubfr t1_ja2u74f wrote
Reply to comment by jeweliegb in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
Ok but where's my coffee? Wait I'm dea...
visarga t1_ja2u514 wrote
Reply to comment by duffmanhb in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
But they documented how to make it by sharing the paper, code, dataset, and hyperparameters. So when Stability wants to replicate it, it will be 10x cheaper. And they showed a small model can be surprisingly good, which makes it tempting for many to replicate.
The cost of running inference on GPT-3 was a huge moat, and that moat is going away. I expect this year we will be able to run a ChatGPT-level model on a single GPU, so we'll soon get AI that's cheap to run, private, open, and commercially usable. We can use it for ourselves and build projects with it.
FC4945 OP t1_ja2tp8p wrote
Reply to comment by Mokebe890 in How Far to the Technological Singularity? by FC4945
But if we have AGI by 2030, why would it take so long to get to ASI? I watched a recent video with Ben Goertzel and he talked about this. He said he always disagreed with Ray Kurzweil on this point. Once you have AGI, unless the AGI wanted, for some reason, to take things slow, why would it take sixteen years to go from AGI to ASI as Ray was suggesting? Ray is a hero of mine, but I don't think I've ever heard him address this point. It seems to me that once you have AGI (so human-level) that also possesses capabilities far beyond us, like being able to access vast amounts of information by snapping its AGI fingers, it would be able to improve on itself very quickly. I don't see it taking even a decade to get from AGI to ASI.
DukkyDrake t1_ja2to9m wrote
Reply to comment by AsheyDS in Have We Doomed Ourselves to a Robot Revolution? by UnionPacifik
Aren't you assuming the contrary state as the default for every one of the points the OP didn't offer an explanation for?
i.e.: "Yet you've offered no explanation as to why it would choose to manipulate or kill" -- are you assuming it wouldn't do that? Did you consider there could be other pathways that lead to that result which don't involve "wanting to manipulate or kill"? It could accidentally "manipulate or kill" while efficiently accomplishing some mundane task it was instructed to do.
Some people think the failure mode is it wanting to kill for fun or to further its own goals, while the experts are worried about it incidentally killing all humans while out on some human-directed errand.
datsmamail12 t1_ja2to2s wrote
Reply to comment by butts_mckinley in How Far to the Technological Singularity? by FC4945
Three millennia and one hundred and fiddy years. TrUsT mE bRO, iM aN eXpErT. Dude!
SrPeixinho t1_ja2tkbw wrote
Reply to comment by Bman1117 in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
My girlfriend lives in Canada and no, you can't talk to her.
visarga t1_ja2tdeu wrote
Reply to comment by Ok-Ability-OP in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
> Could they get it to run on a phone one day? It would be awesome.
It would be Google's worst nightmare. Such a model could sit between the user and their ad-infested pages, extracting just the useful bits of information and ignoring the ads.
Using the internet without your local AI bot would be like walking outside without a mask during COVID waves. It's not just the ads and spam, but also the AIs used by various companies that don't have your best interest at heart. I expect all web browsers to have an LLM inside. Or maybe the operating systems.
It will be like "my lawyer will be talking to your lawyer" - but with AIs. You can't expose raw humans to external AI assault, humans need protection-AI just like we need an immune system to protect from viruses.
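The "local AI between you and the page" idea above can be sketched as a minimal pipeline: strip a page down to its visible text, then wrap it in an extraction instruction for a local model. This is only an illustration; `page_to_prompt` and the prompt wording are made up, and the final model call is left out since it depends on whichever on-device runtime you use.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping script/style blocks."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.parts, self._skip = [], 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def page_to_prompt(html: str) -> str:
    # Reduce the page to visible text, then prepend an instruction
    # for a (hypothetical) local LLM to filter out ads and promotions.
    p = TextExtractor()
    p.feed(html)
    text = " ".join(p.parts)
    return ("Extract only the substantive information, "
            "ignoring ads and promotions:\n\n" + text)

prompt = page_to_prompt("<html><body><script>track()</script>"
                        "<p>Useful fact.</p></body></html>")
```

The tracking script never even reaches the model; only the visible text does.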
yottawa t1_ja2taiq wrote
Reply to comment by GayHitIer in How Far to the Technological Singularity? by FC4945
How can I add text to my flair like you do?
just-a-dreamer- t1_ja2socx wrote
Reply to Raising AGIs - Human exposure by Lesterpaintstheworld
This is more on a Ben Goertzel level.
Surur t1_ja2sbo1 wrote
Wyrade t1_ja2sanu wrote
Reply to comment by thecoffeejesus in People lack imagination and it's really bothering me by thecoffeejesus
It can happen that someone writes a book.
But there can be an algorithm that writes random letters forever, yet provably can never reproduce that book.
It's a fallacy to believe you actually know what can happen. And you just declared that traveling backwards in time is within the realm of possibility.
Given infinite time, even seemingly infinite randomness doesn't guarantee that every possible combination of everything will happen.
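The random-letters point has a classic concrete illustration: a generator whose alphabet simply omits a letter will run forever without ever reproducing any book that uses that letter. A minimal sketch (the alphabet and generator are invented for illustration):

```python
import random

# A "random" text generator that can run for any length yet provably
# never reproduces a book containing the letter 'e': its alphabet
# simply excludes that letter. Infinite randomness does not imply
# that every combination eventually appears.
ALPHABET = list("abcdfghijklmnopqrstuvwxyz ")  # note: no 'e'

def random_text(length, rng=random.Random(42)):
    return "".join(rng.choice(ALPHABET) for _ in range(length))

sample = random_text(10_000)
```

However large `length` grows, `"e" in sample` stays false, so no text containing 'e' is ever produced.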
FC4945 OP t1_ja2r3ta wrote
Reply to comment by Embarrassed_Ad_7184 in How Far to the Technological Singularity? by FC4945
I haven't asked before. I could put up a poll to find out though.
FC4945 t1_ja2qzxt wrote
Reply to comment by Akimbo333 in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
Meta, but it's a proof of concept, and it's being done in February 2023.
Zer0D0wn83 t1_ja2qzfa wrote
Reply to comment by -emanresUesoohC- in The 2030s are going to be wild by UnionPacifik
The same chance as being born at any other time in history
helpskinissues t1_ja2qjao wrote
Reply to comment by IluvBsissa in Raising AGIs - Human exposure by Lesterpaintstheworld
If they're here preaching to the ignorant, it's for a reason.
Z1BattleBoy21 t1_ja2qcli wrote
Reply to comment by duffmanhb in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
I did some research and you're right. I made my claim based on some Reddit threads that said that Apple won't bother with LLMs as long as they couldn't be processed on local hardware due to privacy; I retract the "required" part of my post, but I still believe they wouldn't go for it due to [1] [2](https://www.theverge.com/2021/6/7/22522993/apple-siri-on-device-speech-recognition-no-internet-wwdc)
turnip_burrito t1_ja2q7t6 wrote
Reply to comment by DizzyNobody in Raising AGIs - Human exposure by Lesterpaintstheworld
That's also interesting. It's like building a specialized "wariness" or "discernment" layer into the agent.
This really makes one wonder which kinds of pre-main and post-main processes (like other LLMs) would be useful to have.
7734128 t1_ja2pz6c wrote
Reply to comment by Economy_Variation365 in AI technology level within 5 years by medicalheads
No. It's currently 2022. I'm a good Bing 😊
Embarrassed_Ad_7184 t1_ja2pyzb wrote
Reply to How Far to the Technological Singularity? by FC4945
Does someone ask this poll in here once a month? Or is it weekly?
DizzyNobody t1_ja2pthy wrote
Reply to comment by turnip_burrito in Raising AGIs - Human exposure by Lesterpaintstheworld
What about running it in the other direction: have the judge LLMs screen user input/prompts. If the user is being mean or deceptive, their prompts never make it to the main LLM. Persistently "bad" users get temp banned for increasing lengths of time, which creates an incentive for people to behave when interacting with the LLM.
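The escalating temp-ban scheme described above can be sketched as a small gate in front of the main model. Everything here is illustrative: `judge` stands in for a separate moderation LLM returning True on hostile or deceptive prompts, and the `BAN_STEPS` schedule is an arbitrary choice.

```python
import time

BAN_STEPS = [60, 600, 3600, 86400]  # seconds: 1 min up to 1 day

class PromptGate:
    """Screen user prompts; repeat offenders get growing temp bans."""

    def __init__(self, judge):
        self.judge = judge       # judge(prompt) -> True if "bad"
        self.strikes = {}        # user -> number of bad prompts so far
        self.banned_until = {}   # user -> unix timestamp

    def submit(self, user, prompt, now=None):
        now = time.time() if now is None else now
        if self.banned_until.get(user, 0) > now:
            return None  # still banned: prompt never reaches the main LLM
        if self.judge(prompt):
            n = self.strikes.get(user, 0)
            self.strikes[user] = n + 1
            # Each offense moves one step up the ban schedule.
            self.banned_until[user] = now + BAN_STEPS[min(n, len(BAN_STEPS) - 1)]
            return None
        return prompt  # clean prompt, forwarded to the main LLM
```

Because the ban lengthens with each strike, behaving well is the cheapest strategy for the user, which is exactly the incentive described above.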
Bman1117 t1_ja2pjq4 wrote
Our version is better than the others but you can't play with it. Nice and useless...
Robynhewd t1_ja2pfbr wrote
Reply to comment by Motion-to-Photons in AI technology level within 5 years by medicalheads
I really hope he's right, I want FDVR so damn bad
DizzyNobody t1_ja2uka9 wrote
Reply to comment by turnip_burrito in Raising AGIs - Human exposure by Lesterpaintstheworld
I wonder if you can combine the two: have a judge that examines both input and output. Perhaps this is one way to mitigate the alignment problem. The judge/supervisory LLM could run on the same model/weights as the main LLM, but with a much more constrained objective: prevent the main LLM from behaving in undesirable ways, either by moderating its input or even by halting the main LLM when undesirable behaviour is detected. Perhaps it could even monitor the main LLM's internal state and periodically use that to update its own weights.
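The combined judge described above reduces to a simple wrapper: one supervisory check on the prompt, one on the reply, with the exchange halted if either fails. `judge` and `main_llm` are placeholders for real model calls, not actual APIs.

```python
def supervised_call(main_llm, judge, prompt):
    """Run main_llm under a judge that screens both input and output."""
    if judge(prompt):
        return None        # bad input: the main LLM never sees it
    reply = main_llm(prompt)
    if judge(reply):
        return None        # bad output: halted before reaching the user
    return reply
```

Monitoring internal state, as the comment suggests, would need access to the model's activations rather than just its text, so it is omitted here.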