Recent comments in /f/singularity
science_nerd19 t1_j9twn5c wrote
Reply to comment by DarkCeldori in What do you expect the most out of AGI? by Envoy34
Speak for yourself, friend! All of my casual sex is already consequence free ;p
Accomplished_Box_907 t1_j9tw7kg wrote
Reply to comment by Iffykindofguy in The Sentient Search Engine? How ChatGPT’s Insane Conversation Reveals the Limits and Potential of Large Language Models by strongaifuturist
Who is we?
murph1134 t1_j9tvt74 wrote
Reply to New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
The biggest constraint to achieving true AGI, IMO, is going to be compute resources and the cost of running these massive models. GPT-3 is a really impressive technology, but it's still very limited and nowhere close to true AGI. And it's currently prohibitively expensive and resource-intensive to roll out at scale. A true AGI is going to be exponentially more expensive and resource-intensive.
The foundational breakthroughs in neural networks happened decades ago - perceptrons in the late 1950s, backpropagation by the 1980s. But deploying those models at scale was impossible given the compute resources of the time - it wasn't until 2010/2011 that GPUs were fast enough to train deep learning models at scale.
I don't think it's going to take 40-50 years again for compute to catch up, but the fact of the matter is, it's not just going to be a simple "spin up more compute" as these models grow. There's always a balance between software and hardware - and the physical world is always going to be a limiting factor for hardware.
I wouldn't be surprised if we see another "AI winter" - where the research and the software exist, but the hardware constrains the ability to actually get to full AGI. The good news is, AI as it stands today, even without AGI, is really damn useful - and people are finding new and innovative ways to create value with what we already have. So, it's not going to be a full on "winter" for AI, just a stagnation in the ability to deploy new and more powerful models at scale.
mobitymosely t1_j9tvcdq wrote
Reply to comment by Silly_Awareness8207 in New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
That assumes ASI is even possible at all. We already have a network of 8 billion people collaborating on projects, and they have one big advantage: access to the real world (eyes, hands, labs, factories). It MIGHT be that modeling our brains and scaling them up in size, speed, and number runs into sharply diminishing returns.
Hunter62610 t1_j9tuphb wrote
Reply to comment by TheRidgeAndTheLadder in Seriously people, please stop by Bakagami-
I want to see the actual effort and interplay of man and machine. You asked for a story about a little robot's dream of going to space? Boring. You spent 3 days fleshing out the same prompt and writing in details, refining, illustrating, reading, etc.? Awesome.
Erickaltifire t1_j9tulyh wrote
Reply to New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
Don't worry. Nuthin will happen until they discover energon cubes.
techhouseliving t1_j9ttu03 wrote
Reply to New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
People need to learn about accelerating acceleration.
Iffykindofguy t1_j9ttnpd wrote
Reply to comment by strongaifuturist in The Sentient Search Engine? How ChatGPT’s Insane Conversation Reveals the Limits and Potential of Large Language Models by strongaifuturist
No, I mean it's the first attempt at that scale. First crack at it. We have no clue what the limitations of these things are.
strongaifuturist OP t1_j9ts95i wrote
Reply to comment by Iffykindofguy in The Sentient Search Engine? How ChatGPT’s Insane Conversation Reveals the Limits and Potential of Large Language Models by strongaifuturist
Meaning you think that AI like ChatGPT is the first crack in a society that could crumble? Well, for sure I'd say that the future is going to look very different than the past.
Iffykindofguy t1_j9tr8fo wrote
Reply to The Sentient Search Engine? How ChatGPT’s Insane Conversation Reveals the Limits and Potential of Large Language Models by strongaifuturist
uhhhhhhhhhhhhhhhhhhhhhh

friend, this was the first crack; you're out of your mind if you think this sets limits
SmoothPlastic9 t1_j9tq74g wrote
Reply to What do you expect the most out of AGI? by Envoy34
For them to keep my free will
MajesticIngenuity32 t1_j9tq2gc wrote
Reply to comment by CellWithoutCulture in What are the big flaws with LLMs right now? by fangfried
Hippo Maximizer LLM alert! We are doomed!
Tavrin t1_j9tns0j wrote
Reply to comment by nul9090 in What are the big flaws with LLMs right now? by fangfried
If this is true, GPT's context window is about to take a big leap forward (a 32k-token context window instead of the usual 4k, or the newer 8k). Still, I agree with you that current transformers don't feel like they'll be the ones taking us all the way to AGI (though there's a lot of progress that can still be made with them even without more computing power, and I'm sure we'll see them used for more and more crazy and useful stuff).
chippingtommy t1_j9tn640 wrote
Reply to comment by BinyaminDelta in Do you think the military has a souped-up version of chatGPT or are they scrambling to invent one? by Timely_Hedgehog
Yeah, military tech has different requirements from civilian tech. Rugged, stable, and reliable usually takes precedence over cutting edge.
Defence contractors who make custom military silicon will still market it for civilian use if they can find a market for it; it's just unlikely that silicon built to survive extreme heat or extreme g-loads has a civilian market.
Nukemouse t1_j9tkywj wrote
Spielberg must be making a mint off this publicity
DarkCeldori t1_j9tktin wrote
Reply to What do you expect the most out of AGI? by Envoy34
Full Dive, immortality and humanoid biodroids.
Lawjarp2 t1_j9tkgeg wrote
Reply to What are the big flaws with LLMs right now? by fangfried
(1) Expensive to run
(2) No temporal/episodic memory
(3) Limited context
(4) Makes stuff up/hallucinates
(5) Only surface-level intelligence or understanding.
DarkCeldori t1_j9tkerj wrote
Reply to comment by science_nerd19 in What do you expect the most out of AGI? by Envoy34
There are many things that can be done in VR that are impossible in the real world. Intimacy, too, can be had in VR without STDs or pregnancy. Casual sex becomes consequence-free.
DarkCeldori t1_j9tjwh1 wrote
Reply to comment by PM_ME_A_STEAM_GIFT in What do you expect the most out of AGI? by Envoy34
And one of the last to die from old age.
DarkCeldori t1_j9tjpp0 wrote
Reply to comment by Revolutionary_Ad3453 in What do you expect the most out of AGI? by Envoy34
It wouldn't be that easy. Not only could a wild-west scenario occur if the models were released publicly, but merely making it known that someone has AGI would put a target on their backs. Governments and other entities will likely want privileged access, and there are plenty of psychopaths in power who could abuse the tech or kill anyone holding it.
MysteryInc152 t1_j9tjlls wrote
Reply to comment by Nukemouse in What are the big flaws with LLMs right now? by fangfried
I think so...?
Bierculles t1_j9tizs3 wrote
Reply to comment by norby2 in And Yet It Understands by calbhollo
The first AI that is better than a human at pretty much everything will really cement this. There will be a lot of coping.
nklarow t1_j9tixr5 wrote
Reply to comment by redroverdestroys in Seriously people, please stop by Bakagami-
Your only insult is calling people cops. At this point it's just fun to see how angry you're getting.
[deleted] t1_j9ticnt wrote
Reply to What do you expect the most out of AGI? by Envoy34
[deleted]
throwaway_890i t1_j9txt4j wrote
Reply to comment by sideways in What are the big flaws with LLMs right now? by fangfried
When it doesn't know the answer, it makes shit up that sounds very convincing.
I've found that when you ask "What is wrong with your answer?" while it's talking shit, it points out a problem with its own answer. When it knows the right answer, it can tell you what was wrong with the previous one. I wonder whether this could be used to reduce the amount of shit it talks.
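That self-critique idea is easy to sketch as a loop. Below is a minimal, hypothetical version: `ask` is a stand-in for whatever chat-completion call you actually use (it takes a prompt string and returns the model's reply); nothing here is tied to a specific API, and the prompt wording is just one guess at how to phrase the critique step.

```python
def self_critique(ask, question):
    """Ask a question, have the model critique its own answer, then revise.

    `ask` is any callable that sends a prompt string to a language model
    and returns the reply text. Returns (answer, critique, revised).
    """
    # First pass: get the model's initial answer.
    answer = ask(question)

    # Second pass: ask the model what is wrong with its own answer.
    critique = ask(
        f"Question: {question}\n"
        f"Your answer: {answer}\n"
        "What is wrong with your answer?"
    )

    # Third pass: feed the critique back and request a corrected answer.
    revised = ask(
        f"Question: {question}\n"
        f"Previous answer: {answer}\n"
        f"Critique of that answer: {critique}\n"
        "Give a corrected answer."
    )
    return answer, critique, revised
```

Whether the revised answer is actually better depends on the model knowing the right answer somewhere; when it doesn't, the critique step can just produce more convincing-sounding noise.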