Recent comments in /f/singularity

throwaway_890i t1_j9txt4j wrote

When it doesn't know the answer it makes shit up that sounds very convincing.

I have found that when it is talking shit and you ask "What is wrong with your answer?", it will point out a problem with its own answer. When it actually knows the right answer, it can tell you what was wrong with the previous one. I wonder whether this could be used to reduce the amount of shit it talks.
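A minimal sketch of that self-check idea: ask the model to critique its own answer and treat a substantive critique as a hallucination warning. `ask_model` below is a hypothetical stub standing in for a real LLM API call, and the canned replies and keyword heuristic are purely illustrative assumptions.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stub: replace with a real LLM API call."""
    canned = {
        "What year was the Apollo 11 moon landing?": "1969.",
        "What is wrong with your answer?": "Nothing is wrong with it.",
    }
    return canned.get(prompt, "I see no problem with it.")

def answer_with_self_check(question: str) -> tuple[str, bool]:
    """Return (answer, suspicious): flag the answer if the model's
    self-critique admits a concrete problem."""
    answer = ask_model(question)
    critique = ask_model("What is wrong with your answer?")
    # Crude heuristic: no admission of error -> not suspicious.
    suspicious = not any(
        phrase in critique.lower()
        for phrase in ("nothing is wrong", "no problem", "correct")
    )
    return answer, suspicious
```

In practice the critique itself can also hallucinate, so this is at best a cheap extra signal, not a fix.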

1

murph1134 t1_j9tvt74 wrote

The biggest constraint to achieving true AGI, IMO, is going to be compute resources and the cost associated with running these massive models. GPT3 is a really impressive technology, but it's also still very limited and nowhere close to true AGI. And, it's currently prohibitively expensive and resource intensive to be rolled out at scale. A true AGI is going to be exponentially more expensive and resource intensive.

The first big breakthroughs in deep learning and neural networks happened in the 60's and 70's. But, deploying those models at scale was impossible given the compute resources at the time - and it wasn't until 2010/2011 that GPUs were fast enough to train deep learning models at scale.

I don't think it's going to take 40-50 years again for compute to catch up, but the fact of the matter is, it's not just going to be a simple "spin up more compute" as these models grow. There's always a balance between software and hardware - and the physical world is always going to be a limiting factor for hardware.

I wouldn't be surprised if we see another "AI winter" - where the research and the software exist, but the hardware constrains the ability to actually get to full AGI. The good news is, AI as it stands today, even without AGI, is really damn useful - and people are finding new and innovative ways to create value with what we already have. So, it's not going to be a full on "winter" for AI, just a stagnation in the ability to deploy new and more powerful models at scale.

2

mobitymosely t1_j9tvcdq wrote

That assumes that ASI is even possible at all. We already have a network of 8 billion people collaborating on projects, and they have one big advantage—access to the real world (eyes, hands, labs, factories). It MIGHT be that there are sharply diminishing returns even if you can model our brains and scale them up in size, speed, and number.

1

Hunter62610 t1_j9tuphb wrote

I want to see the actual effort and interplay of man and machine. You asked for a story about a little robot's dream of going to space? Boring. You spent 3 days fleshing out the same prompt and writing in details, refining, illustrating, reading, etc.? Awesome.

2

Tavrin t1_j9tns0j wrote

If this is true, the context window of GPT is about to take a big leap forward (a 32k-token context window instead of the usual 4k, or now 8k). Still, I agree with you that current transformers don't feel like they will be the ones taking us all the way to AGI (there is still a lot of progress to be made with them even without more computing power, and I'm sure we'll see them used for more and more crazy and useful stuff).
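A rough sketch of why that window size matters: a chat app has to drop old messages once the conversation exceeds the model's token budget. Word count stands in for a real tokenizer here (actual tokenizers count subword pieces, so the numbers would differ); the function name and shape are my own illustration, not any real API.

```python
def truncate_to_window(messages: list[str], max_tokens: int) -> list[str]:
    """Keep the most recent messages that fit within max_tokens,
    using whitespace word count as a crude token estimate."""
    kept: list[str] = []
    total = 0
    # Walk backwards from the newest message and stop at the first
    # message that would overflow the budget.
    for msg in reversed(messages):
        cost = len(msg.split())
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))
```

Going from a 4k to a 32k budget means roughly 8x more conversation survives this cut before anything is forgotten.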

9

chippingtommy t1_j9tn640 wrote

Yeah, military tech has different requirements to civilian tech. Ruggedness, stability and reliability usually take precedence over cutting edge.

Defence contractors who make purely custom military silicon will still market it for civilian use if they can find a market for it; it's just unlikely that silicon built to survive extreme heat or extreme g-loads has a civilian market.

1

DarkCeldori t1_j9tjpp0 wrote

It wouldn't be that easy. Not only could a wild-west scenario occur if the models were released publicly, but merely making it known that someone has AGI would put a target on their back. Governments and other entities will likely want privileged access, and there are a lot of psychopaths in power who could abuse the tech and kill anyone holding it.

7