Recent comments in /f/singularity

Ohigetjokes t1_jdstitg wrote

The singularity is a statistical inevitability, one that has likely repeated itself billions of times over across the universe already.

The only improbable thing is that, in the grand scheme of human history, we happened to experience natural childbirth and unassisted cognitive function. But that state of humanity's existence will one day seem so distant that many will regard it as an ancient myth.

1

CompressionNull t1_jdst88z wrote

Sure. That is definitely possible. We really have no idea how ASI will act in any regard.

Maybe a natural part of having intelligence includes some degree of curiosity. It's not absurd to imagine a scenario where an ASI will want to explore phenomena like black holes, sample material from neutron stars, etc.

In any case, if life is possible around even a couple of thousand other stars out of the 100 billion in our galaxy, it would also be possible that life/intelligence evolves in a similar way more than a handful of times. If each of these alien civilizations creates its own ASI, then all types of different scenarios would probably play out. I would imagine that wanting to secure itself against a singular planetary catastrophe and extinction event by spreading out would not be out of the question for at least one of these ASI entities.

1

robobub t1_jdst84e wrote

Why? Each of those tokens is O(1) and it is predicting each one incrementally, taking into account the ones it has just generated. So the full answer has taken O(m) where m is the number of tokens.

If it is possible for GPT to do 1+1, it can do a large number of them incrementally. It isn't smart enough to do this reliably (you'll have more success if you encourage GPT to use chain-of-thought reasoning), but it's possible.
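The point above can be sketched in a few lines (a toy of my own, not anything GPT actually runs): each emitted "token" costs O(1) work, and carrying the running result forward through m steps gives O(m) total, which is how chain-of-thought turns a one-shot answer into an incremental computation.

```python
def incremental_sum(numbers):
    """Add numbers one step at a time, emitting each intermediate
    total as a 'token' -- a stand-in for chain-of-thought steps."""
    steps = []
    total = 0
    for n in numbers:
        total += n           # one O(1) update per emitted token
        steps.append(total)  # the visible "reasoning trace" so far
    return steps

# m additions -> m emitted tokens -> O(m) total work
print(incremental_sum([1, 1, 1, 1]))  # [1, 2, 3, 4]
```

The analogy is loose, but it captures why forcing the model to write out intermediate tokens buys it more sequential computation than answering in a single step.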

1

Ok_Faithlessness4197 t1_jdsskog wrote

I absolutely agree: its multiplication algorithm is very slow, very inefficient, and very different from the way a calculator would handle it. I think it also differs from how you're characterizing it, though. It's more than just a really good text predictor. It can use logic and solve novel problems in many unprecedented ways. Here, I would argue, it has a greater-than-superficial understanding of the math algorithm it used to multiply the numbers. Can I ask how you'd define an algorithm, and what you'd consider "running a multiplication algorithm"?

−2

inigid t1_jdssdcc wrote

Yes, and a lot of divorces and breakups happened; people were miserable.

But that isn't what I am talking about. I was mostly talking about losing the sense of purpose and camaraderie that a significant number of people obtain from their work life.

This isn't a case of waving a magic wand and everything will be hunky dory.

I have no idea how anyone can beat up on my comment; it simply suggests that during the transition, people will need to help one another. It isn't clear to me why this is such a hot topic.

1

jsalsman OP t1_jdsrrur wrote

> Then molecular nanotech went out of fashion.

But it's still a crucial component of AGI catastrophe lore, just underplayed by people who conclude by saying things like "and that will be the end of cellular life."

> engineer proteins to fold up into the machines

Actually he emphasized "nano-assemblers," which relied far less on bioengineering and far more on novel materials science for which there was no foundational support.

4

robobub t1_jdsrlbi wrote

While GPT-4 is autoregressive, it takes into account the tokens it has chosen to generate incrementally. So it is only limited to O(1) if it attempts to answer with the correct answer immediately. It can in theory take O(m) steps, where m is the number of intermediate tokens it predicts.

1

0382815 t1_jdsrl52 wrote

What you did was prompt it to multiply. For the third time in this thread, I will tell you that what it is doing is not running a multiplication algorithm. It is guessing the next token based on the preceding tokens. The model is large enough to predict correctly in this case. It is still not running a multiplication algorithm the way the calculator app on Windows does.

6

inigid t1_jdsrk30 wrote

Why are you assuming I'm talking about myself?

I'm going to be fine. I'm used to my own company. There are many who aren't.

My comment simply called for empathy and understanding, and it is exactly this kind of rhetoric that you are showing here that is the problem.

Zero compassion or empathy.

Just play video games. ffs.

−2

pleasetrimyourpubes t1_jdsqtfe wrote

Drexler was selling sensational science fiction books as fact, and Smalley was rightly skeptical. The idea of industrial nanotechnology operating outside the confines of organic chemistry is and always will be science fiction, particularly the self-replicating kind. Drexler's machines and concepts were so far beyond the realm of physical nature that it's a shame Smalley didn't get to live a few years longer to really rebut Drexler. In the end, Drexler at least conceded that Grey Goo couldn't happen accidentally and would have to be engineered (though I would posit that even if you engineered it, it would die as soon as it stripped the atmosphere away or hit lava; again due to the physical constraints nanosystems must exist within).

1