Recent comments in /f/singularity

Mortal-Region t1_j9vjeyi wrote

  1. There has yet to be a singularity event in the Milky Way, which would suggest that humans are the first technologically advanced civilization in the galaxy since its formation, which is statistically very unlikely.

If the Great Filter is in our past, then it's not unlikely for us to be the first technological civilization. In fact, we should expect to be the first because it's unlikely for multiple species to make it through the filter simultaneously. I guess it's still weird that we made it through the filter in the first place. But it's the same kind of weirdness that arises from the fact that we happen to be intelligent humans rather than slugs. It's only weird if you're intelligent enough to think about such things.

What is weird, I think, is that we happen to exist right at the moment of the singularity. A galactic civilization would have a lifespan of billions or even trillions of years, yet here we are, witnessing the birth of AI, space-travel, etc -- the very tech that makes galactic civilization possible. The first electronic computer was invented less than 80 years ago!

4

Ortus14 t1_j9vj5cx wrote

A very wordy way of saying: we'll release progressively more powerful models and figure out the alignment problem as we go along.

That being said, it's as good a plan as any and I am excited to see how things pan out.

4

nillouise t1_j9vj3kz wrote

>Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.

It only says that they want AI to benefit humans, not AI itself. If the AI is smart enough, will it be satisfied with this announcement?

So apparently we can conclude that current AI is not smart enough to do that. If one day an OpenAI announcement considers the AI's feelings, then the big thing has arrived.

1

purepersistence t1_j9viz1k wrote

The post is about LLMs. They will never be AGI. AGI will take AT LEAST another level of abstraction and might in theory be fed potential responses from an LLM, but it's way too soon to say whether that would be appropriate versus a whole new kind of model based on more than just parsing text and finding relationships. There's a lot more to the world than text, and you can't get it by just parsing text.

2

purepersistence t1_j9vi0cg wrote

There's a limit to the quality of output you get from a model that's attempting to generate the next logical sequence of words based on your query. There's no understanding of the world. Just text and parsing and attention relationships. So there's no sanity check at any level that understands the real-world meaning vs. patterns of text. That's why, in spite of improvements, it will continue to give off-the-wall answers sometimes. Attempting to shield people from outrageous or violent content will also tend to make the tool put a cloak in front of the value it could have delivered. That's why, when you see it censoring itself, you get a lot of words that don't say much other than excuses.

3

Jayco424 t1_j9vh7q4 wrote

Of course it's not, but that doesn't mean it's somehow worthless. The Fermi Paradox and the vast body of work - and potential solutions - surrounding it form one of the most sound logical hypotheses - or rather a series of logical conjectures - about how, in a vast universe full of opportunities for life, we have so far observed none. It has been the subject of a mountain of serious scholarly work in the 70 years since its postulation, and to this day scientists searching for extraterrestrial life are still posing various answers to it.

3

Shiyayori t1_j9veev5 wrote

  1. Technological singularities result in a culture disconnected from the desire to expand endlessly for no reason; it’s possible that after just a few generations, virtual reality is that much more preferable. Any expansion is done out of need, not want.

  2. Life as we are is simply that rare.

7