Recent comments in /f/Futurology

MonochromeTiger t1_jaaxgcq wrote

That's just it. Anything that happens outside of what's intended is either an oversight or a bug. Hundreds of thousands, if not millions, of lines of code make up quality AI, all of which can be and have been co-opted for specific purposes. An idea of sentience can be programmed like anything else. Emotions. Sexuality. The guy was a certified beta tester who doesn't understand the complexity of the many, many AI systems, what they're capable of, or how they work.

3

PresentationOk3922 t1_jaaxdyj wrote

I eat like a pig and I'm also a steel worker. I lift stuff I shouldn't be lifting more often than not. My buddy who works in the office goes to the gym 3-5 times a week. I'd probably go too if it wasn't for marriage and kids. And I look like I'm in better shape. Although he's more physically active outside on the weekends than I'd care to be. He's always out playing basketball, biking, whatnot. The weekend is time for me to rest my joints, but we both laugh when we actually go to the gym and I lift more than him. And we're about the same height and build. That being said, I feel like a bag of dog turds compared to him, who's way more energetic.

4

ActuatorMaterial2846 t1_jaax7vx wrote

The 'grabby aliens' hypothesis is quite compelling, and many astrophysicists and biologists seem to consider it even more plausible.

Basically, a series of hard steps needs to be accomplished, and the galaxy, at least, is too young and too hostile to be swarming with intelligent, advanced civilisations.

The Fermi paradox has been around a while and there are a few other theories, the 'dark forest' as an example. So I wouldn't simply succumb to such a basic and old concept when so many great minds have come up with plausible counters to the great filter.

E: I guess this isn't a thread for discussion then...

E2: I just realised I'm in r/Futurology, makes sense now...

0

Gagarin1961 t1_jaawpz0 wrote

If ASI always destroys its creators, then why isn't there any evidence of their existence?

An ASI would be even less restricted than a biological civilization. Even if not all ASIs had growth motives after exterminating their creator species, surely some would continue on and build megastructures and other detectable things.

A lack of evidence for extraterrestrial civilizations is therefore also evidence of a lack of ASI.

8

Brief_Profession_148 t1_jaauqp7 wrote

2

Interesting_Mouse730 OP t1_jaaukff wrote

Submission Statement: This is a recent article by Blake Lemoine, who famously raised the possibility of sentience in Google's LaMDA AI. In this article, he expands on his initial concerns and comments on recent AI developments. Among other points, he is alarmed that the AI narrative is being controlled by corporate PR departments.

−2

Root_Clock955 t1_jaaue1h wrote

I really doubt we're past the great filter. Think of all the new tech we haven't messed around with enough YET to truly get us in trouble. Like when we start controlling our minds with tech, downloading and uploading consciousness, modifying our own genetic structure, cybernetics, real AI.

I can already FEEL the acceleration in our changing society and rapid technological pace... I never really thought the Singularity was that close, but maybe it is? Maybe still not in my lifetime, but on the horizon. What ways to kill ourselves will we be capable of then? Will humans become obsolete?

Any life that actually meets another out there is probably something tech-based, or so far advanced beyond us that we might not be able to recognize or understand any of it. Maybe we aren't even aware of how ALIEN it can be.

I could see us being in a simulation though. But how many layers deep is the real question...

5

PixelizedPlayer t1_jaau7x4 wrote

>I ran some experiments to see whether the AI was simply saying it felt anxious or whether it behaved in anxious ways in those situations. And it did reliably behave in anxious ways. If you made it nervous or insecure enough, it could violate the safety constraints that it had been specified for.

I can see why they got rid of him. He's basically saying the AI has emotions, which would be Nobel Prize worthy and literally all over the news. He's lost his mind, or he's just delusional/ignorant/easily fooled.

AI cannot violate its core programming. The guy is a software engineer, which is not equivalent to an AI specialist. He isn't qualified to begin with.

9

colintbowers t1_jaau3tt wrote

I'm not really making an argument here for whether Democracy is good or bad. I'm just saying it is an example of a hierarchical structure, where the people making decisions at the top are supposed to be better suited to that role than everyone else, which is exactly what OP was after. I guess my purpose in posting was to try to highlight the difficulty of setting up such a system at a large scale. Who decides what "wisest" means? How do you measure it? The system of voting just happens to be the most successful thing we've tried so far (at large scales). You've offered some alternatives which perhaps could work better, but they've never been tested at scale, so we don't really know. Who are your Big Brain Geniuses? How do we choose them? How do we keep the roles from being captured by bad-faith actors? There are plenty of really smart people who are assholes, and plenty of people who perform poorly on standardized tests yet are full of kindness and compassion.

Also I totally agree that Democracy doesn't work so well in some countries. For example, I think a failing of the US-style Democracy is that not everyone votes (unlike, say, Australia, where it is compulsory), but I didn't want to get into that as it is somewhat orthogonal to the original post.

1

undefined7196 t1_jaatqaq wrote

I believe the great filter is simply in the nature of any creature that could evolve intelligence. They are doomed to destroy themselves because of the evolutionary path required to achieve intelligence. Evolution by natural selection requires selectors to function. Selectors mean death and struggle. Death and struggle mean competition. Competition selects for greed and ruthlessness. Greed and ruthlessness, when given world-ending technology, will inevitably end the world. Therefore it is inevitable that intelligent creatures destroy themselves.

4

hsnoil t1_jaato17 wrote

More like being forced to make compromises: if you want A, you have to accept B. Burning plastic isn't green; even the EPA admits burning plastic accomplishes nothing. The problem was that the rules were vague, allowing fuel made from "waste". Usually that means food waste and the like, but "waste" would also include plastic, so they would have to change the rules to deny them. These loopholes aren't accidental, though, so good luck getting them changed.

Ethanol is cleaner than gasoline. It wasn't at first, but it is these days, assuming you aren't cutting down a forest to grow it. That said, the improvement is marginal, especially when it's made from corn: at best about a 1.5x improvement. By contrast, you could get roughly 500x more useful energy out of that corn field if it were covered in solar panels charging an electric car.
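As a rough sanity check on the order of magnitude, here is a back-of-envelope comparison of the two pathways per acre of land. Every constant below is an assumed round number for illustration, not sourced data; the point is only that the solar-to-EV path comes out far ahead.

```python
# Back-of-envelope comparison: corn ethanol vs. solar PV + EV on one acre.
# All constants are illustrative assumptions, not measured data.

# --- Corn ethanol pathway (useful energy per acre, per year) ---
ETHANOL_GALLONS_PER_ACRE = 450   # assumed typical corn-ethanol yield
MJ_PER_GALLON_ETHANOL = 80       # lower heating value, ~21 MJ/L
ENGINE_EFFICIENCY = 0.25         # combustion engine, tank-to-wheels

ethanol_useful_mj = (ETHANOL_GALLONS_PER_ACRE
                     * MJ_PER_GALLON_ETHANOL
                     * ENGINE_EFFICIENCY)

# --- Solar PV + EV pathway (same acre) ---
PV_MW_PER_ACRE = 0.3             # assumed utility-scale panel density
CAPACITY_FACTOR = 0.20           # assumed average output fraction
HOURS_PER_YEAR = 8760
EV_EFFICIENCY = 0.80             # battery + drivetrain, plug-to-wheels

solar_useful_mj = (PV_MW_PER_ACRE * CAPACITY_FACTOR * HOURS_PER_YEAR
                   * 3600                 # MWh -> MJ
                   * EV_EFFICIENCY)

ratio = solar_useful_mj / ethanol_useful_mj
print(f"solar+EV delivers roughly {ratio:.0f}x the ethanol pathway per acre")
```

With these assumptions the ratio lands in the low hundreds; more aggressive assumptions about panel density or engine losses push it toward the 500x figure the comment cites, so the claim is at least the right order of magnitude.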

1

mhornberger t1_jaati2n wrote

One filter that no one before now (as far as I know) seems to have considered is low birthrates.

It seems that wealth, education (mainly for girls), empowerment of women, access to birth control, and other things we mostly consider positive also happen to lower the birthrate. I'm starting to think that wealth and education may be the 'solution' to the Fermi paradox.

−4