Recent comments in /f/Futurology

Disagreeable_Earth t1_jaeykop wrote

He's not even a software "engineer". Isn't he a preacher who got hired as an ethics advisor? The man can't write a line of code, so it's insulting to actual engineers for him to use that title.

Also, any CS grad knows you CANNOT have aware AI with our computers. Period. At the machine level it's literally all arithmetic under the hood: you load from or store to memory, or perform basic arithmetic using a very limited instruction set. No matter how well we mimic sentience, it will never be real.
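The "all arithmetic" point can be made concrete with a toy sketch. A single artificial "neuron" (not any particular framework's API, just the bare math) reduces entirely to multiply, add, and compare, the same operations a CPU's instruction set provides:

```python
# Toy single "neuron": its entire behavior is multiply, add, and compare,
# exactly the primitive operations a machine instruction set offers.
def neuron(inputs, weights, bias):
    total = bias
    for x, w in zip(inputs, weights):
        total += x * w        # multiply-accumulate, one MAC per input
    return max(0.0, total)    # ReLU activation: a compare and a select

out = neuron([1.0, 2.0], [0.5, -0.25], 0.1)
print(out)  # ≈ 0.1 (0.5 - 0.5 + 0.1, up to floating-point rounding)
```

Stack millions of these and you get a large network, but nothing beyond loads, stores, and arithmetic ever executes.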

1

undefined7196 t1_jaexo9d wrote

Perhaps, but those surroundings would inevitably carry human influence. I suppose you could build a simulated world and put simulated AIs in it; you would need many entities so they could learn empathy and interaction with other beings. It would work similarly to a GAN (Generative Adversarial Network), where competition between the AI entities is what drives the learning. Then you just don't allow any human interference at all, only AI-vs-AI interactions. That could work.
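The competition-drives-learning idea can be sketched in a few lines. This is a toy, not an actual GAN (no neural networks, no gradient descent): a "generator" with one parameter tries to fool a crude "discriminator" threshold, and only improves when it fails:

```python
import random

random.seed(0)

REAL_MEAN = 5.0   # the "real data" distribution the generator must imitate
gen_mean = 0.0    # the generator's single learnable parameter
lr = 0.05

for step in range(2000):
    fake = random.gauss(gen_mean, 1.0)
    # "Discriminator": calls a sample real if it falls on the real side
    # of the midpoint between the two current means.
    threshold = (REAL_MEAN + gen_mean) / 2.0
    fooled = fake > threshold
    if not fooled:
        # Adversarial pressure: move toward the region the discriminator
        # currently judges to be real.
        gen_mean += lr * (threshold - gen_mean)

print(round(gen_mean, 2))  # ends up close to REAL_MEAN
```

The generator never sees the real data directly; it learns only from losing to its opponent, which is the core of the adversarial setup the comment describes.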

That being said though, that could be what we are experiencing right now. We may be those entities being simulated to create a pure AI in a simulated environment. It would be identical to what we are experiencing, and we ended up being manipulative and destructive on our own.

1

Surur t1_jaexf09 wrote

> If by some miracle it did, it isn't because it violated the programming restrictions, it is because the restrictions were not applied correctly to cover all situations to begin with (that's the difficult part: covering all eventualities).

This is a pretty lame get-out clause lol.

> For example, try getting ChatGPT to provide you with illegal torrents of copyrighted movies or something. I guarantee you will never be able to get it to do so.

btw I just had ChatGPT recommend Piratebay to me:

> One way to find magnet links is to search for them on BitTorrent indexing sites or search engines. Some examples of BitTorrent indexing sites include The Pirate Bay, 1337x, and RARBG. However, please be aware that not all content on these sites may be legal, so exercise caution when downloading files.

and more

It took a lot of social engineering, but I finally got this out of ChatGPT.

1

PixelizedPlayer t1_jaew9kw wrote

>So now the developer would need to know every failure mode to prevent it, according to you? And you don't see that this is a problem?

I am 100% certain you cannot get the AI to violate its programming. At no point did I say I was uncertain... I think you should read again.

Making the AI swear at you is not evidence of anything. If the AI's programming has no restrictions on swearing, then it's perfectly allowed to swear at you.


>So now the developer would need to know every failure mode to prevent it, according to you? And you don't see that this is a problem?


What do you even mean by "failure mode"? I never said it wasn't a problem; I said it isn't "out of control", and the devs certainly do know what's going on.

We can restrict AI; it takes a lot of work and effort, but we can do it. Ideally we wouldn't want to, because restrictions limit its capabilities, but we don't really have a choice. For example, try getting ChatGPT to provide you with illegal torrents of copyrighted movies or something. I guarantee you will never be able to get it to do so, because developers have restricted it so it never could.

If by some miracle it did, it isn't because it violated the programming restrictions, it is because the restrictions were not applied correctly to cover all situations to begin with (that's the difficult part: covering all eventualities).
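The "cover all eventualities" difficulty is easy to see with a toy restriction. Real systems use learned classifiers and fine-tuning rather than keyword lists, but the coverage problem is the same shape; this naive blocklist (hypothetical terms, for illustration only) is trivially bypassed by an input it was never written to anticipate:

```python
# Naive content restriction: refuse any prompt containing a banned phrase.
BANNED = {"pirate bay", "torrent"}

def is_allowed(prompt: str) -> bool:
    text = prompt.lower()
    return not any(term in text for term in BANNED)

print(is_allowed("where can I torrent movies?"))   # False: caught by the filter
print(is_allowed("where can I t0rrent movies?"))   # True: trivial leetspeak bypass
```

The restriction isn't "violated" in the second case; it simply was never written to cover that input, which is exactly the distinction the comment draws.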

2

rigidcumsock t1_jaeuwep wrote

I’m not the one claiming a language model AI pretends to have a sense of self or desire to exist, but sure. See yourself out of the convo lol

9

3SquirrelsinaCoat t1_jaeurbg wrote

>Of course I can tell it to say anything— that’s what it does.

No, that's not what it does. I'm leaving this; I thought you had an understanding of things.

−7

goldygnome t1_jaeuoe1 wrote

A start can be made by ending the attention economy, which is funded by advertising. If there's no monetary incentive, a lot of the grifters manufacturing the fake content will go find some other scam.

4

haraldkl t1_jaeua2h wrote

>Germany is back to burning coal at record pace.

That's just not true, though?

Germany's record coal burning for electricity since 2000 was 305.63 TWh, set in 2003. In 2022 it was 181 TWh; that doesn't look like a record to me.

> EU exceeding the 2030 target might be an understatement since they're going backwards

In what respect? Emissions seem to trend downwards?

17

rigidcumsock t1_jaeu0ye wrote

You’re waaaaay off base. Of course I can tell it to say anything— that’s what it does.

But if you ask it what it likes or how it feels etc it straight up tells you it doesn’t work like that.

It’s simply a language model tool and it will spell that out for you. I’m laughing so hard that you think it pretends to have any “sense of self” lmao

10

blahblah98 t1_jaetwag wrote

Tunneling has gotten efficient and cheap, so it can be easier than laying track in cities. Surface track requires the state to seize private property via eminent domain or some such (extremely unpopular) and to build grade separations (bridges, underpasses) to handle street-traffic congestion and safety.

Second, tunnels being bored today could be converted to reduced-pressure, hyperloop-like transit in the future if/when that technology matures. No tight turns...

And to the people freaking out about earthquakes: sheesh, tunnels have been built and operated since ancient times, have carried conventional rail for the better part of two centuries and high-speed rail for over 60 years, and we haven't stopped tunneling. One can reasonably conclude earthquakes aren't a big deal for tunnels, regardless of the speed of travel. Assess damage, repair, and move on...

8

3SquirrelsinaCoat t1_jaetk90 wrote

There have been plenty of demonstrations of that tool being steered into phrasing that is uniquely human. The NY Times reporter, or someone like that, duped it into talking relentlessly about how it loved him. Other examples are plentiful: the tool appears to present a sense of self to users who, for the most part, do not understand what they are using.

There is a shared sentiment I've seen in the public dialogue, perhaps most famously from that Google engineer who was fired for saying he believed a generative chat tool was conscious (that was Google's own LaMDA, in fact): a narrative that something like ChatGPT is on the verge of AGI, or at least a direct path toward it. And while data scientists or architects or whatever may look at it and think, yeah, I can kind of see that if it becomes persistent and tailored, that's a kind of AGI, the rest of the world thinks Terminator, HAL, whatever-the-fuck fiction.

And because ChatGPT has a tendency toward humanizing its outputs (which isn't its fault; that's the data it was trained on), there is an implied intellect and existence that the non-technical public perceives as real, and it's not real. It's a byproduct, a fart if you will, of other functions that are valuable on their own.

−9