Recent comments in /f/Futurology

demauroy OP t1_jbybmk7 wrote

I meant that real people hold a lot of opinions that are not backed by proper knowledge, formed just by applying some general principle that is not relevant to the conversation. Something like people conflating radio emissions with radioactive emissions and being afraid of 5G waves (or Wi-Fi, for that matter).

2

Jasrek t1_jbyb6zj wrote

I mean, that is worrisome, but not for the reason you're implying.

This is how technology gets neutered to the point of complete uselessness.

"A program that can answer questions? But what if a child asks questions! They could ask any question at all and be given answers, even if the contextual nature of the question makes it inappropriate in ways a program can't possibly understand! Quick, it must be destroyed! Destroyed immediately for the sake of the children!"

I'm reminded of how people were worried that kids playing Dungeons & Dragons would result in them sacrificing their friends to Satan. What the heck is stopping the kid from googling "how to hide a bruise"? Literally nothing. I just did it, the first result is a 'how to' video on YouTube so I can be shown how to do it properly. Yet somehow this chat program is a horrible, terrible menace.

5

Jasrek t1_jbyagt1 wrote

> You cannot trust any information or advice it gives you, hell you can convince it that 1+1=3

So, if you give it incorrect information, it provides you with incorrect information?

I am shocked, shocked, to be told that a computerized system operates on the principle of "Garbage in, garbage out".

3

Surur t1_jby9wy6 wrote

ChatGPT says with an attitude like yours, you will be "left behind in an increasingly AI-driven world" and suggests you should "seek to understand the potential of AI and how it can be used to solve complex problems in a variety of fields, including healthcare, finance, and transportation."

3

O_for_a_muse_of_fire t1_jby8rbd wrote

I can't think outside of the box or see the forest for the trees, so my thinking buddy would point out if there's a better and/or easy way to solve problems, remind me of things I need to do, and keep me on schedule. I would name him "Mycroft," and he would speak to me in Mark Gatiss' voice and be just as snarky as he was on "Sherlock." I wouldn't care if I had to pay extra for copyright and/or royalties. It would be freakin' WORTH IT!

2

Taxoro t1_jby8hs8 wrote

People need to stop thinking ChatGPT and other AIs have actual intelligence or can give proper information or advice. They can't.

ChatGPT has no idea what it's talking about; it just spews out sentences that sound human-like. You cannot trust any information or advice it gives you. Hell, you can convince it that 1+1=3.

19

mhornberger t1_jby7t2y wrote

Not for social ills like racism, no. But some social ills are due to problems that technology can in principle address. Such as controlled-environment agriculture, cultured meat, cellular agriculture, and other tech incrementally addressing food security and water security, by reducing the amount of arable land and water needed to produce your food. Or by electrification, renewables (and/or nuclear), and BEVs incrementally reducing the problems associated with fossil fuel dependence.

To me pollution is a technology problem. To others it's a social problem. But take, say, transportation. An ICE Lada burns the same fuel whether it was made in a capitalist auto plant or one under communism. You have to replace the ICE vehicle with better technology. Mass transit exists too, but many people still want or need automobiles. It would be silly to forgo electrification until that hypothetical future date when we've changed society so no one wants or needs an automobile.

2

iuytrefdgh436yujhe2 t1_jby7iw1 wrote

Kessler Syndrome is the search term for anyone curious to learn more about the problem of space junk. It is a hypothesis that orbital debris could trigger a cascade of collisions that destroys basically everything we have in orbit and leaves behind a debris field dense enough to effectively cut us off from orbit entirely. It has also been proposed as a 'great filter' in the Fermi Paradox: an otherwise advanced civilization could inadvertently hamstring its own ability to leave its planet this way.

This is also, as an aside, why we should really want to avoid space-to-space combat.

2

JoshuaACNewman t1_jby0lu4 wrote

Yes and no. Eliza did a great job, too, just by repeating things back.

The problem with ChatGPT is that it knows a lot but doesn’t understand things. It’s effectively a very confident mansplainer. It doesn’t know what advice is good or bad — it just knows what people say is good or bad. It hasn’t studied anything in depth; or, more accurately, it doesn’t have the judgment to know what to study with remove and what to believe because it only knows what people say.

I say this because autocomplete was suggesting to Cory Doctorow the other day that he ask his babysitter, "Do you have time to come sit [on my face]?" It doesn't know what's appropriate for a situation. It only knows what people think is appropriate for a situation. It's appropriate to ask someone to sit on your face when that's your relationship. It's not appropriate to ask the babysitter. "Sit" means practically opposite things here that are similar in almost every way except a couple of critical ones.

−1

leaky_wand t1_jbxxv4x wrote

I would probably have a few. I'd want a responsible one, a fun one, and a crazy one. It'd be like a conversation tree in an RPG where there's always a good answer, a silly answer, and an insane answer. I need that option to make me laugh (and occasionally to listen to).

6