Recent comments in /f/Futurology

djdefenda t1_jbz5gqz wrote

There's potential for this to be abused, for example, I am putting a chatbot in my website, and it has options to set the personality of the chatbot, mostly it is set to "answer in detail being polite, happy and helpful" and also, you can set it to be a "sarcastic smart ass".......so if there was an online resource for teenagers (like an AI version of the Kids Helpline) it also has the potential to be hacked and have new prompts inserted......even if the hack was found and fixed, imagine the potential damage if someone gave it the prompt for something nefarious.

3

dramignophyte t1_jbz41cj wrote

Okay so I don't want this to sound like I think you are wrong, because I really do think you are right but... For a doctor, you sure are making assumptions not based on research. How can I say that? Chat gpt has been around for how long? How many peer reviewed studies have come out on the potential for it to have a positive or negative effect in this scenario? Or even just studies? So, by math, I can be like 90% sure, you are basing your answer on your own personal views based on other things and applying it here, which MAY BE TRUE, I really want to emphasize that I agree with you and do not think you are wrong, I just think you are wrong for speaking on it in a way as if you are an expert on the subject when nobody is an expert on ai to human interactions and their effect on mental health, you are an expert on mental health, not AI interactions. Like your reasoning of protections against self harm? I would argue an AI program could, if not already, eventually be better at determining if someone is at risk of hurting themselves or dangerous behavior and putting in protections on privacy are also fixable problems.

5

nonusedaccountname t1_jbz3fcc wrote

The issue here isn't that children can talk to it. In fact, it's probably a useful tool for teenagers to ask questions they could get in trouble for. Like sex education in more close-minded communities. The issue is that in the example the AI wasn't able to pick up on subtle context clues over multiple messages that a human could. If an adult were told those things they would know something is wrong and could help the child, while the AI can't, even if it would understand

2

ufobaitthrowaway t1_jbyzpta wrote

Tbh I had some good conversations with Chatgpt, although there are some hiccups here and there. It's still pretty good. Although I don't necessarily need it, with Chat you can just hop into a topic without making it awkward. With people, that's a little bit more complicated. Also it removes certain borders and stigma, no judgement either. I see the positive side of it really. Everyone can benefit from it, even A.I. itself.

1

Surur t1_jbyxqvr wrote

> A confidant needs to understand the real world and human emotions, which are extremely difficult for AI systems.

ChatGPT actually shows pretty good theory of mind. I think it just needs a lot more safety training via human feedback. There is a point where things are "good enough".

−1

just-a-dreamer- t1_jbyvt7m wrote

It is dangerous, I would not do it. For two reasons.

First, AI language models do not "know" any truth or falsehood, they run a popularity contest on data sets. They give the answer the majority of humans agree upon, that doesn't mean it is the right answer. They don't "think".

Second, any personal information given to AI language models is likely recorded at some point. That is data that goes on your personal record file.

The more data you put out there, the easier it is to figure out who you are as a person. And that is giving away a giant competitive advantage in life.

13

IndigoFenix t1_jbytdxp wrote

In theory, an AI confidant would be good.

DO NOT USE CHATGPT FOR THIS.

ChatGPT is very good at looking sensible and intelligent until you start pushing the boundaries of its existing knowledge and realize that it has less actual comprehension of the real world than a toddler, and zero recognition of its own limitations except for cases where its designers have specifically trained it not to answer.

If you give it half a chance, it will confidently spout bullshit and do it in a way that makes you think it knows what it is talking about, until you happen to ask it about something you already know and realize just how little it knows and how much it pretends to.

ChatGPT is a tool for generating text that sounds good, and can help with creative writing. It is good at sounding intelligent and articulate. The actual content is not intelligent, except when copied from a human source (and it cannot tell the difference between something it read and something it made up). It is NOT human. Do not treat it as though it is.

5