Recent comments in /f/MachineLearning

bloodmummy t1_j931005 wrote

Reply to comment by gunshoes in [D] Please stop by [deleted]

A family member now thinks he knows more about ML than I do because he read 2 articles on ChatGPT and figured out how to prompt it... I'm literally doing a PhD in ML...

18

Sphere343 t1_j92y4se wrote

Reply to comment by gwern in [D] Please stop by [deleted]

An AI could, in theory, go from not being sentient to being sentient if it gains enough information in a certain way. As for the specific way? No clue, since it hasn't been found yet. But through data gathering and self-improvement, an AI could become sentient if the creators didn't put limits in place, or if they programmed the self-improvement in a certain way.

Would it truly be sentient? Unknown. But what is certain is that even if the AI isn't sentient, once it has gained enough information to respond in any circumstance, it will seem as if it is. Except for true creative skill, of course. You'd have to be truly sentient to create brand-new, detailed ideas and such.

0

Sphere343 t1_j92woql wrote

Reply to comment by Username912773 in [D] Please stop by [deleted]

Yes, that does seem to be what a lot of these people think. But the thing is, an AI being self-aware or sentient isn't that bad a thing; as long as it is done correctly, it can actually be a good thing, contrary to all that. First off, a newly created sentient AI is like suddenly having a baby: you need to raise it right. For an AI, that means giving it information that is as unbiased as possible, making clear what is right and wrong, and not giving it a reason to hate you (abusing it, trying to kill it). The AI may turn out good, just like any human, or turn bad, just like many others.

And the best way to make a sentient AI without all these problems? Base it on the human brain. Create emotional circuits and functions for each individual emotion, and so on. The tech and knowledge for all this doesn't exist yet, of course, so we can't do it currently. In the future, though, the most realistic way to create a sentient AI may be to find a way to digitize the human brain. That seems possible, given that the brain works as a kind of organic "program," with all its neural networks and everything.

The major taboo of AI is simple: don't do stupid stuff. Don't give unreasonable commands that can make it do weird things, like telling it to accomplish something "by any means." Don't feed the AI garbage information. And most certainly don't antagonize a sentient AI. I also personally believe that a requirement for an AI to be allowed to be created and be sentient is to show that it has emotion circuits, so that it can be trained in what is good and bad.

If an AI doesn't have any programming to tell right from wrong, then naturally a sentient AI would be dangerous. I think that's the main problem. I rambled a bit, but anyway: yes, they should be created, just not until we have the knowledge I mentioned.

4

Username912773 t1_j92rzza wrote

Reply to comment by loga_rhythmic in [D] Please stop by [deleted]

I think it might be seen as something to fear: a truly sentient machine would have the ability to develop animosity toward humanity, or a distrust or hatred of us, in the same way we might distrust it.

It also might be seen as something that makes being human entirely obsolete.

−4

InterlocutorX t1_j92rih0 wrote

All these posts do is make the signal-to-noise ratio worse, because this is also noise. If you want to ask a mod why they aren't moderating, send a message to a mod.

Otherwise, downvote and scroll on.

0

zackline t1_j92jeff wrote

Reply to comment by Deep-Station-1746 in [D] Please stop by [deleted]

> Isn’t this kind of high-quantity-low-quality trend inevitable after some threshold popularity of the base topic?

I think not; on /r/covid19 they stayed on top of it by enforcing strict rules that kept the discussion focused on science.

Here it seems acceptable for teenagers to post their opinions. The rules, or their enforcement, seem more lax.

1

Mnbvcx0001 t1_j92jb58 wrote

I am not a scientist or a PhD holder, but I'm really fascinated by what ML can do, so I'm leveling up through a bootcamp to learn DS and ML. My question is: how do I get into ML research while keeping my day job? I'm interested in how ML can be used for CV as well as in areas of cybersecurity. How should a person like me go about researching a simple topic and collaborating with the more experienced community? TY for any guidance.

1