Recent comments in /f/Futurology

MagicManTX84 t1_j9f93vw wrote

Freud speaks of the ego and the id. I think that to be sentient, AI would need something like this and would, at a minimum, be interested in self-preservation, and probably a lot more. In humans, behavior is regulated through morals, values, and social pressure. So what does that look like for AI? If 1,000,000 social posters tell an AI to “kill itself”, will it do it?

1

WinterWontStopComing t1_j9f67k0 wrote

Fun fact: the Turing test isn’t about AI being on the same level as us. It doesn’t NEED to progress that far. It’s about whether the machine can convincingly mimic us. I think the Chinese room thought experiment touches on some of these ideas, but don’t quote me on that.

EDIT: In a nutshell, we are subjective entities, and we have to accept that everything beyond the self may not be real but only an uncannily close approximation of reality, and that, for us, the two are all but the same.

6

BayFunk36 t1_j9f5xv2 wrote

To answer that, we’d need to better understand our own emotions. If we have some sort of free will, then probably not; but if we’re just extremely complicated chemical reactions, then an extremely complicated AI could experience the same things we do.

3