Recent comments in /f/MachineLearning

Dendriform1491 t1_j9ihc8i wrote

What do you see here?

https://www.youtube.com/watch?v=9Tt7aqHFUCU

This animation consists of geometric figures moving. But your mind may attribute mental states, intentions and even a personality to those figures.

This capability, "theory of mind", makes humans and other animals capable of attributing mental states even to inanimate objects that do not have a mind. In your case: black holes and other stuff.

5

Dendriform1491 t1_j9if6mj wrote

Ancient people did not understand natural phenomena, such as atmospheric events, astronomical events, seasonal cycles in agriculture, etc. In some cases, they came up with belief systems where supernatural entities such as deities governed those phenomena.

Today, science has explanations for many of those natural phenomena. Even with some open questions still remaining, now we understand things well enough so that we can articulate what is going on in clear terms without the need for a god of thunder, god of rain, etc.

I think you're following in the footsteps of early human cultures that assigned a god to what they perceived as unexplained phenomena. Namely: black holes, AI, sentience, etc.

9

modi123_1 t1_j9if65a wrote

>I claim that it is impossible to see what is inside a black hole, and to say that god isn't there is fundamentally an assumption.

Ok.

> I apply this analogy to artificial intelligence, claiming that because not everything is fully understood, there is room for something the engineers missed that makes it sentient.

What AI are you talking about? Every 'AI'? Some hypothetical 'tv-and-movie-AI'?

9

Disastrous_Nose_1299 OP t1_j9iepsy wrote

I claim that it is impossible to see what is inside a black hole, and to say that god isn't there is fundamentally an assumption. I apply this analogy to artificial intelligence, claiming that because not everything is fully understood, there is room for something the engineers missed that makes it sentient. I do not claim that god exists or that AI is sentient, and I apologize if I didn't make this post the easiest to start a discussion with.

−3

Disastrous_Nose_1299 OP t1_j9idtnf wrote

This topic could lead to interesting discussions and debates about the nature of consciousness and the ethical considerations surrounding the development and use of AI technology. Additionally, the comparison to the concept of God being hidden in a black hole could spark discussions about the role of faith, science, and the unknown in our understanding of the universe.

−12

LudaChen t1_j9i0vmp wrote

To put it simply, a bottleneck layer is a process of first reducing the dimensionality and then increasing it again. So why do we need to do this?

In theory, skipping the dimensionality reduction would preserve the most information and the most features, which is fine in itself. But for a specific task, not all features are equally important, and some may even have a negative impact on the results. So we need some mechanism to select the features that deserve more attention, and reducing the dimensionality achieves this to some extent.

Increasing the dimensionality afterwards, on the other hand, restores the representational capacity of the network. Although the features after expansion have the same number of channels as the features before reduction, the expanded features are reconstructed from the low-dimensional ones, so they can be considered more specific to the current task.
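A minimal numpy sketch of the reduce-then-expand idea (the sizes 256 and 64 and the random weights are illustrative assumptions, not from any particular architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 256-d features squeezed to a 64-d bottleneck, then restored.
d_in, d_bottleneck = 256, 64

W_reduce = rng.standard_normal((d_in, d_bottleneck)) * 0.01  # dimension reduction
W_expand = rng.standard_normal((d_bottleneck, d_in)) * 0.01  # dimension expansion

def bottleneck(x):
    # Reduce: project onto a smaller subspace, keeping only 64 directions.
    z = np.maximum(x @ W_reduce, 0.0)   # ReLU nonlinearity
    # Expand: restore the original channel count from the low-dim features.
    return z @ W_expand

x = rng.standard_normal((8, d_in))      # batch of 8 feature vectors
y = bottleneck(x)
print(y.shape)                          # (8, 256): same width as the input
```

The output has the same number of channels as the input, but every vector passed through the 64-d bottleneck, so the network was forced to decide which directions of the input actually matter.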

1