Recent comments in /f/MachineLearning
Dendriform1491 t1_j9ihc8i wrote
Reply to comment by Disastrous_Nose_1299 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
What do you see here?
https://www.youtube.com/watch?v=9Tt7aqHFUCU
This animation consists of moving geometric figures. But your mind may attribute mental states, intentions, and even a personality to those figures.
This capability, "theory of mind", makes humans and other animals capable of attributing mental states even to inanimate objects that do not have a mind. In your case: black holes and other stuff.
Disastrous_Nose_1299 OP t1_j9iha45 wrote
Reply to comment by modi123_1 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
Do I need to give timestamps? If yes, I will find them for you.
modi123_1 t1_j9ih7h1 wrote
Reply to comment by Disastrous_Nose_1299 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
You have failed to provide the exact context to that summary, and just leaning on name dropping is poor form, Jack.
Disastrous_Nose_1299 OP t1_j9ih0hj wrote
Reply to comment by modi123_1 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
He literally said engineers don't fully know how AI works, and goodbye.
modi123_1 t1_j9igxqg wrote
Reply to comment by Disastrous_Nose_1299 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
Aight, well I find your paraphrasing about a discussion on a podcast that hints at an 'air of mystery' coupled with your exaggerated generalities to be sufficiently lacking to continue this.
Adios, muchachos.
Disastrous_Nose_1299 OP t1_j9igekn wrote
Reply to comment by Disastrous_Nose_1299 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
I'm not familiar with smaller AIs that aren't ChatGPT; that is my bad.
Disastrous_Nose_1299 OP t1_j9igcq4 wrote
Reply to comment by Disastrous_Nose_1299 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
I'm not familiar with AI that plays and operates video games; I'm not a professional, so I'm not sure about smaller AIs. That is an inaccuracy on my behalf.
Disastrous_Nose_1299 OP t1_j9ig56h wrote
Reply to comment by modi123_1 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
Lex Fridman explained on the Joe Rogan podcast that there is an air of mystery surrounding how AI works. That is my source; I believe he has worked on AI.
modi123_1 t1_j9ifyng wrote
Reply to comment by [deleted] in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
I would disagree with your infinitely large broad brush strokes slathered on there. Log files exist for a reason.
[deleted] t1_j9ifhel wrote
Reply to comment by modi123_1 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
[deleted]
Disastrous_Nose_1299 OP t1_j9ifb16 wrote
Reply to comment by Dendriform1491 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
I am an atheist.
I do not believe in god.
SankarshanaV t1_j9if980 wrote
For the input, each kernel is acting upon ONE channel only, right?
But in general, shouldn't the number of channels of the kernel be equal to that of the previous layer?
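The usual convention the question is getting at can be sketched in a few lines of plain Python (a hypothetical helper, not from any library): each kernel carries one 2-D slice per input channel, so a kernel's channel count must match the previous layer's, and the layer's output channel count equals the number of kernels.

```python
# Hypothetical sketch of the standard conv-layer shape convention:
# a kernel for a C_in-channel input has shape (C_in, k, k), and the
# layer produces one output channel per kernel.

def conv_output_channels(in_channels, kernels):
    """Check that every kernel spans all input channels; return the
    number of output channels (= number of kernels)."""
    for i, kernel in enumerate(kernels):
        assert len(kernel) == in_channels, (
            f"kernel {i} has {len(kernel)} channel slices, "
            f"expected {in_channels}")
    return len(kernels)

# 3-channel input (e.g. RGB), 8 kernels, each of shape (3, 3, 3):
kernels = [[[[0.0] * 3 for _ in range(3)] for _ in range(3)]
           for _ in range(8)]
print(conv_output_channels(3, kernels))  # -> 8
```

So each kernel does not act on one channel only: it acts on all input channels at once, and its per-channel responses are summed into a single output channel.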
Dendriform1491 t1_j9if6mj wrote
Reply to [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
Ancient people did not understand natural phenomena, such as atmospheric events, astronomical events, seasonal cycles in agriculture, etc. In some cases, they came up with belief systems where supernatural entities such as deities governed those phenomena.
Today, science has explanations for many of those natural phenomena. Even with some open questions still remaining, now we understand things well enough so that we can articulate what is going on in clear terms without the need for a god of thunder, god of rain, etc.
I think you're following in the footsteps of the early human cultures that assigned a god to what they perceived as unexplained phenomena. Namely: black holes, AI, sentience, etc.
modi123_1 t1_j9if65a wrote
Reply to comment by Disastrous_Nose_1299 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
>I claim that it is impossible to see what is inside a black hole, and to say that god isn't there is fundamentally an assumption.
Ok.
> I apply this analogy to artificial intelligence, claiming that because not everything is fully understood, there is room for something the engineers missed that makes it sentient.
What AI are you talking about? Every 'AI'? Some hypothetical 'tv-and-movie-AI'?
Disastrous_Nose_1299 OP t1_j9iepsy wrote
Reply to comment by modi123_1 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
I claim that it is impossible to see what is inside a black hole, and to say that god isn't there is fundamentally an assumption. I apply this analogy to artificial intelligence, claiming that because not everything is fully understood, there is room for something the engineers missed that makes it sentient. I do not claim that god exists or that AI is sentient, and I apologize if I didn't make this post the easiest to start a discussion with.
modi123_1 t1_j9iegdw wrote
Reply to comment by Disastrous_Nose_1299 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
I see a large number of nebulous claims, and little in the way of starting a discussion. Good luck with that.
Disastrous_Nose_1299 OP t1_j9idtnf wrote
Reply to comment by modi123_1 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
This topic could lead to interesting discussions and debates about the nature of consciousness and the ethical considerations surrounding the development and use of AI technology. Additionally, the comparison to the concept of God being hidden in a black hole could spark discussions about the role of faith, science, and the unknown in our understanding of the universe.
modi123_1 t1_j9idmvj wrote
Reply to [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
What is your discussion point?
Insecure--Login t1_j9ibbc4 wrote
Sorry, this is a bit off-topic, but what medical imaging datasets are you working with? I'm often looking for those, and you seem to be familiar with very large ones.
[deleted] t1_j9i6ld5 wrote
[removed]
Valachio OP t1_j9i5eh3 wrote
Reply to comment by Superschlenz in [D] What's the best way to capture a person's 3D likeness right now? by Valachio
Anytwo? I'm looking that up and can't find anything on it.
brucebay t1_j9i2kd6 wrote
Reply to comment by cccntu in [P] minLoRA: An Easy-to-Use PyTorch Library for Applying LoRA to PyTorch Models by cccntu
Thank you for this clear explanation.
LudaChen t1_j9i0vmp wrote
To put it simply, the bottleneck layer is a process of first reducing dimensionality and then increasing it. So why do we need to do this?
In theory, not reducing dimensionality preserves the most information and the most features, which is certainly not a problem. However, for a specific task, not all features are equally important, and some features may even have a negative impact on the results. We therefore need some means of selecting the features that deserve more attention, and reducing dimensionality achieves this to some extent. Increasing dimensionality again, on the other hand, enhances the representational capacity of the network. Although the channel count after the expansion is the same as before the reduction, the expanded features are actually reconstructed from the low-dimensional ones and can be considered more specific to the current task.
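The reduce-then-expand channel arithmetic can be sketched as follows (the 256 → 64 → 256 figures are illustrative assumptions in the style of a ResNet bottleneck, not numbers from this thread):

```python
# Illustrative channel counts for a bottleneck block: a 1x1 conv first
# reduces the channel count (feature selection), a 3x3 conv then works
# in that cheaper low-dimensional space, and a final 1x1 conv restores
# the original channel count (representational capacity).
def bottleneck_channels(c_in, reduction=4):
    c_mid = c_in // reduction  # after the dimensionality reduction
    c_out = c_mid * reduction  # after the dimensionality increase
    return c_in, c_mid, c_out

print(bottleneck_channels(256))  # -> (256, 64, 256)
```

Note that although the output channel count matches the input, the 3x3 convolution only ever operates on the cheaper 64-channel representation, which is where the compute savings come from.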
Rubberdiver t1_j9hz9fz wrote
Reply to comment by ParanoidTire in [D] Simple Questions Thread by AutoModerator
Great help 🤦‍♂️
[deleted] t1_j9ihf4g wrote
Reply to comment by Dendriform1491 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
[deleted]