Recent comments in /f/MachineLearning
Blakut t1_j9ir7zh wrote
Reply to comment by IsABot-Ban in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
It affects stuff around it, but those properties can be thought of as belonging to "the hole itself", like mass, charge, etc. We still can't look inside.
cthorrez t1_j9ir0lx wrote
Reply to comment by thomasahle in Unit Normalization instead of Cross-Entropy Loss [Discussion] by thomasahle
Have you tried it with, say, an MLP or a small convnet on CIFAR-10? I think that would be the next logical step.
deluded_soul OP t1_j9iqtgs wrote
Reply to comment by Insecure--Login in [Discussion] ML on extremely large datasets and images by deluded_soul
The dataset is more microscopy-related, and unfortunately I am not allowed to share :(
thomasahle OP t1_j9iq4rz wrote
Reply to comment by cthorrez in Unit Normalization instead of Cross-Entropy Loss [Discussion] by thomasahle
Should have said accuracy.
Only MNIST though. Went from 3.8% error on a simple linear model to 1.2%, on average, with an 80%-20% train/test split. So in no way amazing, just interesting.
Just wondered if other people had experimented more with it, since it's also a bit faster training.
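For anyone curious, here's a minimal sketch of one way such a loss could look. This assumes "unit normalization" means L2-normalizing the logits and pushing the true-class component toward 1, which may differ from the exact formulation in the original post:

```python
# Hypothetical "unit normalization" loss: L2-normalize the logits, then
# maximize the true-class component. This is an assumption about the
# method, not necessarily the OP's exact formulation.
import torch
import torch.nn.functional as F

def unit_norm_loss(logits, targets):
    z = F.normalize(logits, p=2, dim=1)              # unit-length rows
    picked = z[torch.arange(len(targets)), targets]  # true-class components
    return -picked.mean()                            # push them toward 1

# Simple linear model on MNIST-shaped inputs, as in the experiment above.
model = torch.nn.Linear(784, 10)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = unit_norm_loss(model(x), y)
loss.backward()
```

Since there's no softmax or log involved, it's plausible this trains a bit faster per step, as noted above.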
cthorrez t1_j9iq35y wrote
> test loss decreased
What function are you using to evaluate test loss? Cross-entropy, or this norm function?
IsABot-Ban t1_j9ipwue wrote
Reply to comment by Blakut in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
While I agree with your sentiment on the whole... we do get some measurements of a black hole precisely because it affects things outside of itself. I'll agree with the rest, as I've been studying AI. We definitely can, and often do, understand the paths. The reality is that it would take us far longer to go through it all by hand; if it didn't, AI would be pointless.
IsABot-Ban t1_j9ipqib wrote
Reply to comment by Disastrous_Nose_1299 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
This "we do not understand it" is complete BS. It's just that we can't run through the math in any reasonable time frame. Effectively we know how; we just don't know which exact path was taken without marking it... which we can and sometimes will do.
yldf t1_j9ipprk wrote
Reply to [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
While r/MachineLearning might in part attract a slightly less scientific crowd than other CS-related subs, expecting them to take this seriously is still very much a stretch…
Blakut t1_j9ipddh wrote
Reply to [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
What you are alluding to is the god of the gaps, not a black box theory: the mistaken belief that tucking god into ever harder-to-find places will somehow maintain his presence in this world as more and more things are discovered and explained.
As an (astro)physicist, I think the only connection between the black box of AI and the black hole is the word "black". Nobody is stopping you from opening the black box of AI and looking inside at the numbers; whether that helps you or not is an entirely different matter. You can never do that with a black hole. No matter what technology or tool you use, you can't peer inside a black hole. And nothing that happens inside influences what's outside, unlike the "black box" of AI.
The only point that makes sense is that little part at the end. Yes, an AI could've published this text, but even an AI that could cobble together this long text wouldn't make the mistake of comparing a black hole with a black box. Or would it? Who knows. Better question: does it matter?
BoiElroy t1_j9ipbtg wrote
This is not the answer to your question, but one intuition I like about the universal approximation theorem, and thought I'd share, is the comparison to a digital image. You use a finite set of pixels, each of which can take on a certain set of discrete values. With a 10 x 10 grid of pixels you can draw a crude approximation of a stick figure. With 1000 x 1000 you can capture a blurry but recognizable selfie. Within those finite pixels and the discrete values they can take, you can essentially capture anything you can dream of: every image in every movie ever made. Obviously there are other issues later, like whether your model's operational design domain matches the distribution of the training domain, or whether you just wasted a lot of GPU hours lol
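The same intuition as a throwaway toy in code (mine, not part of the original comment; the model size and target function are arbitrary): a small MLP with enough hidden units can fit a wiggly 1-D function, much like adding pixels sharpens an image.

```python
# Toy universal-approximation demo: more hidden units = more "pixels"
# available to approximate an arbitrary continuous target.
import math
import torch

x = torch.linspace(-math.pi, math.pi, 512).unsqueeze(1)
y = torch.sin(3 * x) + 0.5 * torch.cos(7 * x)  # an arbitrary wiggly target

model = torch.nn.Sequential(
    torch.nn.Linear(1, 256), torch.nn.ReLU(), torch.nn.Linear(256, 1)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.4f}")  # small, given enough capacity
```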
BoiElroy t1_j9ioqcz wrote
Reply to comment by relevantmeemayhere in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
Yeah, you should always exhaust existing classical methods before reaching for deep learning.
QuantumFTL t1_j9io708 wrote
Reply to comment by Animated-AI in [P] The First Depthwise-separable Convolution Animation by Animated-AI
I'd absolutely love to see those, if you're willing :)
adventuringraw t1_j9in5sj wrote
Reply to comment by relevantmeemayhere in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
I mean... the statement specifically uses the phrase 'arbitrary functions'. GLMs are a great tool in the toolbox, but the function family they optimize over is very far from 'arbitrary'.
I think the statement mostly means 'find very nonlinear functions of interest when dealing with very large numbers of samples from very high-dimensional sample spaces'. GLMs are used in every scientific field, but certainly not for every application. Some form of deep learning really is still the only game in town for certain kinds of problems, at least.
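A toy illustration of the function-family point (my own sketch, not from the thread): a plain logistic regression, i.e. a GLM on raw features, cannot represent XOR, while even a tiny nonlinear network typically can.

```python
# XOR is linearly inseparable, so a GLM on raw features is stuck, while
# a small MLP can fit it. Purely illustrative; results may vary by seed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])  # XOR labels

glm = LogisticRegression().fit(X, y)
mlp = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    max_iter=1000, random_state=0).fit(X, y)
print("GLM accuracy:", glm.score(X, y))  # stuck around 0.5
print("MLP accuracy:", mlp.score(X, y))  # typically 1.0
```

(Of course a GLM with hand-crafted interaction features solves XOR fine; the point is that the model can't discover that representation on its own.)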
sam__izdat t1_j9imyry wrote
Reply to comment by blueSGL in [R] ChatGPT for Robotics: Design Principles and Model Abilities by CheapBreakfast9
I have never seen it generate any code that is correct-in-principle, let alone usable, for any non-trivial problem. It may be useful as a kind of impressionist painting of a solution, for those who are already programmers. And for trivial code, you'd frankly be better off just learning to code.
In other words, I don't really see this being remotely useful to someone who doesn't know how to code. If anything, the barrier to entry is higher, because you will need to debug extremely unusable but convincing-looking programs. It's at best a hint or a template, and at worst a hindrance.
[deleted] t1_j9imqri wrote
Darkest_shader t1_j9im65u wrote
Reply to comment by Disastrous_Nose_1299 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
Lex Fridman has indeed worked on AI, but it is clear that you haven't, so you obviously do not understand the point Lex made at all.
Darkest_shader t1_j9im0p3 wrote
Reply to comment by [deleted] in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
>yes every ai is not fully understood
A simple decision tree is an AI algorithm too. Would you claim that it is not fully understandable or that it has the potential to be sentient?
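For contrast, here's what "fully understandable" looks like for a small tree: every rule it learned can be printed verbatim (a quick illustrative sketch, not from the comment above).

```python
# A decision tree is fully inspectable: export_text prints every split.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)
print(export_text(tree, feature_names=data.feature_names))
```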
sam__izdat t1_j9ilz50 wrote
Reply to comment by limpbizkit4prez in [R] ChatGPT for Robotics: Design Principles and Model Abilities by CheapBreakfast9
Why write 5-10 lines of code, when an LLM can write 5-10 lines of code wrong, in a subtle but vaguely plausible-looking way, so that you can spend twice as long debugging the 5-10 lines of code?
relevantmeemayhere t1_j9ilsax wrote
Reply to comment by [deleted] in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
?
Dendriform1491 t1_j9ijzoq wrote
Reply to comment by Disastrous_Nose_1299 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
For discussions about the existence of god and similar topics, visit https://www.reddit.com/r/philosophy
[deleted] t1_j9ijb65 wrote
Reply to comment by relevantmeemayhere in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
[deleted]
relevantmeemayhere t1_j9ij6cc wrote
Lol. The fact that we use generalized linear models in every scientific field, and have been for decades, should tell you all you need to know about this statement.
Disastrous_Nose_1299 OP t1_j9iipyd wrote
Reply to comment by Dendriform1491 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
I see your point; however, it is only a possibility that god was simply made up. I believe god has been made up, but I am not able to confirm it by going back in time and seeing for myself whether god was made up.
Disastrous_Nose_1299 OP t1_j9iihwn wrote
Reply to [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
I have to go to bed now, so I'm not going to be able to defend my ideas anymore. If you were offended by this post or got mad at me, I am sorry. I thought this was a good idea.
TimelyStill t1_j9ird09 wrote
Reply to comment by Disastrous_Nose_1299 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
But these are philosophical questions, not scientific questions. "Could God be hidden in black holes" is unknowable in the same way that "Is God a flying spaghetti monster?" is unknowable. It's not an interesting scientific question because it has nothing to do with the scientific problem of how black holes work, but with the philosophical problem of whether there is a God.
And just because engineers don't usually understand what their AI models do 'under the hood' doesn't mean they can't be understood. They are fundamentally just very complex decision trees, and you could in principle see why each decision in a model was made the way it was. It'd just take a very long time.
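In that spirit, "looking inside" really is mechanically possible: every weight and intermediate activation is just a number you can print (a minimal sketch of my own, for illustration).

```python
# Every parameter and activation of a net is a visible number; the hard
# part is interpreting millions of them, not accessing them.
import torch

model = torch.nn.Sequential(torch.nn.Linear(4, 3), torch.nn.ReLU(),
                            torch.nn.Linear(3, 2))
h = torch.randn(1, 4)
for layer in model:
    h = layer(h)
    print(layer, "->", h)        # every intermediate value is visible
for name, p in model.named_parameters():
    print(name, p.detach())      # ...and so is every weight
```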