Recent comments in /f/MachineLearning
Disastrous_Nose_1299 OP t1_j9k9joi wrote
Reply to comment by Blakut in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
I have to say all of the statements above to people who wouldn't believe what you say.
Disastrous_Nose_1299 OP t1_j9k9f3t wrote
Reply to comment by Blakut in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
Yes, it's true that God might not exist. But the point I am trying to make is that it is a possibility that God exists. This argument is for people who say it is impossible for God to exist. The comparison to saying 0 = 1 is unfair: we have enough evidence to definitively prove that 0 does not equal 1, which makes it different from what I am saying. We do not have enough evidence to suggest that God does or does not exist in a black hole.
Blakut t1_j9k9cmx wrote
Reply to comment by Disastrous_Nose_1299 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
"We don't know yet" is a much clearer thing to say than all the statements above, though.
Acrobatic-Book t1_j9k94l4 wrote
Reply to comment by GraciousReformer in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
The simplest example is the XOR problem (exclusive or). It is also why multilayer perceptrons, the basis of deep learning, were actually created: a linear model cannot solve it.
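A quick sketch of that classic result in plain numpy (the network weights are hand-wired just for illustration, not learned):

```python
import itertools
import numpy as np

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]  # XOR labels

# A hand-wired two-layer ReLU network that computes XOR exactly:
# h1 = relu(a + b), h2 = relu(a + b - 1), out = h1 - 2*h2
def mlp_xor(a, b):
    h1 = max(0, a + b)
    h2 = max(0, a + b - 1)
    return h1 - 2 * h2

assert [mlp_xor(a, b) for a, b in X] == y

# No single linear threshold unit w1*a + w2*b + bias > 0 matches XOR:
# a brute-force sweep over a weight grid finds no separator.
def linear_fits(w1, w2, bias):
    return all(((w1 * a + w2 * b + bias) > 0) == bool(t)
               for (a, b), t in zip(X, y))

grid = np.linspace(-3, 3, 25)
assert not any(linear_fits(*w) for w in itertools.product(grid, grid, grid))
print("XOR: two-layer MLP succeeds; no linear separator found")
```

The grid search is just a sanity check; the impossibility holds for all real weights, since the four XOR inequalities are contradictory.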
Blakut t1_j9k8y0f wrote
Reply to comment by Disastrous_Nose_1299 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
>It is true currently, because what I am saying is that God has the possibility of existing. This truth stands strong because, using this logic, it is difficult to disprove the existence of God.
And I can say the same thing about absolutely anything. Even about an anti-god, a thing of opposite charge to god that, if it exists, would annihilate with god and create two gamma rays. I can say that our universe is one where god doesn't exist, and those claims would be equally hard to disprove. So by this logic anything is true at the same time, 0 = 1, etc. Makes little sense to me.
Disastrous_Nose_1299 OP t1_j9k8vz8 wrote
Reply to comment by Blakut in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
I understand that this tangent has gone off topic. As for the second paragraph, I am not saying that something needs to be unknowable to suggest the possibility that it exists; it just means that we don't know yet.
Blakut t1_j9k875d wrote
Reply to comment by Disastrous_Nose_1299 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
>I think it is worthwhile simply for the fact that it suggests god can exist. It gives hope to those who think god cannot exist but want god to be able to exist.
If you think god cannot exist, turning to black holes won't change your mind, I'm afraid. In any case, this debate has no place here.
>I am comparing black holes and god to the mystery surrounding AI and sentience.
Well, I'm not sure what mystery you're talking about regarding AI. There's tons of complexity in the human brain and, presumably, in a general AI too. I'm not really sure that black holes are even a good comparison here. There are other things that are unknowable by default, like the exact position and momentum of a particle, and that's just how nature works in that case. However, nothing I know of so far suggests there is something inherently unknowable about AI.
Disastrous_Nose_1299 OP t1_j9k82y8 wrote
Reply to comment by Blakut in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
It is true currently, because what I am saying is that God has the possibility of existing. This truth stands strong because, using this logic, it is difficult to disprove the existence of God. (I do not believe in God, but I like this thought experiment.)
Disastrous_Nose_1299 OP t1_j9k7big wrote
Reply to comment by Blakut in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
"The argument you give takes any input, god, santa, aliens, a basketball, and gives the same answer, i.e. result. Not hard to look at it like a function."
I think it is worthwhile simply for the fact that it suggests god can exist. It gives hope to those who think god cannot exist but want god to be able to exist.
"But then how do you know another person is conscious? You cannot read their mind either."
This is valid criticism; however, I fail to see how I should respond. I could say something like, "No, it is obvious that you cannot read other people's minds," which would suggest that we don't know if other people are conscious. Or I could say it is obvious that other people are conscious, and then I would fall into the trap I created.
"See, this is the problem, you conflate not understood with forever hidden from view (if we assume some things about black holes). Just because it's not understood doesn't mean it's not understandable."
I think this is valid criticism. In the future it may be understood what is behind a black hole, and this framing of the argument will be useless.
"AI is sentient (god)."
"what?"
I'm sorry, you misunderstood; I didn't do a very good job of explaining what I meant. I now see that it's funny because it looks like I am saying AI is god; however, I am comparing black holes and god to the mystery surrounding AI and sentience.
Brudaks t1_j9k6mo0 wrote
Reply to comment by GraciousReformer in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
Because being a universal function approximator is not sufficient to be useful in practice, and IMHO is not even a particularly interesting property. We don't care whether something can approximate any function; we care whether it approximates the thing needed for a particular task, and in any case being able to approximate it is a necessary but not a sufficient condition. We care about the efficiency of approximation (e.g. a single-layer perceptron is a universal approximator only if you allow an impractical number of neurons). But even more important than how well the function can be approximated with a limited number of parameters is how well you can actually learn those parameters, which differs a lot between models. We don't care how well a model would fit the function with optimal parameters; we care how well it fits with the parameter values we can realistically identify within a bounded amount of computation.
That being said, we do use decision trees instead of DL; for some types of tasks the former outperform the latter, and for other types of tasks it's the other way around.
Blakut t1_j9k6jdn wrote
Reply to comment by Disastrous_Nose_1299 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
I don't understand what you mean. If you take away the need to prove statements, then the truth value of a statement is meaningless. Contradictory statements have equal value in this kind of world.
PHEEEEELLLLLEEEEP t1_j9k691x wrote
Reply to comment by GraciousReformer in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
Regression doesn't just mean linear regression, if that's what you're confused about.
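To illustrate the point: "regression" covers any conditional-mean estimator, not just lines. A minimal numpy sketch fitting a quadratic by least squares over nonlinear features:

```python
import numpy as np

# Noiseless quadratic target: y = 3x^2 - x + 0.5
x = np.linspace(-2, 2, 50)
y = 3 * x**2 - x + 0.5

# Design matrix [x^2, x, 1]: the model is linear in the parameters,
# but the fitted curve is a parabola, not a line.
A = np.stack([x**2, x, np.ones_like(x)], axis=1)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 3))  # recovers the true coefficients [3, -1, 0.5]
```

The same trick (swap in richer features, or a nonlinear model entirely) is what separates regression in general from linear regression in particular.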
Blakut t1_j9k67q9 wrote
Reply to comment by Disastrous_Nose_1299 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
>Can you explain what it means "if a function takes any input and gives you only one output,"
The argument you give takes any input (god, santa, aliens, a basketball) and gives the same answer, i.e. the same result. It's not hard to look at it like a function.
>AI because we cannot tell if AI is conscious, even if it is because we cannot read minds.
But then how do you know another person is conscious? You cannot read their mind either.
>This is furthermore a possibility because there are things about AI that are not well understood therefore within what we don't understand (like a black hole)
See, this is the problem, you conflate not understood with forever hidden from view (if we assume some things about black holes). Just because it's not understood doesn't mean it's not understandable.
> AI is sentient (god).
what?
yldedly t1_j9k5n8n wrote
Reply to comment by GraciousReformer in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
It depends a lot on what you mean by "works". You can get a low test error with NNs on tabular data if you have enough of it. For smaller datasets, you'll get a lower test error using tree ensembles. For low out-of-distribution error, neither will work.
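The out-of-distribution point is easy to demonstrate with a tree: outside its training range it predicts a constant, no matter how clean the trend was. A small sketch, assuming scikit-learn is installed:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Fit a tree on a perfectly linear trend y = 2x over x in [0, 1].
x = np.linspace(0, 1, 100).reshape(-1, 1)
y = 2 * x.ravel()
tree = DecisionTreeRegressor().fit(x, y)

# In-distribution query: accurate.
print(tree.predict([[0.5]]))   # close to the true value 1.0

# Out-of-distribution query: the tree clamps to its last leaf.
print(tree.predict([[10.0]]))  # stays near 2.0 instead of the true 20.0
```

NNs fail OOD too, just less predictably: an MLP extrapolates according to its activation function rather than the underlying trend.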
Disastrous_Nose_1299 OP t1_j9k5n5f wrote
Reply to comment by Blakut in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
This is flawed because it ignores the idea that, rather than needing to be proved, it is a virtue in its own right that God is a possibility.
Disastrous_Nose_1299 OP t1_j9k5f2b wrote
Reply to comment by Disastrous_Nose_1299 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
I'm glad we're having this conversation. I really wanted to talk about what I think. Thank you for taking some time out of your day to talk to me.
activatedgeek t1_j9k58ev wrote
Reply to comment by activatedgeek in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
Not only the dataset; the Transformer architecture itself seems to be amenable to in-context learning. See https://arxiv.org/abs/2209.11895
Disastrous_Nose_1299 OP t1_j9k583y wrote
Reply to comment by Blakut in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
Can you explain what it means that "a function takes any input and gives you only one output," and also what it has to do with AI? We cannot tell if AI is conscious, even if it is, because we cannot read minds. This is furthermore a possibility because there are things about AI that are not well understood; therefore, within what we don't understand (like a black hole), it is possible AI is sentient (god).
vyasnikhil96 OP t1_j9k57az wrote
Reply to comment by ichiichisan in [R] Provable Copyright Protection for Generative Models by vyasnikhil96
I agree that the final say rests with the courts. But do you think there is something specific that we use or claim that differs from how the copyright law is currently implemented?
activatedgeek t1_j9k4z4o wrote
Reply to comment by red75prime in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
Very much indeed. See https://arxiv.org/abs/2205.05055
Blakut t1_j9k4ur3 wrote
Reply to comment by Disastrous_Nose_1299 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
Well, if a function takes any input and gives you only one output, what are you going to do with it? How useful is a logic like the one above? What is the connection with the AI part anyway, since we're not here to debate if god exists in black holes?
The better argument would go:
- does god exist?
- idk, but i see no proof of him existing, so i don't think so.
- what if he is in a black hole?
- prove it.
Disastrous_Nose_1299 OP t1_j9k4cu9 wrote
Reply to comment by Blakut in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
What has value is in the eye of the beholder. I know this argument can be used to say it is possible the Flying Spaghetti Monster decided to manifest itself when it did by manipulating the minds of humans as a parody of it, but I think it has value in explaining why god might exist, even if the argument can be used for other things.
GraciousReformer OP t1_j9k4974 wrote
Reply to comment by yldedly in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
So it works for images but not for tabular data?
Blakut t1_j9k44e2 wrote
Reply to comment by Disastrous_Nose_1299 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
Yes, but you can replace god with anything, so the statement loses its value, don't you think?
Disastrous_Nose_1299 OP t1_j9k9vj1 wrote
Reply to comment by Disastrous_Nose_1299 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
Is it OK if we end here? I don't see this argument producing useful discussion any further, and I have to go do something. Thanks for talking to me.