Recent comments in /f/philosophy
fitzroy95 t1_j29qejz wrote
Reply to comment by AllanfromWales1 in We have all the resources we need to solve the world's greatest problems, so long as we can rise above our tribal instincts. by IAI_Admin
Problem 2: being able to discuss them honestly without being drowned out and silenced by propaganda and misinformation from the rich and powerful, who are far more interested in amassing more wealth and power than actually addressing any issues.
rushmc1 t1_j29lqoa wrote
Reply to We have all the resources we need to solve the world's greatest problems, so long as we can rise above our tribal instincts. by IAI_Admin
But do we have the resources to rise above our tribal instincts?
PHONES_RODIA t1_j29jqlx wrote
Reply to comment by whodo-i-thinkiam in We have all the resources we need to solve the world's greatest problems, so long as we can rise above our tribal instincts. by IAI_Admin
Another tribe's values.
PSlanez t1_j29jgnq wrote
Reply to We have all the resources we need to solve the world's greatest problems, so long as we can rise above our tribal instincts. by IAI_Admin
Most people look towards billionaire entrepreneurs to lead us in solving the world's problems, when their very existence is the biggest problem of all.
SanctusSalieri t1_j29dyp4 wrote
Reply to comment by monsantobreath in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Yeah, you asked the same question and the answer has not changed. What do you expect? There's no bad take in saying that death camps are relevant to any discussion of Eichmann and the most notable feature of the Nazi regime. I genuinely don't understand what your issue is; your entire behavior here is inscrutable.
postart777 t1_j29ct6e wrote
Reply to We have all the resources we need to solve the world's greatest problems, so long as we can rise above our tribal instincts. by IAI_Admin
The tiny tribe of 2,666 billionaires overrides any good-faith intentions of all the rest of us.
Rhiishere t1_j29aun0 wrote
Reply to comment by AndreasRaaskov in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Well, there's some here that I agree with and some I don't. You can't program an AI to have the same moral code as humans do. At the end of the day it's a machine based on logic, and if our morals don't align with that logic then nobody wins.

For GPS, it's the same thing. You say it makes ethical choices unawares, but those are what are ethical to you and to other human beings in general. It doesn't make "ethical choices"; it makes choices based on whatever best serves its algorithm, whatever makes the most sense given the data it has received, the outlines of its job, and the needs of its users.

I'd even argue that it would be more dangerous if we tried to program simple systems with our morals. Tell a simple AI running a factory that it's not okay to kill people, and it's not going to understand that the way we do. In what application within the factory does that rule apply? What is the definition of killing? What traits do people who shouldn't be killed display? Going back to GPS: what warrants a more dangerous route? What is the extreme to which the definition of danger is limited, and what is the baseline?

Even with the simplest moral input into the simplest AI, you have to explain, in the clearest and most extensive way, everything that surrounds that moral, all of which just makes sense to an everyday human. Expecting a machine to understand a socially and individually complex moral is implausible. It wouldn't make sense even at the most basic level, and it wouldn't go the way any human being would think it should.
Meta_Digital t1_j29abgt wrote
Reply to comment by ShalmaneserIII in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
The whole world is integrated into capitalism, and the Southern hemisphere (other than Australia / New Zealand) has been extracted to make the Northern hemisphere (primarily Western Europe / US / Canada) wealthy.
We do have a world where people in imperial neocolonies toil in fields. If you don't know that, then you're in one of the empires using that labor for cheap (but increasingly less cheap to feed the owning class) commodities.
XiphosAletheria t1_j29a68h wrote
Reply to comment by InTheEndEntropyWins in The Witcher and the Lesser of Two Evils by ADefiniteDescription
>Isn't this essentially the Trolly problem, If a trolly was going to kill a thousand people then Geralt wouldn't pull the switch to kill one person instead.
No. That is being forced to choose between bad outcomes, not two moral evils. Choosing between two evils would be, say, choosing between supporting a trolley conductor who wanted to run over one specific person he hated and one that wanted to run through a crowd to rack up a high kill count. The correct choice would be to support neither, since both are evil people.
ShalmaneserIII t1_j299xeb wrote
Reply to comment by Meta_Digital in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Considering the rich portion is the capitalist part, this seems to be a fine support for it. Or is a world where we all toil in the fields equally somehow better?
54_actual t1_j299x1t wrote
Reply to We have all the resources we need to solve the world's greatest problems, so long as we can rise above our tribal instincts. by IAI_Admin
I doubt we can loose the bonds of our instincts, which have been with us since we drew on cave walls. We're tribal, territorial, and aggressive by nature. Men are unfaithful because they're meant to "go forth and multiply", to spread their DNA far and wide, to propagate the species. Societal demands such as marriage and fidelity go against our genetic grain.

We've been at war since forever; we're our own worst enemy. Yes, we can solve the problems of the world, and yet, we can't.
XiphosAletheria t1_j299g6d wrote
It seems a lot of these dilemmas are only dilemmas if you believe one person can be morally responsible for another person's actions. In the case of "Jim", for example, if he kills the one person, he will be morally responsible for that person's death. But if he refuses, he will not be morally responsible for the death of the twenty - the executioner will. Nothing about Jim's refusal forces the executioner to kill, and the executioner is still free to choose not to execute anyone.
Capital_Net_6438 t1_j2994dl wrote
Reply to comment by [deleted] in /r/philosophy Open Discussion Thread | December 26, 2022 by BernardJOrtcutt
Thanks for the clarification.
Seems like the surprise quiz paradox isn’t unique in illustrating the flaws of formal logic from your perspective. So ideally one would bracket those in thinking about the SQP. Perhaps the paradox isn’t so paradoxical for independent reasons.
On the KK situation, I'm thinking maybe the student doesn't know on day 4 that he knew on day 1. I feel totally fine resisting the inference that his knowledge has to survive the change in circumstances. Why shouldn't it be similarly unlikely that his knowledge of knowledge survives? As the student thinks about things at the end of day 4, the argument has given little assurance that he'll know that he knew. He should think on day 4, "Huh, maybe I never knew."

One way to think of this is that the student knows that he knows in general: if he knows p at t, then he knows that he knows p at t. That's probably an assumption the student needs. And his KK knowledge is no more guaranteed to survive the changing circumstances than his knowledge.
The59Sownd t1_j298t3q wrote
Reply to comment by Zolomite44 in We have all the resources we need to solve the world's greatest problems, so long as we can rise above our tribal instincts. by IAI_Admin
Makes complete sense, and as you said, the invention of the internet has absolutely exposed this part of human nature, perhaps more than ever before.
You got it! New album next year. Super psyched!
AllanfromWales1 t1_j298oyu wrote
Reply to We have all the resources we need to solve the world's greatest problems, so long as we can rise above our tribal instincts. by IAI_Admin
Problem 1: Agreeing what the 'world's greatest problems' are.
monsantobreath t1_j298mr4 wrote
Reply to comment by SanctusSalieri in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
This is circular. You had a bad take and that's that.
OpeningOnion7248 t1_j2978ly wrote
Reply to We have all the resources we need to solve the world's greatest problems, so long as we can rise above our tribal instincts. by IAI_Admin
I forgot who said this, it might have been Wilson. I'm paraphrasing: we have primitive, reptilian emotional states; medieval institutions; and god-like high tech, like travel to Mars and science shit we can't comprehend.
And yet we coalesce into tribes to solve problems.
Fmatosqg t1_j296lsd wrote
Reply to comment by shumpitostick in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Because AI is a tool that produces the same kind of output as people, but faster. So whatever good or bad things people do on a daily basis, AI does faster, which means more of it over the same period of time.
Zolomite44 t1_j296g4r wrote
Reply to comment by The59Sownd in We have all the resources we need to solve the world's greatest problems, so long as we can rise above our tribal instincts. by IAI_Admin
Humans definitely have the "my team vs. your team" mentality. Studies show that having a rival, or hatred towards another group (political party, sports team, etc.), stimulates the same parts of the brain that activate when one achieves something purposeful in life.

So basically people get a slight feeling of fulfillment whenever they lash out or take jabs at their "opponent", so to speak. Kind of a wild phenomenon. It actually somewhat explains the hostility of the internet ever since the 90s: we can feel satisfied telling our "enemies" they suck while never facing the physical repercussion of getting punched in the mouth.
Also dig the username, Gaslight Anthem?
Fmatosqg t1_j29622s wrote
Reply to comment by AndreasRaaskov in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Thanks for putting in the effort and starting these conversations. The internet is a tough place, and there is value in your output even before you have the experience to write a professional-level article.
wandering_white_hat t1_j29qgcl wrote
Reply to comment by The59Sownd in We have all the resources we need to solve the world's greatest problems, so long as we can rise above our tribal instincts. by IAI_Admin
Sometimes things have to get worse before they get better. Hope that is the case now.