Recent comments in /f/philosophy

fitzroy95 t1_j29qejz wrote

Problem 2: being able to discuss them honestly without being drowned out and silenced by propaganda and misinformation from the rich and powerful, who are far more interested in amassing more wealth and power than actually addressing any issues.

113

SanctusSalieri t1_j29dyp4 wrote

Yeah, you asked the same question and the answer has not changed. What do you expect? There's no bad take in saying that death camps are relevant to any discussion of Eichmann and the most notable feature of the Nazi regime. I genuinely don't understand what your issue is; your entire behavior here is inscrutable.

1

Rhiishere t1_j29aun0 wrote

Well, there's some here that I agree with and some I don't. You can't program an AI to have the same moral code as humans have. At the end of the day it's a machine based on logic, and if our morals don't align with that logic then nobody wins. GPS is the same thing: you say it makes ethical choices unawares, but those choices are only "ethical" to you and other human beings. It doesn't make "ethical choices"; it makes choices based on whatever its algorithm scores best, whatever makes the most sense given the data it has received, the outlines of its job, and the needs of its users (see the toy sketch below).

I'd even argue that it would be more dangerous if we tried to program simple systems with our morals. Tell a simple AI running a factory that it's not okay to kill people, and it's not going to understand the way we do. In what part of the factory does that rule apply? What is the definition of killing? What traits do the people who shouldn't be killed display? Going back to GPS: what counts as a more dangerous route? What is the upper bound of the definition of danger, and what is the baseline?

Even the simplest moral input into the simplest AI requires you to explain, in the clearest and most exhaustive way, everything that surrounds that moral, all of which just makes sense to an everyday human. Expecting a machine to understand a socially and individually complex moral is implausible. It wouldn't make sense even at the most basic level and wouldn't go the way any human being would think it should.
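
To make the GPS point concrete, here is a toy sketch in Python. It is nothing like a real routing engine; every field, weight, and number below is invented for illustration. The point is only the shape of the "decision": the machine minimizes a numeric cost, and "danger" enters only because a human reduced it to a number in advance.

```python
from dataclasses import dataclass

# Toy sketch only: Route, danger_score, and danger_weight are invented here.
# A real GPS optimizes a much richer cost model, but the shape is the same:
# ethics never enters, only numbers someone chose to minimize.

@dataclass
class Route:
    name: str
    minutes: float       # estimated travel time
    danger_score: float  # 0.0 (safe) to 1.0 (dangerous), defined by a human in advance

def pick_route(routes: list[Route], danger_weight: float = 30.0) -> Route:
    # The "choice" is just whichever route has the lowest weighted cost.
    return min(routes, key=lambda r: r.minutes + danger_weight * r.danger_score)

routes = [Route("highway", 25.0, 0.2), Route("mountain pass", 18.0, 0.9)]
print(pick_route(routes).name)  # "highway": the danger weight tips the choice
```

Nothing in there understands danger or people; change `danger_weight` and the "ethics" changes with it.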

2

Meta_Digital t1_j29abgt wrote

The whole world is integrated into capitalism, and the Southern hemisphere (other than Australia / New Zealand) has been extracted to make the Northern hemisphere (primarily Western Europe / US / Canada) wealthy.

We do have a world where people in imperial neocolonies toil in fields. If you don't know that, then you're in one of the empires using that labor for cheap commodities (though increasingly less cheap, to feed the owning class).

2

XiphosAletheria t1_j29a68h wrote

>Isn't this essentially the Trolley problem? If a trolley was going to kill a thousand people, then Geralt wouldn't pull the switch to kill one person instead.

No. That is being forced to choose between bad outcomes, not between two moral evils. Choosing between two evils would be, say, choosing between supporting a trolley conductor who wanted to run over one specific person he hated and one who wanted to run through a crowd to rack up a high kill count. The correct choice would be to support neither, since both are evil people.

1

54_actual t1_j299x1t wrote

I doubt we can loose the bonds of our instincts, which have been with us since we drew on cave walls. We're tribal, territorial, and aggressive by nature. Men are unfaithful because they're meant to "go forth and multiply," to spread their DNA far and wide, to propagate the species. Societal demands such as marriage and fidelity go against our genetic grain.

We've been at war since forever; we're our own worst enemy. Yes, we can solve the problems of the world, and yet we can't.

0

XiphosAletheria t1_j299g6d wrote

It seems a lot of these dilemmas are only dilemmas if you believe one person can be morally responsible for another person's actions. In the case of "Jim", for example, if he kills the one person, he will be morally responsible for that person's death. But if he refuses, he will not be morally responsible for the death of the twenty - the executioner will. Nothing about Jim's refusal forces the executioner to kill, and the executioner is still free to choose not to execute anyone.

1

Capital_Net_6438 t1_j2994dl wrote

Thanks for the clarification.

Seems like the surprise quiz paradox isn't unique in illustrating the flaws of formal logic, from your perspective. So ideally one would bracket those flaws in thinking about the SQP. Perhaps the paradox isn't so paradoxical for independent reasons.

On the KK situation, I'm thinking maybe the student doesn't know on day 4 that he knew on day 1. I feel totally fine resisting the inference that his knowledge has to survive the change in circumstances. Why shouldn't it be similarly unlikely that his knowledge of knowledge survives? As the student thinks about things at the end of day 4, the argument has given little assurance that he'll know that he knew. He should think on day 4, "huh, maybe I never knew."

One way to think of this is that the student knows that he knows in general: if he knows p at t, then he knows that he knows p at t. That's probably an assumption the student needs. And his KK knowledge is no more guaranteed to survive the changing circumstances than his first-order knowledge is (see the formalization below).
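
To put that in symbols, here is a minimal sketch; the notation K_t p, read "the student knows p at time t," is assumed here and is not from the original exchange.

```latex
% Minimal formalization; the notation is assumed, not the commenter's own.
% Read K_t p as "the student knows p at time t".
\documentclass{article}
\usepackage{amsmath}
\begin{document}
The KK (positive introspection) principle at a fixed time $t$:
\[ K_t\,p \rightarrow K_t K_t\,p \]
What the paradox's backward induction additionally needs is persistence
across days, at both levels of knowledge:
\[ K_1\,p \rightarrow K_4\,p \qquad \text{and} \qquad K_1\,p \rightarrow K_4 K_1\,p \]
If the first persistence claim can fail, there is no independent reason
to grant the second.
\end{document}
```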

1

OpeningOnion7248 t1_j2978ly wrote

I forgot who said this (it might have been Wilson); I'm paraphrasing: we have primitive, reptilian emotional states; medieval institutions; and god-like high tech, like travel to Mars and science shit we can't comprehend.

And yet we coalesce into tribes to solve problems.

25

Fmatosqg t1_j296lsd wrote

Because AI is a tool that produces the same kind of output as people, but faster. So whatever good or bad things people do on a daily basis, AI does them faster. Which means more of it over the same period of time.

1

Zolomite44 t1_j296g4r wrote

Humans definitely have the "my team vs. your team" mentality. Studies show that having a rival, or hatred toward another group (political party, sports team, etc.), actually stimulates the same parts of the brain that activate when one achieves something purposeful in life.

So basically, people get a slight feeling of fulfillment whenever they lash out or take jabs at their "opponent," so to speak. Kind of a wild phenomenon; it actually somewhat explains the hostility of the internet ever since the 90s: we can feel satisfied telling our "enemies" they suck while never facing the physical repercussion of getting punched in the mouth.

Also dig the username, Gaslight Anthem?

65

Fmatosqg t1_j29622s wrote

Thx for putting in the effort and starting such conversations. The internet is a tough place, and there is value in your output even before you have the experience to write a professional-level article.

3