Recent comments in /f/philosophy

answermethis0816 t1_j8rukxr wrote

I agree, but I think the difference between the professional and amateur philosopher in that assessment is how they define free will. Professional philosophers who are compatibilists are using a more narrow, very specific definition of free will, while the amateur determinist is using the broader colloquial definition.

1

dbx999 t1_j8rt8bs wrote

Randomness should be considered deterministic. The flip of a coin over time reveals the deterministic nature of randomness, falling in line with the elegant, orderly pattern of 50% heads and 50% tails. Chaos and uncertainty turn into order and predictability over an aggregate.

Your life is one toss of one coin.

So when you land one way, you will feel as if you chose the side to land on. But it is all the forces acting on you from outside of you that determined that outcome. And if you pull back that perspective to a population of 8 billion other humans, the predictable order becomes evident: humans follow the same statistical rules as flipped coins and viruses.
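The aggregate claim above can be sketched in a few lines of Python (a toy simulation, not part of the original comment): any single flip is unpredictable, but the fraction of heads settles toward 50% as the sample grows.

```python
import random

def head_fraction(n_flips, seed=0):
    """Flip a fair coin n_flips times; return the fraction of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

# One toss tells you nothing; an aggregate reveals the 50/50 order.
for n in (10, 1_000, 100_000):
    print(f"{n:>7} flips: {head_fraction(n):.3f} heads")
```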

We may be sentient but we may only be witnesses to our own existence. Passenger of my soul, eyewitness of my fate. Not master or captain.

8

InTheEndEntropyWins t1_j8rrx36 wrote

>This doesn’t seem like a logical argument to me. It seems like you’re just saying humans tend to believe we have free will, and our society is based upon that assumption.

I'm saying that humans use the compatibilist definition of free will. Hence it makes sense to talk about compatibilist free will rather than libertarian free will.

I'm saying it's illogical to use the incoherent concept of libertarian free will.

>Where would we draw the line between free will and compulsion?

It would depend on the facts and I like to look at the legal system, which does this all the time.

In cases like R. v. Ruzic, they looked at the facts and determined the accused was coerced and hence didn't act of her own free will.

In the case of Powell v. Texas, the defence argued that the defendant wasn't acting of his own free will since he was an alcoholic. While this argument might show he lacked libertarian free will, the courts didn't accept it and found that he did have free will. So they did distinguish between free will and compulsion in this case.

>It has to be arbitrary

Just like pretty much every high-level concept. Even the concept of "life" is arbitrary, with many blurred lines. But just because the concept of life is arbitrary doesn't mean it isn't useful or that we can't apply it in the context of humans.

>, just like you noted about a robot’s desires. An automaton desires nothing other than following its programming, so anything a robot does successfully would be an exercise of free will. But I don’t think anybody would actually argue that, they’d argue it’s an exercise of the programmer’s free will. Why is it different for us just because our programming isn’t apparent?


>Why is it different for us just because our programming isn’t apparent?

Maybe that's the main difference. We aren't programmed with a clear simple goal of killing someone, whereas the robot was.

If you change the example to just making the robot angry and violent, then if the robot following those goals kills someone, I think it is fairly similar to the human case.

3

InTheEndEntropyWins t1_j8rpo60 wrote

Here are some links and studies around researchers affecting people's level of belief in free will.

Turns out that convincing people that they don't have free will is bad.


>These three studies suggest that endorsement of the belief in free will can lead to decreased ethnic/racial prejudice compared to denial of the belief in free will. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0091572#s1
>
>
>
>For example, weakening free will belief led participants to behave less morally and responsibly (Baumeister et al., 2009; Protzko et al., 2016; Vohs & Schooler, 2008)
>
>From https://www.ethicalpsychology.com/search?q=free+will


>these results provide a potential explanation for the strength and prevalence of belief in free will: It is functional for holding others morally responsible and facilitates justifiably punishing harmful members of society. https://www.academia.edu/15691341/Free_to_punish_A_motivated_account_of_free_will_belief


>A study suggests that when people are encouraged to believe their behavior is predetermined — by genes or by environment — they may be more likely to cheat. The report, in the January issue of Psychological Science, describes two studies by Kathleen D. Vohs of the University of Minnesota and Jonathan W. Schooler of the University of British Columbia.
>
>From https://www.nytimes.com/2008/02/19/health/19beha.html?scp=5&sq=psychology%20jonathan%20schooler&st=cse

8

threedecisions t1_j8rpkg4 wrote

The belief in free will is like an algorithm sent to a robot because it encourages social order. It puts parameters on the robot's behaviors by telling it that nasty things will happen to it if it acts outside of them.

The limitation of the idea shows when the robot is unable to comply with the prescriptions given to it and is punished rather than assisted.

4

Confident-Broccoli-5 t1_j8rpaxr wrote

> “We” are our brain.

Maybe not -

> Mereology is the logic of part/whole relations. The neuroscientists’ mistake of ascribing to the constituent parts of an animal attributes that logically apply only to the whole animal we shall call ‘the mereological fallacy’ in neuroscience.

> The principle that psychological predicates which apply only to human beings (or other animals) as wholes cannot intelligibly be applied to their parts, such as the brain, we shall call ‘the mereological principle’ in neuroscience.

> Human beings, but not their brains, can be said to be thoughtful or to be thoughtless; animals, but not their brains, let alone the hemispheres of their brains, can be said to see, hear, smell and taste things; people, but not their brains, can be said to make decisions or to be indecisive.

So the basic idea of the mereological fallacy (which is what the author may be committing) is claiming parts are responsible for something only the whole they are a part of can be responsible for. In neuroscience, the brain is the particular part that gets ascribed characteristics only the whole body or person can be responsible for.

I think it's easiest to deal with from the first person, since figuring out whether other people are doing things involves a variety of concerns about inferring mental activity from observed bodily behavior - though the issue is still pertinent there.

Let's jump into some issues that the basic first-person claim "I think my brain is thinking" gives rise to. These are just questions to ask yourself for the sake of figuring out how to make sense of a person, a brain, their status as either part or whole, and how they can relate in a way that makes sense.

  • Is my brain equivalent to my thinking, my activity in general, myself? How could it even still be just one part of a person's body if it's a whole person? Is it both a part and a whole, somehow? In what respects, such that this wouldn't be a contradiction?

  • If I just were a brain, would I be part of a body that isn't my own? What of the other body parts: are they part of me, or are they just sort of tools for me as a brain? How would sensation even work if that's all they are? My eyes and my brain are both important for a whole person to see colors on a theory that considers them parts, but if the brain is the whole person, how does the stimulation of the eye result in the brain's experience of color?

  • What happens as a brain changes? If a brain is a body part of a whole person, that person can stay the same as experiencing subject as their brain develops, changes, etc. The person as a whole accounts for the unity of the body and the experiences resulting from its changes all being a process of a single subject. But if I am my brain, wouldn't I just cease to be when my brain changes, and some other brain-person would pop into being upon the instantiation of the new brain structure?

  • What determines the limit of a body or body part? Why do we decide the brain material stops here, the eye material stops here, the whole person's body stops here, etc.? Why is it not just an arbitrarily selected aggregate of atomistic pieces of stuff any way you slice it?

Minimally, we can say the mereological fallacy is a criticism of a way of treating these kinds of questions that some philosophers believe does not make sense.

1

InTheEndEntropyWins t1_j8roxdi wrote

>I think in the context of free will discussion, voluntary action isn’t the same as free will.

I didn't say it was the same.

>Someone who kidnaps because they have the goal of making money versus someone who kidnaps because they have the goal of surviving against the person who ordered them at gun point to kidnap have very different degrees of voluntary action. The causes of their doing the kidnapping say something about the person’s propensity for voluntarily engaging in anti-social behavior.

Even if you don't use the word "free will", you are using the concept to distinguish between these two situations. So I'm not really sure of your point.

You accept that there is a difference between the situations. Do you also accept the legal system and most people would use the term free will in that context?

2

frogandbanjo t1_j8roa0v wrote

It's no more significant to me that the brain handles different "choices" in different ways than the fact that my fingers move differently depending on the task.

>“We” are our brain.

Okay, but what about global supervenience across both time and space? What are we not, if we focus on integrated systems of cause and effect? That's the more important question. "I am a very special cog" does not negate "I am part of a larger machine and my movements are dictated by all of those other parts, plus energy that originated long ago."

I hardly think we need to debate whether the concept of free will is useful. The overwhelming majority of our moral and legal systems depend on the premise that it exists, and anyone who tries to prod at that premise gets shut down very quickly by nervous, fearful, vengeful, and outraged people. It's certainly useful to some people and to some ends. The same could be said, however, for any opiate of the masses or tool of the oppressors.

3

BroadShoulderedBeast t1_j8rks0a wrote

I think in the context of free will discussion, voluntary action isn’t the same as free will. Even a robot can have a goal to do a thing as a matter of its pre-programming, but if another thing interrupts that action and the robot is made to do something different, it is no longer totally voluntary. The robot had a plan of action but had to change that plan because of circumstances outside of its control. Free will is not required for voluntary action.
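The interrupted-robot point can be made concrete with a toy Python sketch (names and scenario invented for illustration): the robot has a pre-programmed plan, but an external interrupt overrides it, so the resulting action is not the voluntary one.

```python
def run_robot(planned_action, interrupt=None):
    """Return (action_taken, voluntary): the robot follows its plan
    unless an external interrupt forces a different action on it."""
    if interrupt is not None:
        return interrupt, False   # forced from outside: not voluntary
    return planned_action, True   # plan executed as programmed

print(run_robot("deliver package"))                    # ('deliver package', True)
print(run_robot("deliver package", interrupt="halt"))  # ('halt', False)
```

Note the robot's "voluntary" case still just executes its programming, which is the comment's point: voluntariness does not require free will.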

Someone who kidnaps because they have the goal of making money versus someone who kidnaps because they have the goal of surviving against the person who ordered them at gun point to kidnap have very different degrees of voluntary action. The causes of their doing the kidnapping say something about the person’s propensity for voluntarily engaging in anti-social behavior.

1

HippyHitman t1_j8rg9mj wrote

This doesn’t seem like a logical argument to me. It seems like you’re just saying humans tend to believe we have free will, and our society is based upon that assumption.

I’m arguing that the assumption is incorrect.

Where would we draw the line between free will and compulsion? It has to be arbitrary, just like you noted about a robot’s desires. An automaton desires nothing other than following its programming, so anything a robot does successfully would be an exercise of free will. But I don’t think anybody would actually argue that, they’d argue it’s an exercise of the programmer’s free will. Why is it different for us just because our programming isn’t apparent?

6

DasAllerletzte t1_j8rfmqp wrote

Of course it will.

In the end, it was I who entered the information.
And if you’d enter yours, it would decide accordingly.

Nothing is perfect or unique.
It is (for all we know) impossible for two objects to carry the same information.
And through the even greater imperfection of human beings, two of them will neither receive nor evaluate data equally.
Thus, there are infinitely many combinations of presets for that computer.

3

HippyHitman t1_j8rf9hn wrote

>I’d say, you can adapt.

Sure, but what about a machine that can alter its own programming? If it’s not acting with free will when it adapts, then those adaptations aren’t free will.
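What "altering its own programming" might look like in miniature can be sketched hypothetically (class and names invented for illustration): the machine's decision rule is itself data that the machine overwrites, yet every overwrite is still triggered by its prior rule plus input, so nothing escapes cause and effect.

```python
class AdaptiveMachine:
    """A machine whose decision rule is data it can itself overwrite."""

    def __init__(self, threshold=10):
        self.threshold = threshold  # the current "programming"

    def decide(self, x):
        return x > self.threshold

    def adapt(self, feedback):
        # The adaptation is itself just a rule applied to an input;
        # the rewrite is as determined as any other computation.
        self.threshold += feedback

m = AdaptiveMachine()
print(m.decide(15))   # True under the original rule
m.adapt(feedback=10)  # the machine rewrites its own rule
print(m.decide(15))   # False under the rewritten rule
```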

>And also consider non-measurable phenomena like other peoples feelings or reactions.

They may not be measurable, but they can be observed and estimated. That’s how you do it, after all.

>You can prioritize.

This one machines are already great at. Probably better than us. The amount of prioritization that happens every microsecond in order to make modern computers run would fry our brains.
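The kind of prioritization meant here can be sketched with a toy run queue (illustrative only, using Python's `heapq`): the most urgent task is always served first, much as in an OS scheduler.

```python
import heapq

# A toy run queue: lower number = higher priority, as in many schedulers.
queue = [(2, "redraw screen"), (0, "handle interrupt"), (1, "flush cache")]
heapq.heapify(queue)

order = []
while queue:
    priority, task = heapq.heappop(queue)  # always pops the most urgent task
    order.append(task)

print(order)  # ['handle interrupt', 'flush cache', 'redraw screen']
```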

>Such decisions would require a ton of code engineering to implement.

Sure, and that’s my argument. We’re just extremely complex machines, so the reasoning is obfuscated to the point that it gives the illusion of free will. But if we could actually analyze our minds and thought mechanisms I don’t see why it would be any different from a computer program, and I don’t see where there’s room for free will.

8

InTheEndEntropyWins t1_j8rf5pd wrote

>Legality doesn’t imply truth.

I just refer to the legal system since it has good, high-quality analysis of free will which matches up with most people's intuitions around free will. It also lines up with what most philosophers think.

>Let’s compare two scenarios: in one you program a robot to kill someone,

Not sure here, how do you define a robot's desires?

If we switch it out for a person, and say they have the genetics and upbringing that make them a violent killer, then if they had the desire to kill someone and voluntarily acted on it, it would be of their own free will.

> in the other you program a robot to cut people’s hair but it has a horrible malfunction and kills someone.

Well, that's not in line with its desires and isn't a voluntary action, so it wouldn't be of its own free will.

>If you agree that humans are essentially no different from robots, then it follows that we can’t have free will regardless of what any court or law says.

Sounds like you are talking about libertarian free will, and sure people don't have libertarian free will, but that doesn't matter since most people are really talking about compatibilist intuitions, which we do have.

What people really mean by free will is the same thing the courts are talking about. They aren't talking about the libertarian free will you are using.

2

DasAllerletzte t1_j8rea3z wrote

I’d say, you can adapt.
And also consider non-measurable phenomena like other peoples feelings or reactions.
You can prioritize.

Recently I wanted to get §thing.
Then I started to weigh whether I truly need §thing and if I can afford it, too.

Such decisions would require a ton of code engineering to implement.
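As a deliberately oversimplified sketch of the weighing described above (hypothetical scores and names; a real implementation would indeed take far more engineering):

```python
def should_get(need, price, budget, need_threshold=0.7):
    """Weigh whether the thing is truly needed (need is a 0..1 score)
    and whether it is affordable, then combine the two judgments."""
    truly_needed = need >= need_threshold
    affordable = price <= budget
    return truly_needed and affordable

print(should_get(need=0.9, price=20, budget=100))  # True: needed and affordable
print(should_get(need=0.3, price=20, budget=100))  # False: a want, not a need
```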

2