Recent comments in /f/philosophy

Capital_Net_6438 t1_j257nt0 wrote

The teacher in the surprise quiz paradox announces on day 1 that there will be a surprise quiz this week, which has 5 days. The paradox involves an argument that purports to show the impossibility of… something. Sometimes the argument is explained as trying to show a surprise quiz is impossible. I don’t think that works for reasons I won’t belabor.

The argument could also be taken as trying to show that knowledge on day 1 of a surprise quiz is impossible. So suppose for reductio that the student knows on day 1 that there will be a surprise quiz. Suppose that at the end of day 4 there has been no quiz. We assume that if there's been no quiz by a certain point, then the student knows that. So at the end of day 4, the student knows there's been no quiz. And therefore, it would seem, he knows there'll be a quiz on day 5. But a quiz that is known to happen on a certain day is not a surprise. Therefore the quiz can't happen on day 5.

Then you go through the same process for the other days, ultimately proving the quiz can’t happen any day. And therefore the student doesn’t know there will be a quiz.
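For what it's worth, the elimination process can be mechanized as a toy sketch (my own illustration, and it builds in exactly the contested assumption discussed below: that the student's knowledge persists to each later evening):

```python
# Toy sketch of the backward-induction step in the surprise quiz paradox.
# Assumes (controversially) that the student's day-1 knowledge survives
# to the evening before each remaining day.
def eliminate_days(num_days=5):
    possible = set(range(1, num_days + 1))  # days on which a surprise quiz could still occur
    # Work backwards: if every later day has been eliminated, a quiz on this
    # day would be predictable, hence not a surprise, hence eliminated too.
    for day in range(num_days, 0, -1):
        later = {d for d in possible if d > day}
        if not later:  # the quiz would be forced onto this day -> no surprise
            possible.discard(day)
    return possible

print(eliminate_days())  # -> set(): no day can host a surprise quiz
```

The loop discards day 5 first, which then forces day 4 to be discarded, and so on down to day 1, mirroring the reductio.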

I assume that the argument is supposed to deduce this or that. I.e., it's not just that certain assumptions make certain consequences likely, but that they follow logically.

The argument fails at the step that says the student knows on day 4 that there will be a quiz on day 5. It's a rudimentary mistake. Just because he knows on day 1 that there'll be a quiz, it doesn't follow that he still knows on day 4 that there will be a quiz. It's not in general true that knowing something one day guarantees that you continue knowing it later. And there's nothing in the argument to make one think the student's knowledge does survive changing circumstances here.

The exercise is meant to deduce something. No principle has been presented to suppose the student’s knowledge must survive in this case. So the appropriate response is that the argument does not establish what it set out to establish since there is no reason to credit its critical inference.

But…

At the end of day 4 the student thinks back and remembers believing on day 1 that there would be a surprise quiz. We might wonder whether the student knows on day 4 that he knew on day 1 that there would be a surprise quiz.

Suppose knowledge is true belief in internal and external circumstances conducive to knowledge. The student is special. He will know something in this context iff the proposition is available to be known. The student’s internal and external circumstances on day 4 are conducive to knowing whatever he knew on day 1. So it seems the student should know on day 4 that he knew on day 1 that there would be a surprise quiz.

Knowledge that P at t entails that P is true at t. And being special, the student knows the entailments of the things he knows. So he knows that his day-1 knowledge entails that it was true on day 1 that there would be a surprise quiz.

It would seem that if the student knows p (that he knew on day 1 there would be a surprise quiz), and knows that p entails q (where q is the proposition that it was true on day 1 that there would be a surprise quiz), then he knows q.

Now we have that on day 4 the student knows it was true on day 1 that there would be a surprise quiz this week. That seems to get us back to the student having the impossible knowledge that there will be a surprise quiz on day 5.
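Compressing that chain into epistemic-logic notation (my own labels, not standard to the paradox literature: $K_t$ for "the student knows on day $t$" and $S$ for "there will be a surprise quiz this week"):

```latex
\begin{align*}
&K_4 K_1 S                 && \text{(day-4 knowledge of day-1 knowledge)}\\
&K_4 (K_1 S \rightarrow S) && \text{(factivity, known to the special student)}\\
&\therefore\ K_4 S         && \text{(closure of knowledge under known entailment)}
\end{align*}
```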

That’s where I’ve been stuck for a while. Maybe we can say there’s no guarantee that day 1 knowledge will lead to day 4 knowledge of day one knowledge.

1

Rote515 t1_j253ra2 wrote

That's still missing the point of existentialist thought (under which Camus falls). Camus's most important work on absurdism poses a single question, "Should I kill myself?", and argues that it is the most important philosophical question. Ethics in the face of this question is completely meaningless, as it's a question that comes prior to questions of ethics.

Prospering, societal harm, destroying the ecosystem, none of that matters if we can’t answer the fundamental question of whether life is meaningless. That’s why greed doesn’t matter here and is irrelevant to absurdism. Negative consequences don’t matter if fundamentally life is meaningless. Absurdism is the seeking of meaning in a meaningless universe.

I have a feeling you've never read Camus? Or any absurdist authors? You're making arguments, or observations, that come after that question, and which are essentially meaningless in the face of the absurd condition.

Did you even read the article?

Edit: used a term incorrectly

2

InTheEndEntropyWins t1_j2517if wrote

I'm sure there are other definitions, but I use something like: free will is "the ability to make voluntary actions in line with your desires, free from external coercion/influence".

Free will is key in morality and justice, so I like to understand how the courts define and use it. Let's use a real-life example of how the Supreme Court considers free will.

>It is a principle of fundamental justice that only voluntary conduct – behaviour that is the product of a free will and controlled body, unhindered by external constraints – should attract the penalty and stigma of criminal liability.
>
>https://scc-csc.lexum.com/scc-csc/scc-csc/en/item/1861/index.do

In the case of R. v. Ruzic

>The accused had been coerced by an individual in Colombia to smuggle cocaine into the United States. He was told that if he did not comply, his wife and child in Colombia would be harmed.

The Supreme Court found that he didn't smuggle the cocaine of his own free will. He didn't do it in line with his desires, free from external coercion. Hence he was acquitted.

Compare that to the average case of smuggling where someone wants to make some money and isn't coerced into doing it. If they smuggle drugs then they did it of their own "free will" and would likely be found guilty.

You can also see that the courts aren't using the libertarian definition in Powell v. Texas, where the defence argued that the conduct wasn't of the defendant's own free will since he was an alcoholic. While this argument shows he lacked libertarian free will, he did have compatibilist free will, hence he was found guilty.

So even if you are a hard determinist, you would need to use this idea around coercion (which the courts call free will). Even if you don't use free will by name, you would have to use the concept.

2

Slapbox t1_j24yzo4 wrote

Her actual quote:

> Good can be radical; evil can never be radical, it can only be extreme, for it possesses neither depth nor any demonic dimension yet--and this is its horror--it can spread like a fungus over the surface of the earth and lay waste the entire world. Evil comes from a failure to think. -- Hannah Arendt

86

cmustewart t1_j24yd84 wrote

Intuition just meaning my take on it, based on what I know and believe. Intuition as opposed to me having access to some sort of truth.

I disagree that it's not hard to do the research and understand the ethical risks. I come from a software background, which lays some of the groundwork for research and understanding. Someone from a non-tech background with a layperson's knowledge might face a significant struggle understanding all the foundational elements underlying AI and its ethical issues.

Someone whose life is mostly consumed by work and family life could easily never give these issues much or any thought, because it seems irrelevant to their life. In my mind, this is a serious problem. AI is changing, and will continue to change, the lives of nearly everyone in ways they are unable to see or comprehend.

4

InTheEndEntropyWins t1_j24waw6 wrote

>"If I have to choose between one evil and another, then I prefer not to choose at all."

Isn't this essentially the trolley problem? If a trolley were going to kill a thousand people, Geralt wouldn't pull the switch to kill one person instead.

Also, I hate the use of torture. It kind of suggests through the back door that torture works. It's a framing that makes torture look like it could be morally good, but in fact it's an impossible hypothetical.

>should a political leader order the torture of a terrorist in order to find out the location of a series of bombs that will harm innocent citizens?

>For utilitarians (the specific targets of Williams’s critique), it doesn’t matter that Jim has to kill someone—what matters is that either twenty people will die, or one will die, and it is far better that only one dies. Williams’s point was that it clearly does matter, especially to Jim, that to secure this optimal state of affairs Jim has to kill somebody.

I'm not sure it's a valid criticism of utilitarianism. If no one would want to live in a world where they had to kill someone, then that would be taken into account in any utilitarian calculation. Although I think most people would rather someone live with the guilt of killing than have more dead people.

>Even if there is something noble about Geralt’s desire to avoid getting his hands dirty,

I don't think there is anything noble about Geralt's position; it's just small-minded and selfish.

3

Whatmeworry4 t1_j24v23v wrote

I am only referring to the intentionality to seek the consequences. True evil considers the consequences as evil and doesn’t care. The banality of evil is when you don’t consider the consequences as evil. The intent to cause the consequences is the same either way.

5

ConsciousInsurance67 t1_j24sfwe wrote

Legally, and inherited from Roman law, anything to be considered a crime needs intentionality (evil or not) and fault (the wrongdoing itself, which may not be born of evil intentions but brings pain and suffering, and is therefore bad). Example: murder (evil intent, evil outcome) vs. homicide in self-defense (you kill someone, but the motivation is not killing; the death happens as a consequence of protecting yourself). Of course, it is still a crime even when the consequences are not intentional.

I think the ethical rules for robots devised by Asimov played around with this: what should an AI do to protect us from ourselves?

3

who519 t1_j24qr42 wrote

Again, I am just thinking of a "sin" as something that negatively impacts our society, not as good or evil. Greed is very interesting in this regard. Greed started civilization. After all, the first farmer was tired of gathering and wanted a reliable source of food that would end up being more than he needed. This success reinforced the behavior and led the hypothetical farmer to seek power over others with his wealth and make them farm for him...and on and on and on, until we ended up where we are now. Was it wrong for the farmer to seek a reliable source of food? No, but it led us to where we are now, and if we continue on this trend, we will literally destroy our ecosystem completely.

So while not "wrong" ethically, greed inevitably leads to negative consequences for humanity. If our culture or biology had some brake on greed (some cultures have; see the Hawaiian tradition of kapu (taboo) as an example), maybe we would have slowed our technological advance but prospered nonetheless. Instead we went with "quick and dirty," and it is now costing us dearly.

1

glass_superman t1_j24pzoq wrote

You'll not be comforted to know that the AI that everyone is talking about, ChatGPT, was funded in part by Elon Musk!

We think of AI as some crazy threat, but it might as well be the first bow and arrow, or the AK-47, or the ICBM. It's just the latest in a line of tools wielded by the wealthy for whatever purpose they want, usually to have a more efficient way to do whatever it is that they were already doing. Never an attempt to modify society for the better.

And why would they? Society is already working perfectly for them. Any technology that further ingrains this is great for them! AI is going to make society more like it already is. If it makes society worse, it's because society is already bad.

1

cmustewart t1_j24px5g wrote

Somewhat fair, as the article was fairly blah, but I've got serious concerns that the current regimes will become much more locked into place, backed by the power of scaled superhuman AI capabilities in surveillance, behavior prediction, and information control.

15

Whatmeworry4 t1_j24o6bz wrote

Ok, the easiest way is to ask whether the consequences were intentional; it may even be documented. Now, why do you ask? Why do we need to detect intent for the purposes of a theoretical discussion?

2

glass_superman t1_j24nq3v wrote

The Koch brothers are not as deeply depraved as a fascist leader, but they have a much wider breadth of influence. They are more dangerous than Pol Pot because what they lack in depth, they more than make up for in breadth.

4