Recent comments in /f/philosophy
SanctusSalieri t1_j265agu wrote
Reply to comment by AStealthyPerson in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Eichmann literally organized transportation to death camps. I am not ad-libbing death camps; they are the context of this discussion and the most notable feature of Nazi Germany.
SanctusSalieri t1_j26505i wrote
Reply to comment by Cruciblelfg123 in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
The immediate context of the end of WWI, longstanding German and European traditions of antisemitism, the rise of an attempt to explain individual human prospects through genetics (and control populations through eugenics), Romanticism, the invention of nationalism through folkloric identification with an imagined past, pro-natalism for a select population (directly related to eugenics) and a corresponding ideology of Lebensraum, a dissatisfaction with Weimar democracy and a willingness to put faith in an outsider dictator... there are a lot of things going on with Nazism.
smariroach t1_j263x1j wrote
Reply to comment by YuGiOhippie in An Argument in Favour of Unpredictable, Hard Determinism by CryptoTrader1024
> If you cannot choose to care or not you are not really caring or not caring.
Why not? what is it that you think it means to "care" about something?
I would like it if you stopped mentioning puppets, because I have a hard time knowing whether you're being metaphorical or whether I'm expected to explain in what way a literal puppet is not like a human. Dropping the analogy would help me understand more clearly what you mean.
>It’s not authentic because it’s forced.

I'm still not sure I understand why the unavoidable nature of the feeling makes it inauthentic.
>Do you love me if i tell you with a gun to your head that you must love me? Of course not. Even if you swear YOU LOVE ME : if i forced you to say it : it is not authentic.
That's not a good analogy, because loving and saying you love are two completely different things. A better question would be: if we discovered a drug that causes whoever takes it to fall in love with the first person they see (a classic love potion), would a person who took it and felt its effects be in love? In this example the feeling, the emotion, everything is in every way like the love the subject would have felt had they fallen in love without the drug. Are they "really" in love now or not? If not, why not? How do you define "love" in a way that excludes what this person is feeling?
And what if we take it in the other direction? Suppose you work out, become attractive, and behave in such an impressive, kind, and likable way that I fall in love with you. Is that inauthentic because it was caused by what you did? Did you "force" me to love you, since I would not have loved you had you not done those things?
ThorDansLaCroix t1_j262bwm wrote
Reply to comment by uncletravellingmatt in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
I tried many times to sue my neighbours with the help of lawyers from the Tenant Union and ÖRA. In both places they told me there is nothing they can do, because by law my neighbours have the right to make as much noise at night as they want.
The actual law does set a limit, but in practice, unless the noise is so stupidly loud that the police can hear it from the street, it is difficult to win any case against noisy neighbours.
Because I have a neighbour who does craft work all night just behind my wall, which is not solid, I cannot sleep, work, study, or concentrate on anything. Before, I could still get by by always wearing earplugs or headphones, but their excessive use caused me a chronic neurological problem, and now I am very sensitive to noise. When it gets too bad, it actually causes me somatic pain in my ears.
But according to the lawyers, I need a friend or neighbours as witnesses, and my "friends" and neighbours don't care because it is not their wall, so it doesn't affect them. They assume I am just overreacting, even though I am in neurological therapy and have documents from a psychiatric centre stating that I urgently need to move to another apartment.
On top of that, the lawyers said there is nothing they can do even if the neighbours are causing me harm and chronic illness, or making me sleep in a park like a homeless person, because the law protects their right to make whatever noise they want at night as long as nobody else feels affected by it.
One of the lawyers actually said that my being disabled is my problem, because the law is made according to the majority.
So it is literally what I said earlier: the society where I live is 100% OK with people destroying others' lives and health, and causing literal torture to others, as long as the law allows it. They don't feel responsible for the harm caused to others, because they don't see it as their choice but only as their duty to respect the rule of law above all things. If the law allows it, they see themselves as not responsible for the harm they cause (since the law says so).
This alienation is so intrinsic to this society that I know a woman in the building next to mine with the same problem. She is also disabled, but with chronic fatigue. When I tried to talk about us not getting help because people put legal order above all things, she said that people are right: there is nothing they can do, because it is the law. Just like me, she is a victim of ableism and of people abusing their rights, but she has been educated to believe that she is the problem for being the exception (being chronically ill). Or, as the lawyer told me, "It is your problem." Because they are educated to believe that the law is what keeps Germany a society of order.
Cruciblelfg123 t1_j25zs2v wrote
Reply to comment by SanctusSalieri in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
I think the only thing that was peculiar about the historical circumstances was "modern" technology and our sudden exponential capacity for atrocity, in what was otherwise a pretty typical war with a dictator who "inspired" people in a bad place. Furthermore, I don't think there's anything stable about human nature. Much like the point of the post and the idea of the banality of evil, I think the only really stable thing is the social systems that control us.
[deleted] t1_j25z199 wrote
Whiplash17488 t1_j25ytcy wrote
Reply to How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
I agree with the premise and conclusion.
It already happened: unconscious bias led facial recognition software to recognize white faces with higher probability than black, brown, and Asian faces.
The error was in the sample data used for machine learning.
No intentional evil was done, and the AI itself can't be "blamed" for drawing conclusions from what it is taught. An AI can only ever conclude what it thinks is good, just like in Arendt's argument.
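The failure mode described above can be sketched in a few lines. This is a minimal, hypothetical toy model, not any real recognition system: group labels, centers, and the threshold are all invented, and 1-D numbers stand in for face embeddings. The point is only that a model fit to skewed sample data serves the overrepresented group better, with no malicious intent anywhere in the pipeline.

```python
import math
import random

random.seed(0)

def gauss_samples(n, center):
    """Toy 1-D stand-ins for face embeddings of one demographic group."""
    return [random.gauss(center, 1.0) for _ in range(n)]

# Skewed training set: group A is 90% of the data, group B only 10%.
train = gauss_samples(900, 0.0) + gauss_samples(100, 3.0)

# "Model": a single Gaussian fit to the pooled training data. Its notion
# of a typical face is dragged toward the overrepresented group A.
mu = sum(train) / len(train)
sigma = math.sqrt(sum((x - mu) ** 2 for x in train) / len(train))

def recognized(x, k=1.5):
    # A face is "recognized" if it lies within k standard deviations
    # of what the model learned a typical face looks like.
    return abs(x - mu) <= k * sigma

def recognition_rate(samples):
    return sum(recognized(x) for x in samples) / len(samples)

rate_a = recognition_rate(gauss_samples(500, 0.0))  # majority group
rate_b = recognition_rate(gauss_samples(500, 3.0))  # minority group
# rate_a comes out far higher than rate_b: the disparity is baked in
# by the sampling error in the training data, not by any evil decision.
```

Under these made-up numbers, the majority group is recognized the large majority of the time while the minority group fails most checks, which is exactly the shape of the bias the comment describes.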
AStealthyPerson t1_j25ye8t wrote
Reply to comment by SanctusSalieri in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
I didn't see the words "death camp" anywhere in their description of how disabled people are treated in today's society. They said that our society is "Nazist" when it comes to dealing with disabled folks, which is largely correct. This user took a great deal of time to explain how they have been denied help by the authorities in dealing with personal acts of terrorism committed by their neighbors against them. They may not have elaborated on the situation much, and I'm sure there's more to it than what we know, but it sounds very much in line with Nazi attitudes toward racist/antisemitic/homophobic/ableist "vigilantes" during the Nazi regime. Germany just had a failed right-wing coup, same as the US, and it's not hard to see how there could be reactionary people in real positions of power who prevent aid and comfort from being provided to the "otherized" of society (especially at the local level). We are a society with deeply embedded hierarchies, and as economic prospects continue to worsen, the folks in charge of those hierarchies are more likely to become reactionary than progressive.
Whiplash17488 t1_j25ya7e wrote
Reply to comment by kfpswf in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
I think it's more that the Nazis genuinely thought they were the good guys, rather than being people doing evil for the sake of evil.
Arendt based her analysis of this cognitive error on Eichmann's trial in Jerusalem. Eichmann was responsible for orchestrating the logistics of the Holocaust.
Eichmann’s values were that efficiency is good. A good work ethic is good. That’s the way to move up in the world and provide for your family. That’s the way to fit in and become homogeneous with your community.
The cognitive dissonance over the evil his actions were causing was pushed down and abstracted away into paper, numbers, and quotas.
Similarly, someone might say a drone pilot pressing a button on his joystick, causing children to die as collateral damage, isn't "evil". Well, it is to some; others are just trying to do a good job.
My examples are imperfect, but the premise of her argument is that nobody is capable of assenting to a judgement they think is evil. Everyone assents to doing “good” at some level.
Her paper was intentionally controversial and was not meant as an excuse for the holocaust.
Robotbeat t1_j25x47q wrote
Reply to comment by jamesj in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
How can you prove that your experiences cannot be quantified?
Rhiishere t1_j25wn9p wrote
Reply to comment by AndreasRaaskov in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Whoah, I didn’t know that! That’s freaking amazing in a very wrong way! Like I knew something along those lines was possible in theory but I hadn’t heard of anyone actually doing it!
SanctusSalieri t1_j25wlu3 wrote
Reply to comment by Cruciblelfg123 in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
I don't think the Nazis were "normal stuff but more" or even an extreme version of stable "human nature" or something. They were a particular and brutal regime born out of peculiar historical circumstances.
Rhiishere t1_j25vsg3 wrote
Reply to How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
That was a very interesting article. Something that always has confused me, though, and always will, is the drive humans have to make machines like themselves. Morality, good and evil: those are all ideas specific to the human race. I'm not sure why we feel the need to impose them on AI.
uncletravellingmatt t1_j25v62i wrote
Reply to comment by ThorDansLaCroix in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
> I actually was made disabled by my neighbours and they still keep causing a lot of torture and disabling things to me.
I feel as if, once you've brought this up, you need to expand and explain what you mean by it. Was it not something you could sue over, for example?
Cruciblelfg123 t1_j25uxd5 wrote
Reply to comment by SanctusSalieri in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
The English isn't great, but I suspect the point they are trying to make is that the problems are the same even though the degree is obviously much less. As in, the Nazi regime was a hyperbolic manifestation of the same basic problems humans have always had and still do have.
SanctusSalieri t1_j25ucp3 wrote
Reply to comment by cassidymcgurk in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
I don't feel that at all... then again I have degrees in history so I have had occasion to think about this a little more maybe.
cassidymcgurk t1_j25temb wrote
Reply to comment by SanctusSalieri in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
He said it was like living in Nazi Germany, which I suspect a lot of us feel, wherever we are.
Wild-Bedroom-57011 t1_j25q617 wrote
Reply to comment by Meta_Digital in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Sure! However, AI itself also has AI-specific concerns that are orthogonal to the socio-economic system under which we live or in which the AIs are created. Robert Miles on YouTube is a great, entertaining, and educational source for this.
RyeZuul t1_j25otnr wrote
Reply to comment by glass_superman in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
It's possible to care about more than one thing at once, and it is prudent to spread the word about the potential for AI to go haywire, just as it was to release the Panama Papers or expose the child abuse scandals in the Catholic Church. Billionaires will almost certainly start deleting whole columns of jobs that AI can replace, while showing little interest in AI ethics, game-breaking innovative strategies, or unpredictable consequences. If we want to move to a better system of systems, we need to design our overlords well from the ground up.
Meta_Digital t1_j25myuu wrote
Reply to comment by Wild-Bedroom-57011 in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Indeed.
I think we could conceive of AI and automation that is a boon to humanity (as was the original intent of automation), but any form of power and control + capitalism = immoral behavior. Concern over AI is really concern over capitalism. Even the fear of an AI rebellion we see in fiction is just a technologically advanced capitalist fear of the old slave uprising.
YuGiOhippie t1_j25mm7g wrote
Reply to comment by smariroach in An Argument in Favour of Unpredictable, Hard Determinism by CryptoTrader1024
If you cannot choose to care or not you are not really caring or not caring.
A puppet can act as if it cares but it’s a meaningless act.
A puppet forced to care by the laws of causality doesn’t really care. It’s not authentic because it’s forced.
Do you love me if i tell you with a gun to your head that you must love me? Of course not. Even if you swear YOU LOVE ME : if i forced you to say it : it is not authentic.
Wild-Bedroom-57011 t1_j25m53g wrote
Reply to comment by Meta_Digital in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
But it seems that the AI alignment issue is also a big concern. Either capitalists use AI for SUPER CAPITALISM (doing all the normal capitalism things, but faster and more effectively), in which case the issue lies solely in intent and motive, or capitalists incorrectly specify outcomes (cutting corners to make a profit), leading to misaligned AI that does really bad things. In either case, your arguments against capitalism only strengthen the concerns we have about AI.
hobond t1_j25le9t wrote
Reply to comment by Pheonix7719 in /r/philosophy Open Discussion Thread | December 26, 2022 by BernardJOrtcutt
Sorry, I've just noticed that I messed up the sentence, but I guess you get the point. All I'm trying to say is that society and human nature are always in view in Kant. So the "vital" is kept as an admonition; they represent a worldview.
jamesj t1_j265eh8 wrote
Reply to comment by Robotbeat in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
The existence of other people's qualia can't be proven. My own experiences are self-evident to me, and I think other people's qualia are self-evident to themselves, as well.