Recent comments in /f/philosophy
SanctusSalieri t1_j25kxnw wrote
Reply to comment by ThorDansLaCroix in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Your comment is actually still public, so it's bold to contradict what you just wrote.
pokoponcho t1_j25kqrv wrote
Reply to comment by InTheEndEntropyWins in An Argument in Favour of Unpredictable, Hard Determinism by CryptoTrader1024
Thank you for the detailed answer.
Not only do we use different definitions of free will but also different approaches to the subject.
You are talking about the usefulness and practicality of the concept of free will for society. My original comment had nothing to do with that.
In any case, thanks for your time. I learned new things.
Meta_Digital t1_j25klmm wrote
Reply to comment by Wild-Bedroom-57011 in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Yes; capitalism is the first system that seems poised to lead to human extinction unless we choose to overcome it rather than react after it does its damage and self-destructs.
The AI the author is referring to is either what we have today, which is just old mechanical automation, or the AI that is imagined to have intelligence. Either way, it's the motives of the creators of those systems that are the core problem of those systems.
Wild-Bedroom-57011 t1_j25k9vs wrote
Reply to comment by shumpitostick in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Because of how foreign AI can be. In the space of all human brains and worldviews, there is insane variation. But beyond this, in the space of all minds evolution can create, and all minds that could ever exist, a random, generally intelligent and capable AI could be the paradigmatic example of completely banal evil as it kills us all.
Wild-Bedroom-57011 t1_j25k0ew wrote
Reply to comment by Meta_Digital in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Is capitalism the most evil thing humanity has dealt with? More than feudalism, slavery, etc?
Further, AI isn't really imaginary; at worst the author is trying to pre-empt and avoid an issue that is less likely to come to pass.
ThorDansLaCroix t1_j25jrba wrote
Reply to comment by SanctusSalieri in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
I never compared what I am experiencing with Nazi death camps.
SanctusSalieri t1_j25j4ad wrote
Reply to comment by ThorDansLaCroix in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Nothing you are going through is at all similar to Nazi death camps, it's extremely insensitive to suggest it is. Have some perspective.
Edit: just saw your edit. Yeah. I've read Eichmann in Jerusalem. That's the whole premise of this discussion.
ThorDansLaCroix t1_j25iugv wrote
Reply to comment by SanctusSalieri in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
I have read most of Hannah Arendt's works and know the banality of evil concept very well, including how she developed it.
The Nazi defence I mentioned is from the Eichmann trial. I suggest you look up Hannah Arendt's book on it, Eichmann in Jerusalem.
SanctusSalieri t1_j25inxn wrote
Reply to comment by Whatmeworry4 in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
It's also important that Eichmann was a lying sack of shit mounting a desperate legal defense and certainly participated willingly in everything he did and shared the Nazi ideology.
Saladcitypig t1_j25ii5g wrote
Reply to How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Every single machine is made by a human. That can never be separated, and we need to stop pretending that we are at the stage where human hubris is not always a huge factor.
So if you want to understand computers, understand the scope of human mistakes.
SanctusSalieri t1_j25igrh wrote
Reply to comment by ThorDansLaCroix in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Taking a surface reading of Arendt's idea of the banality of evil (which isn't clearly the best description of all behavior we can call evil) and extrapolating that Nazism was banal and equivalent to your trouble getting recognition and all the services you might want as a disabled person in a rich country is actually kind of insane.
Pheonix7719 t1_j25i11b wrote
Reply to comment by hobond in /r/philosophy Open Discussion Thread | December 26, 2022 by BernardJOrtcutt
Snitching is telling the truth. Yes, Kant does say that an action can only be righteous when it could serve as a law governing people, to the satisfaction of the majority in its implementation; however, that doesn't mean you may lie, as honesty is held to be vital.
Indigo_Sunset t1_j25hlk9 wrote
Reply to comment by AndreasRaaskov in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
If the goal is to put a morals gate on ANI, then the process is limited to the rule-construction methodology of the instruction writers. This would be the banality of evil within such a system: culpability. It's furthered by the apathy of iteration, where a narrow-optimization AI obfuscates its instruction sets into greyscale through the black box, enabling a loss of complete understanding while letting the builders deny culpability ('wasn't me') as they point at a blackish box they built themselves.
In the case of Facebook, the obviousness of the effect has no bearing. It carries virtually no consequence without a form of culpability the current justice system is capable of attending to. Whether due to a lack of applicable laws, the adver$arial nature of the system, or the expectation of 'free market' corrections by 'rational people', the end product is highly representative of a banality that has no impetus to change.
hobond t1_j25hibm wrote
Reply to comment by Pheonix7719 in /r/philosophy Open Discussion Thread | December 26, 2022 by BernardJOrtcutt
I think you miss a point in Kant's ethics. Snitching in that position is not seen as acceptable by the majority of people. Kant acknowledges that, but insists we shouldn't be proud of not snitching; we should accept that we lied. Basically, we should stay humble and recognize that the ideas are no less important than the actions.
RipperNash t1_j25gpkq wrote
Reply to How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
The Machine Intelligence Research Institute has a paper on 'Coherent Extrapolated Volition' which essentially explains the unintended consequences of requests made to an AI by humans with limited knowledge.
MeetInPotatoes t1_j25fvxp wrote
>Geralt insulates himself from what happens in the world. He shows that, to some extent, he doesn’t care what happens to people, as long as he isn’t involved.
Interesting article, but I take issue with this paragraph in particular. The "doesn't want to get his hands dirty" angle is overreach. He chooses to take a more humble approach to his own judgment. He knows what is evil and what is not, but judging between two evils requires more than knowing whether something is wrong or right, a feeling I think most of us feel "in the pit of our stomach." It requires a belief that one is right about the matter of degrees. The instinct of wrong or right is the feeling he trusts, and the lore is big on communicating that Witchers are highly instinctual. Comparing two evils, however, no longer involves that gut instinct; it is instead a heady affair where bias can roam more freely. And that he prefers not to choose does not mean he won't if he is forced.
Lesser or greater evil is a matter for society to decide. He detects evil in a binary, instinctual way and won't pretend otherwise.
ThorDansLaCroix t1_j25f7qw wrote
Reply to comment by Slapbox in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
She realised it while watching the defence speech of a Nazi in court, who said that he did not feel responsible for sending millions of Jews to their death because it was not his intention; he did it because his duty was to follow the orders of his superiors.
Hannah Arendt then said that when we don't feel responsible for what we do, because of hierarchical duties or obedience to the law, we don't feel responsible for the consequences of our actions. So evil is banalised.
It is important to remember that during the Nazi regime most people didn't care about the killing, either in Germany or abroad. The war was not about it. It was only after the war that the mass killing was used as propaganda by the winners about saving the victims from the evil regime. And even today it is mostly about the Jewish victims, while you rarely see any mention of the mass killing of disabled people, the mentally ill, etc.
The reason I mention it is that I am disabled in Germany, and my experience as a disabled person feels like living in that Nazi-era society. I was actually made disabled by my neighbours, and they still keep doing torturous and disabling things to me. But whenever I look for help from friends and authorities, they don't seem to care at all. They all cite the law, saying my neighbours have the right to do what they do regardless of the consequences for me. And they think like this because they have been indoctrinated to believe that society must respect laws and authorities' orders for order's sake, even if it means sacrificing people, because otherwise the rule of order would be corrupted. And this is exactly why so much evil was accepted and banalised under the Nazi regime, and why the Nazi whom Hannah Arendt was watching in court justified sending millions to their death without feeling guilty or responsible.
When we look at society today, people haven't changed. It is exactly as it was at the time of the Nazi regime. Although there are many antifas in my neighbourhood claiming to protect minorities from Nazis, when I look for help they are just the same as the people who didn't care about disabled people being sent to their death. They tell me what the Nazis said back then: that it is the law, and the law is above all things, for order's sake.
AndreasRaaskov OP t1_j25f5g8 wrote
Reply to comment by glass_superman in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
The article doesn't mention it, but talking about economics is definitely also part of AI ethics.
AI ethics helps you understand the power Elon Musk gains if he tweaks the Twitter algorithm to promote posts he likes and shadow-ban posts he dislikes.
And the Koch brothers were deeply involved in the Cambridge Analytica scandal, where machine learning was used to manipulate voter behaviour in order to get Trump elected. Even with Cambridge Analytica gone, rumours still circulate that Charles Koch and a handful of his billionaire friends are training new models to manipulate future elections.
So even if evil billionaires are all you care about, you should still care about AI ethics, since it also covers how to protect society from people who use AI for evil.
shumpitostick t1_j25dlxl wrote
Reply to comment by glass_superman in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
The main problem in AI ethics is called "the alignment problem", but it's essentially the same concept that appears in economics as a market failure: the principal-agent problem. We put people in charge and have them act on our behalf, but their incentives (objective function) are different from ours. The discussion in AI ethics would benefit greatly from borrowing from economics research.
My point is, we already have overlords who don't want the same things as us and it's already a big problem. Why should AI be worse?
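The incentive mismatch described in the comment above can be sketched in a few lines of Python. This is a purely illustrative toy of my own; the action names and numbers are invented, and stand in for any proxy objective (e.g. engagement) that diverges from what the principal actually values:

```python
# Toy principal-agent / alignment mismatch: the "agent" optimizes a proxy
# score, while the "principal" cares about a different true value.
# Candidate actions, each scored under both objectives (made-up numbers).
actions = {
    "balanced_feed":   {"true_value": 10, "proxy_score": 5},
    "outrage_bait":    {"true_value": -4, "proxy_score": 9},
    "helpful_content": {"true_value": 8,  "proxy_score": 6},
}

def best_action(score_key):
    """Return the action that maximizes the given score, as an optimizer would."""
    return max(actions, key=lambda name: actions[name][score_key])

# The agent, maximizing its own incentive, picks a different action than
# the one the principal would want.
print(best_action("proxy_score"))  # -> outrage_bait
print(best_action("true_value"))   # -> balanced_feed
```

The divergence between the two maximizers is the whole problem: the optimizer is working correctly, but on the wrong objective.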
YuGiOhippie t1_j25dkaw wrote
Reply to comment by smariroach in An Argument in Favour of Unpredictable, Hard Determinism by CryptoTrader1024
Puppets cannot care. That’s the point.
Their care, their meaning, is fake if it's pre-determined. It is not authentic. If we are puppets, it doesn't arise from choice, only necessity.
SchonoKe t1_j25ddq3 wrote
Reply to comment by Whatmeworry4 in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
The book in its entirety is closer to what you said than that quote.
The book talks about how Eichmann knew full well what he was doing and what was happening (he once even used his position to "save" some people from the camps by brokering a quid pro quo deal, and IIRC he managed to forget this fact during his trial because it was such a minor event to him personally), but he cared far more about his career and doing his job well as it was assigned than about doing the right thing.
[deleted] t1_j25ch8p wrote
Reply to comment by [deleted] in Life is a game we play without ever knowing the rules: Camus, absurdist fiction, and the paradoxes of existence. by IAI_Admin
[removed]
AndreasRaaskov OP t1_j25buis wrote
Reply to comment by bildramer in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Honestly, this was my main motivation for writing this article. As an engineer, I wanted to know what philosophers thought of AI ethics, but every time I looked, I only found people talking about how superintelligence or artificial general intelligence (AGI) will kill us all.
As someone with an engineering mindset, I am not really that interested in whether AGI may or may not exist one day, unless you know a way to build one. What really interests me is building an understanding of how the Artificial Narrow Intelligence (ANI) that does exist is currently hurting people.
To be even more specific, I wrote about how the Instagram recommendation system may purposefully make teenage girls depressed, and I wanted to expand on that theory.
https://medium.com/@andreasrmadsen/instagram-influence-and-depression-bc155287a7b7
I do understand that talking about how some people may be hurt by ANI today is disappointing if you expected another WE ARE ALL GOING TO DIE by AGI article. Yet I find the first problem far more pressing, and I really wish more people in philosophy would apply their knowledge to the philosophical problems other fields are struggling with, instead of only looking at problems far in the future that may never materialize.
Capital_Net_6438 t1_j2597qc wrote
Reply to comment by hecaton_atlas in /r/philosophy Open Discussion Thread | December 26, 2022 by BernardJOrtcutt
Philosophy is the investigation of fundamental aspects of reality.
smariroach t1_j25ldfo wrote
Reply to comment by YuGiOhippie in An Argument in Favour of Unpredictable, Hard Determinism by CryptoTrader1024
Puppets cannot care, but we're not literal puppets. We can care, and have meaning.
Saying that caring is fake if it's pre-determined is not self-evident.
All it means is that whether we care depends on what it is we care (or don't care) about and on who we are. We could not be other than we are, and the things we form opinions about could not be other than they are, and therefore we will care (or not).
Why is the care only authentic if we can break the laws of causality? What does your use of "authentic" and "fake" mean in this context?