Recent comments in /f/philosophy
thewimsey t1_j26m0pd wrote
Reply to comment by Whiplash17488 in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
That's not Arendt's actual argument.
Eichmann knew what he was doing, and that it was bad. He just didn't care because he subordinated that to other goals.
An algorithm can't be evil. It can just be a bad algorithm. You might as well say that a sticking speedometer that causes you to speed is evil.
SanctusSalieri t1_j26lv0f wrote
Reply to comment by Cruciblelfg123 in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
I said the exact opposite of "set apart from history." I offered some of the particular historical conditions that allow us to understand the events. By generalizing between situations as diverse as Nazi Germany and 21st Century Europe or America you misunderstand both -- and misunderstanding the present is quite serious because we might want to do something to change it.
ShalmaneserIII t1_j26ldc0 wrote
Reply to comment by Meta_Digital in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Automation is great. Without it, we'd still be making everything by hand and we'd have very few manufactured goods as a result, and those would be expensive.
So if you don't want endless growth, how do you suggest dealing with people who want more tomorrow than they have today?
[deleted] t1_j26l8y6 wrote
Meta_Digital t1_j26l7dy wrote
Reply to comment by thewimsey in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
I don't know how to respond to this because it's clear it would be an uneven conversation. You're missing very basic required knowledge here. Inequality, for instance, is at its highest point in recorded history. Capitalism is a form of authoritarianism. Economic conflict turns into military conflict which increases the risk of nuclear war. Capitalism is not human nature; it's actually pretty recent and radically different from its precursors in several important ways. I have no idea what you're even talking about regarding communism or how it's even relevant.
glass_superman t1_j26l6c5 wrote
Reply to comment by cmustewart in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
That's totally what is going to happen. Look at international borders. As nuclear weapons and ICBMs have proliferated, national borders have become basically permanent. Before WWII, shit was moving around all the time.
AI will similarly cement the classes. We might as well have a caste system.
cassidymcgurk t1_j26l5sn wrote
Reply to comment by cassidymcgurk in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Upper Second, Southampton University, graduated 1988. Maybe I'm just older.
Cruciblelfg123 t1_j26kt1d wrote
Reply to comment by SanctusSalieri in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
I don’t see how I’m posing a moral question, and I definitely don’t think I’m asking an especially hard or loaded question. You said these events are set apart from history; I asked how, because they seem, at least on the surface, quite typical if very extreme. You said historians decide what’s extreme (which is a non-answer and an ad hominem response, to be clear), and when I asked again, your response was to tell me exactly how historians go about categorizing events. If you understand how they go about this, and the synthesized explanations and comparative studies that have gone into the topic, then it shouldn’t be that hard to give me at least an ELI5 of what exactly separates these things from other seemingly similar events in history. You said these events are unique, and I’m literally just asking why.
Meta_Digital t1_j26krzm wrote
Reply to comment by ShalmaneserIII in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Well, the fundamental problem with capitalism is that it just doesn't work. Not in the long run. Infinite exponential growth is a problem, especially as an economic system. Eventually, in order to maintain that growth, you have to sacrifice all morality. In the end, you have to sacrifice life itself if you wish to maintain it. Look at the promises vs. the consequences of automation for a great example of how capitalism, as a system and an ideology, ruins everything it touches. You don't need forced altruism to have some decency in the world; you just need a system that doesn't go out of its way to eliminate every possible hint of altruism in the world to feed its endless hunger.
cassidymcgurk t1_j26krzd wrote
Reply to comment by SanctusSalieri in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
I have a degree in history as well
thewimsey t1_j26kdvy wrote
Reply to comment by Meta_Digital in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
>when capitalism is right there being the most banal and most evil thing humanity has yet to contend with
This is ridiculous.
>Angry rate downs despite rising inequality, authoritarianism, climate change, and the threat of nuclear war - all at once.
It's because you don't seem to know anything about history, when inequality was much worse, authoritarianism involved dictators and actual fascists, and the threat of nuclear war was much, much greater.
I'm not sure why you want to blame climate change on capitalism rather than on, oh, humanity. Capitalism is extremely green compared to the ecological disasters created every day by communism.
ammonium_bot t1_j26k8bq wrote
Reply to comment by Funoichi in Life is a game we play without ever knowing the rules: Camus, absurdist fiction, and the paradoxes of existence. by IAI_Admin
> and payed in
Did you mean to say "paid"?
Explanation: Payed means to seal something with wax, while paid means to give money.
Total mistakes found: 211
I'm a bot that corrects grammar/spelling mistakes. PM me if I'm wrong or if you have any suggestions.
ShalmaneserIII t1_j26k7iw wrote
Reply to comment by Meta_Digital in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
So if the problem with both capitalism and AI is that the people who create them use them for their own ends and motives, is your problem simply that people want something other than some general good for all humanity? Is your alternative forced altruism or something like it?
SanctusSalieri t1_j26izz4 wrote
Reply to comment by Cruciblelfg123 in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Historians tend to historicize. That means first treating particular events using an empirical method and understanding them on their own merits, then synthesizing explanations, comparative studies, and so on. They do this because it's the best way to do history. Generally they would avoid the morally loaded and aggrieved tone you're taking. Saying something is peculiar and particular doesn't preclude comparison, and it is not a judgment of gravity, seriousness, or worthiness of study.
SanctusSalieri t1_j26imzh wrote
Reply to comment by monsantobreath in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Imagine not understanding what "most notable" means.
monsantobreath t1_j26ij3q wrote
Reply to comment by SanctusSalieri in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
>and the most notable feature of Nazi Germany.
And that's the worst thing about our perception of Nazism: as if, unless you're engineering such industrial murder, there's no right to discuss its qualities as they are found outside the Third Reich.
So much happened before the Final Solution.
Cruciblelfg123 t1_j26hvzb wrote
Reply to comment by SanctusSalieri in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
What makes these extremist views exceptional compared to all the other extreme, xenophobic, desperate supremacist ideas throughout history, to the point where they can't even be considered similar but much worse, and are apparently completely unique compared to everything we had done up until that point?
kfpswf t1_j26gwv2 wrote
Reply to comment by Whiplash17488 in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
>I think it's more that the Nazis thought they were the good guys, genuinely, rather than people doing evil for the sake of evil.
Yes, that's the 'warped perception' I was referring to. It was a worldview of a very insecure, power-drunk Hitler that became their guiding light.
>My examples are imperfect, but the premise of her argument is that nobody is capable of assenting to a judgement they think is evil. Everyone assents to doing “good” at some level.
Your examples are great, actually. Yes, as long as you can brainwash people into believing they're doing good, and we know how easy that is, people will continue to commit evil rather enthusiastically.
>Her paper was intentionally controversial and was not meant as an excuse for the holocaust.
It may not have focused on the overall evil of the holocaust, but the general mechanism is the same. You adopt a flawed or limited worldview, and then commit evil in the name of your greater good.
robothistorian t1_j26dpqc wrote
Reply to comment by AndreasRaaskov in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
>as an engineer I wanted to know what philosophers thought of AI ethics, but every time I tried to look for it, I only found people talking about superintelligence or Artificial general intelligence (AGI) will kill us all.
I'm afraid that in that case you are either not looking hard enough or looking in the wrong places.
I would recommend you begin by looking into the domain of "technology/computer and ethics". So, for example, you will find a plethora of works collected under various titles such as Value Sensitive Design, Machine Ethics etc.
That being said, it may also be helpful to clarify some elements of your article, which are a bit disturbing.
First, you invoke the Shoah and then focus on Arendt's work in that regard. But, with specific reference to your own situation, the more relevant reference would have been to Aktion T4 of the Nazis (This is an article that lays out how and where the program began). As is well known, the rationale underlying that mass murder system (and it was a "system") was grounded, specifically, on eugenics, and, more abstractly, on the notion of an "idealized human". The Shoah, on the other hand, was grounded on a racial principle according to which any race considered to be "non-Aryan" was a valid target of a racial cleansing program, which resulted in the Shoah. It is important to be conceptually clear about these two distinct operative concepts: the T4 program was one of mass murder; the Shoah was an act of genocide. One may not immediately appreciate the difference, but let me assure you, the difference matters both in legal and in ethico-political terms. This is a controversial perspective within what is considered "Holocaust Studies", but it is, in my opinion, a distinction to be aware of.
Second, the notion of "evil" that you impute to AI is rather imprecise. It is so because it is likely based on an imaginary and speculative notion of AI. Perhaps a more productive way to approach this problem would be to look through the lens of what Gernot Böhme refers to as "invasive technification". There is a lot of work that is being done on the ethical issues surrounding this notion of progressive technification given some of the problems that are arising as a consequence of this emergent and evolving process. The Robodebt problem is a classic example. As Prof. van den Hengen (quoted in the article) points out
>Automation of some administrative social security functions is a very good idea, and inevitable. The problem with Robodebt was the policy, not the technology. The technology did what it was asked very effectively. The problem is that it was asked to do something daft.
This is, generally speaking, also true about most other computerized systems including the "AI systems" that are driving military and combat systems.
Thus, I'd argue that the ethico-moral concern needs to be targeted towards the designers of the systems, the users of the system and only secondarily to the technologies involved. Some, of course, disagree with this. They contend that we should be looking to (and here they slip into a kind of speculative and futuristic mode) design "artificial moral machines", that is to say, machines that are intrinsically capable of engaging in moral behaviour. This is a longer and more detailed treatment of the subject of "moral machines". I have serious reservations about this, but that is irrelevant in this context.
In conclusion, I would like to say that while I am empathetic to your personal situation, the article that you have shared, while appreciated, is not really on the mark. This kind of discussion requires a more nuanced and carefully thought-out approach, and an awareness of the work that has been done, and is currently being done, in the field.
SanctusSalieri t1_j26d65v wrote
Reply to comment by Cruciblelfg123 in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Well, historians get to decide these things, and having been one, I can say they would all disagree.
Cruciblelfg123 t1_j26amkh wrote
Reply to comment by SanctusSalieri in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
I don’t see how any of those are particularly unique fads as far as humanity goes; again, we aren’t a stable bunch. It is definitely a modern evolution of pretty old supremacist ideas, but supremacy is nothing new, and all of those are just excuses for it that the Nazis used to their advantage. Any new idea is still just viewed through the same limited lens any one of us short-lived, predictable assholes can see it through.
Weird_Sentence_789 t1_j26a2sa wrote
If evil vanished completely, might goodness continue to be good?
For example, do the good come in a ranking among themselves? Would the good at the bottom then be bad compared to the others?
Must there be evil for there to be goodness? Or, is there no evil when there is only good?
tmac213 t1_j265p0f wrote
Reply to comment by Wild-Bedroom-57011 in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
In the current world I think the answer is clearly yes, although I would amend the word "evil" to something like destructive or harmful, since an economic system doesn't need to contain malicious intent in order to harm people. Feudalism was and is awful, but has largely given way to capitalism in most places in the modern world. As for slavery, capitalism has been the primary consumer of slave labor since the industrial revolution. So yes, capitalism is the worst.
YuGiOhippie t1_j26maaf wrote
Reply to comment by smariroach in An Argument in Favour of Unpredictable, Hard Determinism by CryptoTrader1024
Cool, now love doesn't exist because it can't be defined outside of your presupposed deterministic worldview.
Nihilism as I said.