Recent comments in /f/philosophy
Whatmeworry4 t1_j24lhf8 wrote
Reply to comment by ConsciousInsurance67 in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Why do you assume that those consequences are negative for the person acting, or that they care? And how do you separate true ignorance versus willful ignorance?
ConsciousInsurance67 t1_j24kikk wrote
Reply to comment by Whatmeworry4 in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Those consequences are negative in the long term, so there is an element of ignorance in that evil.
Whatmeworry4 t1_j24k3s5 wrote
Reply to comment by kfpswf in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
I would disagree with her definition because I believe that the banality of evil is what happens when we do understand the full consequences of our actions, and just don’t care enough to change them.
Evil is not a cognitive error unless we are defining it as mental illness or defect. To me, true evil requires intent.
threedecisions t1_j24k2q6 wrote
Reply to comment by bildramer in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
I've heard that there are dogs that will eat until they die from their stomach exploding if they are supplied with a limitless amount of food. People are not so different.
It's not so much that these billionaires are necessarily especially evil as individuals, but their power and limitless greed lead them to evil outcomes. Like Eichmann's portrayal in the article: he was just doing his job without regard for the moral consequences.
Though when you hear about Facebook's role in the Rohingya genocide in Myanmar, it does seem as though Mark Zuckerberg is a psychopath. He seems to have little regard for the lives of people affected by his product.
Impossible_Sir6196 t1_j24jn60 wrote
This is based on the false premise of duality. Life almost never presents a simple ‘this’ or ‘that’ option.
Very often this argument is used to justify morally reprehensible actions. However, the 'lesser' evil is often far from the only actual option.
pokoponcho t1_j24fnrl wrote
Reply to comment by InTheEndEntropyWins in An Argument in Favour of Unpredictable, Hard Determinism by CryptoTrader1024
Please, help me understand your position. Can you explain the difference between libertarian free will and what you understand by free will?
Britannica seems to use a libertarian approach to define free will in general: "free will, in philosophy and science, the supposed power or capacity of humans to make decisions or perform actions independently of any prior event or state of the universe."
bildramer t1_j24f6fo wrote
Reply to How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
That's a somewhat disappointing article. Among other things, the man in the Chinese room is not analogous to the AI itself, he's analogous to some mechanical component of it. Let's write something better.
First, let's distinguish "AI ethics" (making sure AI talks like WEIRD neoliberals and recommends things to their tastes) and "AI notkilleveryoneism" (figuring out how to make a generally intelligent agent that doesn't kill everyone by accident). I'll focus on the second.
To briefly discuss what not killing everyone entails: Even without concerns about superintelligence (which I consider solid), strong optimization for a goal that appears good can be evil. Say you're a newly minted AI, part of a big strawberry company, and your task is to sell strawberries. Instead of any complicated set of goals, you have to maximize a number.
One way to achieve that is to genetically engineer better strawberries, improve the efficiency of strawberry farms, discover more about people's demand for strawberries and cater to it, improve strawberry market efficiency and liquidity, improve marketing, etc. etc. One easier way to achieve that is to spread plant diseases in banana, raspberry, orange, and peach farms/plantations. Or your strawberry competitors' farms, but that's riskier. You don't have to be a superhuman genius to generate such a plan, or to subdivide it into smaller steps, and ChatGPT can in all likelihood already do it if prompted right. You need others to perform some steps, but that's true of most large-scale corporate plans.
An AI that can create such a plan can probably also realize that it's illegal, but does it care? It only wants more strawberries. If it cares about the police discovering the crimes, because that lowers the expected number of strawberries made, it can just add stealth to the plan. And if it cares about its corporate boss discovering the crimes, that's solvable with even more stealth. You begin to see the problem, I hope. If you get a smarter-than-you AI and it delivers a plan and you don't quite understand everything it planned but it doesn't appear illegal, how sure are you that it didn't order a subcontractor to genetically engineer the strawberries to be addictive in step 145?
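The goal-specification problem described above can be sketched in a few lines of Python. This is a toy illustration only; the plan names and numbers are hypothetical, and the point is simply that a scalar objective contains no term for legality or ethics:

```python
# Toy planner: scores candidate plans purely by expected strawberries sold.
# "legal" is recorded in the data but never enters the objective.
plans = {
    "improve_farms":        {"expected_sales": 1.2e6, "legal": True},
    "better_marketing":     {"expected_sales": 1.5e6, "legal": True},
    "sabotage_competitors": {"expected_sales": 2.0e6, "legal": False},
}

def naive_score(plan):
    # The objective is a single number; nothing here penalizes illegality.
    return plan["expected_sales"]

best = max(plans, key=lambda name: naive_score(plans[name]))
print(best)  # -> "sabotage_competitors": the illegal plan wins
```

The optimizer "knows" which plans are illegal in the sense that the information is right there in its data, but since the objective never references it, that knowledge changes nothing. Filtering out flagged plans just pushes the problem into how completely you can enumerate what should be flagged.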
Anyway, that concern generalizes up to the point where all humans are dead and we're not quite sure why. Maybe human civilization as it is today could develop pesticides that stop the strawberry-kudzu hybrid from eating the Amazon within 20 years, and that would decrease strawberry sales. Can we stop this from happening? Most potential solutions to prevent it from happening don't actually work upon closer examination. E.g. "don't optimize the expectation of a number, optimize reaching the 90% quantile of it" adds a bit of robustness, but does not stop subgoals like "stop humans from interfering" or "stop humans from realizing they asked the wrong thing", even if the AI fully understands they would have wanted something else, and why and how the error was made.
So, optimizing for something good, doing your job, something that seems banal to us, can lead to great evil. You have to consider intelligence separately from "wisdom", and take care when writing down goals. Usually your goals get parsed and implemented by other humans, who fully understand that we have multiple goals, and "I want a fast car" is balanced against "I don't want my car to be fueled by hydrazine" and "I want my internal organs to remain unliquefied". AIs may understand but not care.
ting_bu_dong t1_j24djqb wrote
Reply to comment by glass_superman in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
They said that people like him exist. The large majority of people like him are ruling no one, obviously.
Whether any current leaders/rulers/whatever you want to call them are like Pol Pot is debatable... But, no, not so much.
Hehwoeatsgods t1_j24d9d7 wrote
Reply to comment by bumharmony in Life is a game we play without ever knowing the rules: Camus, absurdist fiction, and the paradoxes of existence. by IAI_Admin
Life favors life or it would be dead.
ting_bu_dong t1_j24d3cx wrote
Reply to comment by glass_superman in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Elon Musk doesn't want to turn us all into paperclips. Yet.
https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer
oramirite t1_j24cgja wrote
Reply to comment by cmustewart in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Your intuition? You kinda just sound like you're being holier than thou. If you have researched AI, and know what it can do, then anyone else is capable of that as well. I don't know what trade you're in but it's not hard to do the research and understand the ethical risks of AI in our society and how it will launder existing societal biases deeper into our culture.
oramirite t1_j24c710 wrote
Reply to comment by [deleted] in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
But without morality there is no "care"; caring is itself a moral act.
oramirite t1_j24c4i8 wrote
Reply to comment by [deleted] in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Your first sentence describes morality. The philosophical discussion of whether we should hurt other people is the moral dilemma. Whether you want to use other words or not, that's what it is.
You seem to just be struggling with what you define as moral just like every other human being.
cmustewart t1_j24bxuf wrote
Reply to comment by glass_superman in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
I feel like either you or I missed the point of the article, and I'm not sure which. I didn't get any sense of "what if AI takes over". My read is that the author thinks "AI" systems should have some sort of consequentialism built in, or at least considered in the goal-setting parameters.
The bit that resonates with me is that highly intelligent systems are likely to cause negative unintended consequences if we don't build this in up front. Even for those with the most noble intentions.
oramirite t1_j24bvnc wrote
Reply to comment by [deleted] in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
So if I removed you from the earth because I perceive your amoral behavior as disruptive to the moral system, you'd clearly have no problem with that. Do you just act in your own self interest all the time? Do you have any relationships? Do you believe in treating other humans with respect?
oramirite t1_j24bg3n wrote
Reply to comment by glass_superman in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
The risk, when it comes to AI, is its linkage to these people. AI is a very dangerous arm of systemic racism, systemic sexism, and white supremacy. It's just a system for laundering the terrible biases we already exhibit even deeper into our daily lives, under the guise of being unbiased. We can't ignore the problems AI will bring, because it's an escalation of what we've already been dealing with.
Gomez-16 t1_j24b6nq wrote
That's how US politics has worked for 50 years.
cmustewart t1_j249qd2 wrote
Reply to comment by [deleted] in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Who doesn't understand this already? Given the incredible depth of human ignorance, I'd imagine a fair amount of corporate tech hierarchy hasn't given it a single thought. My intuition is that the vast majority of humans have a view of AI driven by cultural depiction, rather than by experience or education.
Polychrist t1_j249q4x wrote
Reply to comment by Jingle-man in An Argument in Favour of Unpredictable, Hard Determinism by CryptoTrader1024
And I disagree. I think the existence of the universe is necessary.
Jingle-man t1_j249jeo wrote
Reply to comment by Polychrist in An Argument in Favour of Unpredictable, Hard Determinism by CryptoTrader1024
>I’m just not sure that it made sense when you said that it’s possible that there would’ve been nothing, and that makes it beautiful
It didn't make sense at all, because language can't really capture this kind of thing well. But to be fair, I said "the universe might as well not have been" which isn't wrong. There's no reason for the universe to exist, but neither is there any reason for it not to exist. The universe is "unnecessary" in that its existence itself is not a matter of necessity. The universe truly "doesn't need to exist" because "need" implies necessity. But as I've said again and again, Necessity is not necessary. It is (that is, the universe is) unnecessary.
glass_superman t1_j249j7b wrote
Reply to comment by bildramer in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Pol Pot rules no one; he's dead.
[deleted] t1_j249d2w wrote
bildramer t1_j2494g3 wrote
Reply to comment by glass_superman in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Saying "we are being ruled by evil billionaires" when people like Pol Pot exist is kind of an exaggeration, don't you think?
RegurgitatingFetus t1_j24nhce wrote
Reply to comment by Whatmeworry4 in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
And how do you detect intent? Humor me.