Recent comments in /f/philosophy

Whatmeworry4 t1_j24k3s5 wrote

I would disagree with her definition because I believe that the banality of evil is what happens when we do understand the full consequences of our actions, and just don’t care enough to change them.

Evil is not a cognitive error unless we are defining it as mental illness or defect. To me, true evil requires intent.

18

threedecisions t1_j24k2q6 wrote

I've heard that there are dogs that will eat until their stomachs burst and they die, if they're supplied with a limitless amount of food. People are not so different.

It's not so much that these billionaires are necessarily especially evil as individuals, but their power and limitless greed lead them to evil outcomes. Like Eichmann as portrayed in the article: he was just doing his job, without regard for the moral consequences.

Though when you hear about Facebook's role in the Rohingya genocide in Myanmar, it does seem as though Mark Zuckerberg is a psychopath. He seems to have little regard for the lives of people affected by his product.

3

pokoponcho t1_j24fnrl wrote

Please help me understand your position. Can you explain the difference between libertarian free will and what you understand free will to be?

Britannica seems to use a libertarian approach to define free will in general: "free will, in philosophy and science, the supposed power or capacity of humans to make decisions or perform actions independently of any prior event or state of the universe."

2

bildramer t1_j24f6fo wrote

That's a somewhat disappointing article. Among other things, the man in the Chinese room is not analogous to the AI itself, he's analogous to some mechanical component of it. Let's write something better.

First, let's distinguish "AI ethics" (making sure AI talks like WEIRD neoliberals and recommends things to their tastes) and "AI notkilleveryoneism" (figuring out how to make a generally intelligent agent that doesn't kill everyone by accident). I'll focus on the second.

To briefly discuss what not killing everyone entails: Even without concerns about superintelligence (which I consider solid), strong optimization for a goal that appears good can be evil. Say you're a newly minted AI, part of a big strawberry company, and your task is to sell strawberries. Instead of any complicated set of goals, you have to maximize a number.

One way to achieve that is to genetically engineer better strawberries, improve the efficiency of strawberry farms, discover more about people's demand for strawberries and cater to it, improve strawberry market efficiency and liquidity, improve marketing, etc. etc. One easier way to achieve that is to spread plant diseases in banana, raspberry, orange, and peach farms/plantations. Or your strawberry competitors', but that's riskier. You don't have to be a superhuman genius to generate such a plan, or to subdivide it into smaller steps, and ChatGPT can in all likelihood already do it if prompted right. You need others to perform some steps, but that's true of most large-scale corporate plans.

An AI that can create such a plan can probably also realize that it's illegal, but does it care? It only wants more strawberries. If it cares about the police discovering the crimes, because that lowers the expected number of strawberries made, it can just add stealth to the plan. And if it cares about its corporate boss discovering the crimes, that's solvable with even more stealth. You begin to see the problem, I hope. If you get a smarter-than-you AI and it delivers a plan and you don't quite understand everything it planned but it doesn't appear illegal, how sure are you that it didn't order a subcontractor to genetically engineer the strawberries to be addictive in step 145?

Anyway, that concern generalizes up to the point where all humans are dead and we're not quite sure why. Maybe human civilization as it is today could develop pesticides that stop the strawberry-kudzu hybrid from eating the Amazon within 20 years, and that would decrease strawberry sales. Can we stop this from happening? Most potential solutions to prevent it from happening don't actually work upon closer examination. E.g. "don't optimize the expectation of a number, optimize reaching the 90% quantile of it" adds a bit of robustness, but does not stop subgoals like "stop humans from interfering" or "stop humans from realizing they asked the wrong thing", even if the AI fully understands they would have wanted something else, and why and how the error was made.
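The quantile point can be made concrete with a toy sketch (hypothetical numbers and plan names, not from the article): if the plan with the bad subgoal dominates the honest plan in both its average outcome and its upper tail, then switching the objective from expectation to the 90th-percentile outcome changes nothing about which plan gets picked.

```python
import random

random.seed(0)

# Simulated strawberry-sales outcomes for two hypothetical plans.
# The "aggressive" plan (with the illegal/stealth subgoal) has both a
# higher mean and a fatter upper tail than the "honest" plan.
def simulate(plan, n=10_000):
    if plan == "honest":
        return [random.gauss(100, 20) for _ in range(n)]
    else:  # "aggressive"
        return [random.gauss(150, 30) for _ in range(n)]

def expectation(xs):
    return sum(xs) / len(xs)

def quantile(xs, q=0.9):
    xs = sorted(xs)
    return xs[int(q * (len(xs) - 1))]

# Under either objective, the same plan wins.
for objective in (expectation, quantile):
    best = max(["honest", "aggressive"],
               key=lambda p: objective(simulate(p)))
    print(objective.__name__, "->", best)
```

Both loops print `aggressive`: quantile-maximization only adds robustness against variance, not against a plan whose whole distribution is shifted upward by the subgoal you didn't want.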

So, optimizing for something good, doing your job, something that seems banal to us, can lead to great evil. You have to consider intelligence separately from "wisdom", and take care when writing down goals. Usually your goals get parsed and implemented by other humans, who fully understand that we have multiple goals, and that "I want a fast car" is balanced against "I don't want my car to be fueled by hydrazine" and "I want my internal organs to remain unliquefied". AIs may understand but not care.

17

ting_bu_dong t1_j24djqb wrote

They said that people like him exist. The large majority of people like him are ruling no one, obviously.

Whether any current leaders/rulers/however-you-want-to-call-them are like Pol Pot is debatable... But, no, not so much.

4

oramirite t1_j24cgja wrote

Your intuition? You kinda just sound like you're being holier than thou. If you have researched AI, and know what it can do, then anyone else is capable of that as well. I don't know what trade you're in but it's not hard to do the research and understand the ethical risks of AI in our society and how it will launder existing societal biases deeper into our culture.

1

oramirite t1_j24c4i8 wrote

Your first sentence describes morality. The philosophical discussion of whether we should hurt other people is the moral dilemma. Whether you want to use other words or not, that's what it is.

You seem to just be struggling with what you define as moral just like every other human being.

2

cmustewart t1_j24bxuf wrote

I feel like either you or I missed the point of the article, and I'm not sure which. I didn't get any sense of "what if AI takes over". My reading is that the author thinks "AI" systems should have some sort of consequentialism built in, or considered in the goal-setting parameters.

The bit that resonates with me is that highly intelligent systems are likely to cause negative unintended consequences if we don't build this in up front. Even for those with the most noble intentions.

45

oramirite t1_j24bvnc wrote

So if I removed you from the earth because I perceive your amoral behavior as disruptive to the moral system, you'd clearly have no problem with that. Do you just act in your own self interest all the time? Do you have any relationships? Do you believe in treating other humans with respect?

1

oramirite t1_j24bg3n wrote

The risk, when it comes to AI, is its linkage to these people. AI is a very dangerous arm of systemic racism, systemic sexism, and white supremacy. It's just a system for laundering the terrible biases we already exhibit even deeper into our daily lives, under the guise of being unbiased. We can't ignore the problems AI will bring, because it's an escalation of what we've already been dealing with.

4

cmustewart t1_j249qd2 wrote

Who doesn't understand this already? Given the incredible depth of human ignorance, I'd imagine a fair amount of corporate tech hierarchy hasn't given it a single thought. My intuition is that the vast majority of humans have a view of AI driven by cultural depiction, rather than by experience or education.

4

Jingle-man t1_j249jeo wrote

>I’m just not sure that it made sense when you said that it’s possible that there would’ve been nothing, and that makes it beautiful

It didn't make sense at all, because language can't really capture this kind of thing well. But to be fair, I said "the universe might as well not have been" which isn't wrong. There's no reason for the universe to exist, but neither is there any reason for it not to exist. The universe is "unnecessary" in that its existence itself is not a matter of necessity. The universe truly "doesn't need to exist" because "need" implies necessity. But as I've said again and again, Necessity is not necessary. It is (that is, the universe is) unnecessary.

1