Recent comments in /f/philosophy
SanctusSalieri t1_j29295w wrote
Reply to comment by sammarsmce in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
It's extremely pedantic to suggest disabled people can't be responsible for what they say and need to be handled with kid gloves. The fucking ironic thing is I'm also disabled. Does that mean you need to delete your comment and agree with everything I say? Or am I owed the dignity of being treated like anyone else arguing a position?
whodo-i-thinkiam t1_j291zzs wrote
Reply to comment by IAI_Admin in We have all the resources we need to solve the world's greatest problems, so long as we can rise above our tribal instincts. by IAI_Admin
What is "universal morality?"
SanctusSalieri t1_j291t1q wrote
Reply to comment by sammarsmce in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Present day Germany is not fascist. Words have meaning and if you don't know what fascism is there are books that could help you. Calling present day Germany fascist misconstrues history and the present and makes us less informed than we would be by having a proper analysis of what is going on.
Meta_Digital t1_j290zs4 wrote
Reply to comment by ShalmaneserIII in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
When wealth is consolidated, that means it moves from a lot of places and into few places. That's why the majority of the world is poor and only a very tiny portion is rich.
IAI_Admin OP t1_j290l41 wrote
Reply to We have all the resources we need to solve the world's greatest problems, so long as we can rise above our tribal instincts. by IAI_Admin
Human rights activist Peter Tatchell examines the tribal nature of morality, with barrister and founder of Effective Giving UK Natalie Cargill, and political theorist David Miller. The panel unpick the binaries of tribal vs. universal morality, and moral psychology vs. ethics, to put forward their understanding of where society is at the moment, and what scope there is for social progress through better employment of our moral sense.
NickDixon37 t1_j28wpod wrote
Reply to comment by pokoponcho in An Argument in Favour of Unpredictable, Hard Determinism by CryptoTrader1024
Thank you for taking my post seriously - as it's way more pedestrian than what usually counts as philosophy.
I tend to eschew most dogma, and almost all religions and formal philosophies, in favor of pragmatism, as my intellectual and scientific skills are limited by my own humanity. But I also have a tendency to see right through religious and philosophical bullshit. So I don't believe in god, but I do believe in love, and beauty - and magic. And balance. For me, the Serenity Prayer is a great oversimplification of the answer to the determinism debate:
>God, grant me the serenity to accept the things I cannot change, courage to change the things I can, and wisdom to know the difference.
It's an oversimplification - because it's impossible for us to know absolutely what we can change - and what we can't. But there's still great value in trying to discern what's possible, without worrying too much about always getting it right.
[deleted] t1_j28uzg2 wrote
Reply to comment by Capital_Net_6438 in /r/philosophy Open Discussion Thread | December 26, 2022 by BernardJOrtcutt
[removed]
Capital_Net_6438 t1_j28qpoi wrote
Reply to comment by [deleted] in /r/philosophy Open Discussion Thread | December 26, 2022 by BernardJOrtcutt
As a fan of logic, rejecting logic isn’t an option for me.
Suppose we don’t assume anything about anyone’s knowledge on day 1. And suppose, as is usually the case, that we’re considering proving a surprise quiz is impossible. Then it surely does not follow that a surprise quiz can’t happen on day 5.
The argument is supposed to go that at the end of day 4, the student knows there’ll be a quiz on day 5. But he has no idea really. We didn’t build him having knowledge into the setup at day 1 and therefore he won’t magically have knowledge at day 4. The assertion that he does have knowledge at that point is totally unsupported.
So of course if we don’t assume anything about the student’s knowledge in the setup there could be a surprise quiz on day 5. Day 5 comes; a quiz happens; and the ignorant student says - “wow, I didn’t see that coming.”
Wild-Bedroom-57011 t1_j28kq4l wrote
Reply to comment by tmac213 in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
But they said "has yet to content with"
Unless you do in fact mean that every single system of governance, including things before slavery that are hard to conceptualize under one framework, extreme state control (whether you believe NK, USSR are actually socialist or not), etc. etc.
I'm not making a pro-capitalist argument, merely the point that these evils are not unique to capitalism.
And ignoring the issue of slavery historically ("has yet to content with") does seem a bit of a deliberate sidestep. Of course capitalism will be the primary consumer of slave labour, but slavery, absolute poverty, etc. are lower and falling. Further, modern slavery is completely terrible, but less severe than chattel slavery, or the slavery that came before that.
But again, my argument was never that capitalism is better than anything else, merely that it isn't the most evil thing. Genocide might be. Or something completely different.
YuGiOhippie t1_j28hhfd wrote
Reply to comment by smariroach in An Argument in Favour of Unpredictable, Hard Determinism by CryptoTrader1024
You know what is great? This fruitless conversation is not determined to go on endlessly: you are free to disengage at any time.
robothistorian t1_j28b3m5 wrote
Reply to comment by AndreasRaaskov in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
>do not expect the same quality and nuance as a book or a paper written by a professor with editorial support and hidden behind a paywall.
If you are going to put something out in public with your name on it (in other words publish) and want it to be taken seriously, then it is necessary to ensure that it is carefully thought through and argued persuasively. This accounts for the "nuance and quality". References are important, but in a relatively informal (non-academic) setting, not mandatory.
Further, professors (and other less senior academics) usually only get editorial support after their work has been accepted for publication, which also means it has been through a number of rounds of peer review.
>I hope one day to get better
I am sure if you put in the effort, you will.
Heidegger1236 t1_j28az2b wrote
Reply to How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
That is why technology must be in the service of the world, not man.
AndreasRaaskov OP t1_j28acyk wrote
Reply to comment by robothistorian in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Thank you for the extra sources; I will check them out and hopefully include them in further work.
In the meantime, I hope you have some understanding of the fact that the article was written by a master's student and is freely available, so do not expect the same quality and nuance as a book or a paper written by a professor with editorial support and hidden behind a paywall.
I hope one day to get better
Heidegger1236 t1_j289y5m wrote
When I was young I read Nietzsche extensively; however, for some reason, later on I never found him to be all that interesting. I like Schopenhauer more.
sammarsmce t1_j288kpp wrote
Reply to comment by SanctusSalieri in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Any instance of fascism is fascism. Don’t start with the “some people have it worse” I really don’t like you and you need to leave them alone.
sammarsmce t1_j288hkq wrote
Reply to comment by SanctusSalieri in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
I think it’s insane that you would respond to a comment by a disabled person expressing their own experience with evil by calling it insane. You need empathy, and you have just ironically exemplified the ethos of the original theory.
sammarsmce t1_j288cin wrote
Reply to comment by ThorDansLaCroix in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Hey honey, thank you for your well written and informative response. I am so sorry you have been oppressed by the people in your area just know you have my support and if you need anything I am a message away.
AndreasRaaskov OP t1_j28802z wrote
Reply to comment by Rhiishere in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Something that was in the original draft, but that I cut, was a stronger emphasis on the fact that artificial intelligence is not like human intelligence. What AI does is solve a specific problem better than humans, while being unable to do anything outside that specific problem.
A good example would be a pathfinding algorithm in a GPS that can find the fastest route from A to B. It is simple, widely used, and performs an intelligent task way faster and sometimes better than a human.
However, my article was about how even simple systems can be dangerous if they don't have a moral code.
Take the GPS again: first of all, death by GPS is a real phenomenon, since the GPS doesn't evaluate how dangerous a route may be.
But even in more mundane settings, we see GPS make ethical choices without being aware it is making them. Suppose, for example, a GPS finds two routes to your location: one is shorter, while the other is longer but faster since it uses the highway. Here you may argue that it should take the short road to minimise CO2 impact; we could also consider the highway more dangerous for the driver of the car, while taking the slow road may put pedestrians at risk. The newest GPS systems also consider overall traffic based on real-time data, and they sometimes face a choice where they could send some cars down a longer road to avoid congestion, sacrificing some people's time in order to make the overall transport time shorter.
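The trade-off described above can be sketched as a toy cost function. Everything below is invented for illustration (route names, cost figures, and weights are hypothetical, not taken from any real navigation system): the point is that whoever picks the weights has already made the ethical choice for the driver.

```python
# Each candidate route: (name, travel minutes, fuel litres, pedestrian exposure).
# The short road burns less fuel but passes more pedestrians; the highway
# is faster but burns more fuel. All numbers are made up for illustration.
ROUTES = [
    ("short road", 30, 2.0, 0.8),
    ("highway",    22, 3.5, 0.1),
]

def pick_route(routes, w_time, w_fuel, w_ped):
    """Return the name of the route minimising a weighted sum of costs."""
    def cost(route):
        _, minutes, fuel, pedestrians = route
        return w_time * minutes + w_fuel * fuel + w_ped * pedestrians
    return min(routes, key=cost)[0]

# A driver-time-first weighting picks the highway...
assert pick_route(ROUTES, w_time=1.0, w_fuel=0.1, w_ped=0.0) == "highway"
# ...while a CO2-first weighting flips the choice to the short road.
assert pick_route(ROUTES, w_time=0.1, w_fuel=10.0, w_ped=0.0) == "short road"
```

The algorithm itself is just `min` over a scalar cost; the "ethics" live entirely in the weights, which is exactly the banality the article describes: no single step looks like a moral decision.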
Rethious t1_j285mlf wrote
Geralt learns pretty quickly that refusing to get involved doesn’t work out. Refusing to choose amounts to an endorsement of the status quo by inaction. To choose between a greater and lesser evil is an unfortunate fact of life. Triage is a fairly irresistible example of this: the choice must be made to allow some to die so that others may live.
Rethious t1_j285dyd wrote
Reply to comment by Impossible_Sir6196 in The Witcher and the Lesser of Two Evils by ADefiniteDescription
Of course there are a plurality of approaches to any given situation. That does not mean dilemmas are useless for examining schools of philosophy.
As well, in reality, when faced with a situation, we tend to eliminate options from the myriad available until we are left with a dilemma.
ConsciousInsurance67 t1_j284oji wrote
Reply to comment by Whatmeworry4 in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Thank you. Then I see that sometimes the difference between true evil and banal evil is a social construct: "bad" behaviours are rationalised to be congruent with a good self-image ("it was my job, I had to do it for the better"). This happens when no universal ethics are displayed. I think we have a consensus on what the human rights are, but there isn't a universal ethic for all humanity; that is a problem philosophy, psychology, and sociology have to solve.
smariroach t1_j280kp4 wrote
Reply to comment by YuGiOhippie in An Argument in Favour of Unpredictable, Hard Determinism by CryptoTrader1024
You don't seem to be trying to provide your own definitions, reasons, or elaborations, and you ignore all my questions. I'm not sure why you are here if you don't want to explore philosophy.
The59Sownd t1_j292osj wrote
Reply to We have all the resources we need to solve the world's greatest problems, so long as we can rise above our tribal instincts. by IAI_Admin
Rising above our tribal instincts? I feel like we were moving in that direction, now we seem to be doubling down on these instincts.