Recent comments in /f/philosophy

JonBonFucki t1_j5t26cs wrote

It's a question of personhood outside of humanity. Can AI be sentient? Can animals on this planet evolve enough to become sentient? What about aliens from other planets? Sure, if we are seeing them, they built spaceships and came here, but are they persons? Are they sentient? What does it even mean to be a person? These questions are important to philosophy, and they can be examined by looking at animals on this planet here and now, as well as at these pie-in-the-sky conceptual situations.

1

bumharmony t1_j5t13ma wrote

To my understanding, personhood is, in simple terms, the set of abilities that makes an agent: the ability to create and follow rules and to form a conception of the good. And it is the atomistic type of self: indivisible. That is why governments, companies, or states cannot be persons; they can be divided into smaller pieces, and parts of them are most likely ignored in majority-rule situations. For that reason alone, representative democracy fails in my view.

1

lunartree t1_j5ssmcf wrote

> its just philosobabble

To translate this into a more specific argument: this debate operates solely within the realm of metaphysics, which is completely detached from the real-world material issues tied up in the social construct of personhood. This means that debate on this matter will not reveal wisdom applicable to the real-world problems that the concept of personhood relates to, which leads the reader to question the purpose of spending time having this conversation.

3

kgbking t1_j5sq3op wrote

>I would privilege Chinese people if I were Chinese, or females if I were female

I agree with you. Because I am an American male, I too privilege men over women and Americans over all other nations. America and men first! (facepalm and /s)

>If I am what I am

Are you not ignoring how your identities are constructs?

2

Nenor t1_j5snqsc wrote

It's a foundational decision for a legal system, for sure. But having legal personhood for companies has far more benefits than drawbacks.

For example, if we hadn't decided to have it, it wouldn't be possible for companies to enter into contracts. It would have to be one (or more) persons signing the contract in their own names. Companies wouldn't be able to buy equipment; it would have to be owned by certain individuals in the company. And since those individuals cannot enter a labor contract with the company, they could leave at any time, taking the equipment with them.

Without personhood, you would also forgo the concept of limited liability. Every investor would then be fully liable for all debts of the company, not just up to the amount they chose to invest.

Without personhood, you would also be unable to sue companies; you would have to sue individuals instead.

There are plenty of other examples, but these are the most important ones.

1

simonperry955 OP t1_j5sjxa1 wrote

It's a very good question - how do we link "everyday" goals with evolutionary ones? In a way, it doesn't matter too much for the paradigm. We survive in order to reproduce; we thrive to survive. Each one is pursued for its own sake. Thriving covers everyday goals. It's not often that we are faced with survival or reproduction problems. That works well enough.

I feel we're at or close to the stage where any aspect of morality can be theorised if not mechanised or codified.

A goal that would "destroy the moral system" is win-lose competition rather than win-win mutualism. We see this from Mr Putin, for example, a completely amoral person.

1

simonperry955 OP t1_j5sjevi wrote

I think what you're talking about is promoting long term personal well-being - skillful action. Skillful action feels good.

Good point - if morality is about others, why do it? Where's the benefit for me? Because, like you say, what works for others works for me; and we live in a closely interdependent world, where what is good for you is good for me.

The version of utilitarianism that can be derived under this evolutionary paradigm specifies benefits for the self as well as others.

1

AnticallyIlliterate t1_j5sghet wrote

On moral anti-realism

Moral anti-realism is the view that moral statements, such as “murder is wrong” or “honesty is good,” do not correspond to any objective moral facts or values that exist independently of human opinions or beliefs. This is in contrast to moral realism, which holds that moral statements do correspond to objective moral facts or values.

One of the main arguments for moral anti-realism is that there is no way to objectively verify or falsify moral claims. For example, it is not possible to conduct a scientific experiment to prove that murder is wrong or to measure the “goodness” of honesty. This contrasts with scientific claims, which can be tested and verified through experimentation and observation.

Another argument for moral anti-realism is that moral beliefs and values are culturally relative and vary widely across different societies and historical periods. This suggests that moral beliefs are not based on any objective moral facts, but rather on the cultural and historical context in which they are held.

One version of moral anti-realism is called subjectivism, which holds that moral statements express the personal opinions or feelings of the person making them. According to subjectivism, there are no objective moral facts or values, but rather, moral statements are simply expressions of the speaker’s personal views.

Another version of moral anti-realism is called relativism, which holds that moral statements are true or false relative to a particular culture or society. According to relativism, there are no objective moral facts or values that hold true across all cultures or societies.

A third version of moral anti-realism is called expressivism, which holds that moral statements are not intended to describe any moral facts or properties but instead to express the speaker’s attitudes or feelings. Expressivists believe that moral statements are not truth-apt, that is, they don’t purport to be true or false, but instead express the speaker’s moral attitudes or feelings.

Moral anti-realism has been criticized by moral realists, who argue that it fails to provide a coherent account of moral language and ethical reasoning. They argue that moral anti-realism is unable to explain how moral statements can be meaningful or have any practical implications if they do not correspond to any objective moral facts or values.

Despite these criticisms, moral anti-realism continues to be a widely debated topic in philosophy and ethics. It is an important perspective to consider when examining the nature of morality and the foundations of ethical reasoning.

1

BernardJOrtcutt t1_j5s9127 wrote

Your comment was removed for violating the following rule:

>Be Respectful

>Comments which consist of personal attacks will be removed. Users with a history of such comments may be banned. Slurs, racism, and bigotry are absolutely not permitted.

Repeated or serious violations of the subreddit rules will result in a ban.


This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.

1

tkuiper t1_j5rxzqf wrote

This is very interesting, and I confess I haven't fully read the details yet, but:

-I'm curious what your thoughts would be on trying to codify and apply the theories as a sort of science-grounded morality.

-A detail I feel is very important within this concept: while the environment, and therefore the moral structures, can change over evolutionary time, humans alive now have a somewhat fixed inherited moral code. I.e., the cardinal directions don't spin for an individual.

-It'd be interesting to take this sort of evolutionary/biological perspective on what our actual goals are rooted in. Especially because goals operate somewhat independently of the moral structure, it could be a fascinating exercise for understanding psychology or predicting non-human value structures.

-I'd also speculate about whether goals, or some more basic elements of goals, are inherited like morals, and whether any goal can be included so long as it doesn't directly focus on destroying the moral system.

1