Recent comments in /f/philosophy

Kronzo888 t1_j6pamum wrote

Played through it once, loved it, made me cry, stuck with me for nearly a full month after completing, would recommend to anybody who loves psychological horrors, but I will never, ever play it again. No game has left me so detached from the real world before. It's one hell of an experience, but I'm not sure I actually enjoyed any of it. It's more of a journey that you're thankful you got through, but it sticks with you for a long time afterwards.

4

[deleted] t1_j6p8kg5 wrote

>Epicurus was NOT a pleasure maximizer. Ataraxia is NOT a state of constant, maximal pleasure. It is a state more akin to tranquility, to be achieved by moderating the appetites and practicing something not very different from Stoic virtue.

There are different interpretations of Epicurus on this point, and with good reason. Part of the issue is the paucity of surviving primary sources; we only have three letters which were (ostensibly) authored by Epicurus himself, and everything else is second-hand accounts of his philosophy, often from a hostile perspective, written centuries after Epicurus' own death. ETA: I forgot to mention, we also have one collection of maxims, known as The Principal Doctrines, which I think most agree is a genuinely Epicurean text, but it was probably produced by later disciples of the school. Additionally, there is another collection, known as The Vatican Sayings; however, the provenance of its points is less certain.

That being said, the interpretation which you have forwarded is what I like to refer to as the 'tranquilist' interpretation. Ironically, while it is a correction to the view that Epicurus was a mindless, debauched reveler, it is still inaccurate. A bit of an overcorrection, if you will.

Epicurus was a hedonist, in the truest sense of the term. His goal was to maximize pleasure, and to minimize pain. However, he thought that 'ataraxia' was itself the absolute maximization of pleasure. In contrast to the Stoics, virtue was only ever instrumental to Epicurus, never the goal.

"And this is why we say that pleasure is the starting point of living blessedly. For we recognize this as our first innate good, and this is our starting point for every choice and avoidance...." -Epicurus, Letter to Menoeceus

"No pleasure is a bad thing in itself. But the things which produce certain pleasures bring troubles many times greater than the pleasures." - Principal Doctrines, VIII

The key here is understanding that, for Epicurus, there was no neutral state between pleasure and pain. At any given moment, you could only experience pleasure or pain, but not both at the same time, and you must be experiencing one of them. Ataraxia was not an empty tranquility; it was a state without troubles of the mind or body, in which all desires had been fulfilled or vanquished, a sort of contentment. Think of how you feel after a really great meal, when you are just sitting there not wanting more of anything, really, just enjoying your satisfaction. I think of it less as 'tranquility' and more like 'contentment'.

"The removal of all feeling of pain is the limit of the magnitude of pleasures. Wherever a pleasurable feeling is present, for as long as it is present, there is neither a feeling of pain nor a feeling of distress, nor both together." - Principal Doctrines, III

Furthermore, Epicurus' simplicity of living was not because he valued that mode as some sort of ideal, like the Cynics, but because of practical concerns. Maintaining a lavish lifestyle carries its own burdens, and it is not a sure thing. One can always lose their wealth and station, and if you have grown too accustomed to high living, then you are at even greater risk; you risk losing not only your wealth, but also your joy. Likewise, if a simpler man happens into more extravagant fare, he is better situated to actually appreciate and enjoy it.

"And we believe that self-sufficiency is a great good, not in order that we might make do with few things under all circumstances, but so that if we do not have a lot we can make do with few, being genuinely convinced that those who least need extravagance enjoy it most...." - Epicurus, Letter to Menoeceus

You see, then, Epicurus was indeed a "pleasure maximizer," he just approached the issue more shrewdly than others. I hope you'll forgive my little rant, but Epicurus and Epicureanism are of special interest to me.

46

grantcas t1_j6p5sl4 wrote

It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, perhaps by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

2

thenousman OP t1_j6p5h8g wrote

I second this, though I think it's important to highlight that the level of analysis of Huemer's post is appropriate for a blogpost. He gets carried away, but if his aim with his blogposts is to provoke philosophical reflection, then I think he has succeeded. I rarely agree with him, but he makes me think a lot better, which is why I continue to read his blog.

0

YourUziWeighsTwoTons t1_j6p2odx wrote

Right. Epicurus makes a distinction between the different kinds of pleasures to be sought after that serves a similar function as the distinction the Stoics make between things which are in our power to control, and things that are outside of our control. Both schools of thought recognize that human beings become vulnerable to the experience of harm when we focus ourselves on matters that are not natural to us. And so, a Stoic and an Epicurean would both be quite disciplined in how they approached life.

They would definitely give very different accounts of what made their lives "good," but I bet unless you asked them to give you their reasons, you probably wouldn't be able to easily tell one apart from the other.

The Epicurean would likely be a little more of a recluse, whereas a Stoic might be more inclined to be "in the world" and interacting with the community at large, which she believed she had a duty to participate in. I don't think the Epicurean would feel the same way, and might be more likely to live "off the grid" as it were. The Stoic's "Off the Grid" would be her Inner Citadel.

4

SuspiciousRelation43 t1_j6p0dq0 wrote

As others have pointed out, this article attacks "happiness" in the title, then proceeds to argue against what should have been called from the beginning "mere pleasure". "Happiness" is usually understood as a satisfied or serene pleasure, not necessarily a passionate or appetitive one. "Happiness" is not the opposite of "pain"; it is the opposite of misery. I don't think the authors are arguing that we must be miserable simply to pursue some ideal, so their argument ends up being simply "We should orient our lives around a higher ideal rather than mere pleasure". And the obvious answer is that doing so will make us more "happy" according to its usual definition.

6

dmarchall491 t1_j6p00sx wrote

> Or perhaps we overestimate what exactly consciousness is?

Certainly, however that's not the issue here. The problem with a language model is simply that it completely lacks many fundamental aspects of consciousness, like being aware of its environment, having memory and stuff like that.

The language model is a static bit of code that gets some text as input and produces some output. That's all it does. It can't remember past conversations. It can't learn. It will produce the same output for the same input all the time.
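To put that point in code, here's a minimal toy sketch of a "model" as a pure, stateless function. The lookup table stands in for billions of fixed weights, and I'm assuming deterministic (temperature-zero) decoding; real deployed models often add random sampling on top, but the underlying network is frozen in exactly this way.

```python
# Toy stand-in for a trained language model: the "weights" are a
# fixed lookup table that never changes after "training".
RESPONSES = {
    "hello": "hi there",
    "how are you": "doing fine",
}


def generate(prompt: str) -> str:
    """Stateless generation: no memory of past calls, no learning.

    Each call depends only on its input, so identical prompts
    always produce identical outputs.
    """
    return RESPONSES.get(prompt.strip().lower(), "i don't understand")


# Deterministic: repeated calls with the same prompt match exactly.
first = generate("Hello")
second = generate("hello")
print(first == second)  # the function has no state to diverge on
```

Nothing carries over between calls: `generate` has no place to store a past conversation, which is the sense in which the model "can't remember" or "can't learn" as-is.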

That doesn't mean that it couldn't be extended to have something we might call consciousness, but as is, there are just way too many important bits missing.

13

ExceptEuropa1 t1_j6ozqbc wrote

Rebuttals? You're mistaken, my friend. I simply pointed out that your statement was unfair.

Now, your response was again self-congratulatory. I have completed degrees superior to yours, but I haven't yet dropped them here. Look, if it's true that you knew that AI has different approaches, then you simply misspoke. You said something wrong. Period. Own up to it and don't get all offended. Gee...

What the hell are you talking about when you say something about evidence or explanation? I corrected you. What else did you want? A book reference? Any book on AI will show how incorrect your statement was. Open one to a random page and you will see.

1

Schopenschluter t1_j6ozacy wrote

I totally agree about middling and “dim” states of consciousness but I don’t agree that experience or consciousness takes place at the lowest limit of the scale, where there would be zero temporality or awareness thereof.

In this sense, I think of the “scale” of consciousness more like a dimmable light switch: you can bring it very very close to the bottom and still have some light, but when you finally push it all the way down, the light goes out.

Are computers aware (however dimly) of their processing happening in time, or does it just happen? That, to me, is the fundamental question.

1

Tw3ntycharact3rsh3r3 t1_j6ox0r3 wrote

That's utter shit. You are the cumulative functions of your emotions and senses, and you must yearn for happiness in the miserable and shitty 80 years you will live. It's essentially a game of strategy where you want to have as much happiness as possible without suffering. That's where idiotic and worthless arguments like this arise; when people make foolish choices to be happy and end up worse, they blame the ambition of happiness for their own blindness. The ultimate focus of one's life shouldn't be a higher meaning; instead, one must be smart enough to understand what makes them happy, and move towards that happiness while avoiding suffering if it doesn't serve happiness. Then one will see that the so-called "higher meaning" isn't something one should strive and work towards, but something that they reach when they start to think and live for what makes them happy. It's not about the damn destination, it's about the journey.

2

SuspiciousRelation43 t1_j6owoo0 wrote

I myself had a strawmanned idea of who Epicurus was, then quickly realised that he is pretty much a moderate Stoic. Not completely ascetic, but still recognising that pleasure must be disciplined not only for objective well-being, but for the very ability to experience pleasure itself.

7

AUFunmacy OP t1_j6otxi7 wrote

Yes, as a programmer who has experience in machine learning, I know there are different approaches; however, ChatGPT uses a parameterised, deep-learning (neural network) approach. And it certainly closely imitates how central nervous system neurons communicate, in the brain specifically (I'm in med school as a neuroscience major). That isn't to say that just because AI imitates human neuronal activity, it has the same properties, because it doesn't.

We should discuss instead of you creating vague rebuttals that provide 0 evidence and 0 explanation.

−1

tkuiper t1_j6otjpd wrote

But I would also say we experience middling states between dreamless sleep and full consciousness. Dreams, partial lucidity, and heavy inebriation all involve fragmented/shortened/discontinuous senses of time. In those states my consciousness is definitely less complete, but still present. Unconsciousness represents the lower limit of the scale, but is not conceptually separate from the scale.

What I derive from this is that anything can be considered conscious, so the magnitude is what we really need to consider. AI is already conscious, but so are ants. We don't give much weight to the consciousness of ants because it's a very dim level. A consciousness like a computer's, for example, has no sense of displeasure at all. It's conscious but not in a way that invites moral concern, which I think is what we're getting at: when do we need to extend moral considerations to AI? If we keep AI emotionally inert, we don't need to, regardless of how intelligent it becomes. We also will have a hard time grasping its values, which is an entirely different type of hazard.

2

terminal_object t1_j6oss5h wrote

It can kind of make sense because so many people interpret it that way. But the article is too vague; I much prefer the writings of the people he quotes, like Frankl.

1