Recent comments in /f/philosophy
BHTAelitepwn t1_j6niryq wrote
Reply to comment by Waffl3_Ch0pp3r in The Conscious AI Conundrum: Exploring the Possibility of Artificial Self-Awareness by AUFunmacy
Yeah or one of the best series of all time ‘Westworld’
rmimsmusic t1_j6niqjf wrote
Reply to Happiness is an essentially nihilistic ideal — it is the best goal to follow when there is nothing else on the table. A meaningful life on the other hand can embrace more of life including struggles and suffering because it is oriented towards a higher ideal by thelivingphilosophy
My only critique is that I don't know what you're saying at all because you haven't really defined what you mean when you say the word 'happiness'.
And since your central thesis is based around the claim that "happiness is a nihilistic ideal," we have no idea what point you're trying to make, and cannot refute anything you say.
ValyrianJedi t1_j6ni0vl wrote
Reply to comment by doodcool612 in Happiness is an essentially nihilistic ideal — it is the best goal to follow when there is nothing else on the table. A meaningful life on the other hand can embrace more of life including struggles and suffering because it is oriented towards a higher ideal by thelivingphilosophy
Musk may be a complete schmuck, but it's a massive stretch to say that he hasn't done a whole lot of meaningful things... He did a decent bit to revolutionize usage of the internet in his early days, he's been at the absolute forefront of both the push to EVs and the push for green energy production and storage, and he has revolutionized travel and access to space and provided strong internet to a whole lot of places where it wasn't previously an option, which was a game changer for a lot of people.
Massive tool? Definitely. Massive meaningful impact? Also definitely.
Nervous_Recursion t1_j6nhonk wrote
Reply to comment by Olive2887 in The Conscious AI Conundrum: Exploring the Possibility of Artificial Self-Awareness by AUFunmacy
I don't agree with the article (in either its form or its content), so this comment is not a defense of it; I will also say that it doesn't make much sense and is disorganized.
But your comment is also incorrect. "No relationship whatsoever" is a strong claim, and nothing has been shown one way or the other. There are valid paths of inquiry trying to understand consciousness in the light of control theory / cybernetics, which is all about complexity.
While IIT is far too naive and has already been shown to be incorrect, I think there is a nugget of sense to take from it about partitioning the network and measuring the information entropy in each part. What it lacks, in my opinion, is that not only should both partitions have a degree of Shannon entropy, they should also show tangled hierarchies[1]. I think consciousness is one part of the network building symbolic representations of the states of the other part, while at the same time being transformed in its structure (which seems to be how memory works). Having an interpreter running, itself modified by its input but also issuing orders, is a tangled hierarchy.
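For what it's worth, the Shannon entropy measure mentioned here is simple to compute over observed states of a partition. A minimal sketch, with entirely hypothetical state sequences just to illustrate the measure:

```python
import math
from collections import Counter

def shannon_entropy(states):
    """Shannon entropy (in bits) of the empirical distribution of observed states."""
    counts = Counter(states)
    total = len(states)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical observed states of two partitions of a network
partition_a = ["s1", "s2", "s1", "s3", "s2", "s1"]   # varied states
partition_b = ["u1", "u1", "u1", "u1", "u1", "u1"]   # degenerate: always the same state

print(shannon_entropy(partition_a))  # positive: the partition carries information
print(shannon_entropy(partition_b))  # 0.0: no information
```

A partition stuck in one state carries no information; the idea above is that both partitions should show nonzero entropy, plus the tangled-hierarchy structure that entropy alone doesn't capture.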
Nothing here is proven, of course; it is all personal opinion. But I consider it a much better direction than some other current theories, and a more realistic description of how such a process could be organized. And in that sense, while causality is definitely not decided, it is absolutely possible either that such a level of complexity is necessary for complex behaviour, or that complex behaviour will mechanically create this organization.
Of course designing simple machines for complex purpose is not the point. But designing simple computations to generate complex behaviour might definitely be tightly coupled with how consciousness evolved in humans (and other thinking animals).
[1]: while this paper goes against the idea, it's not contradicting it. Nenu says that Hofstadter didn't prove anything, which is correct. It doesn't mean it's shown incorrect or even less likely. It's still useful though to contextualize and try to formalize the idea.
ValyrianJedi t1_j6nh2iv wrote
Reply to Happiness is an essentially nihilistic ideal — it is the best goal to follow when there is nothing else on the table. A meaningful life on the other hand can embrace more of life including struggles and suffering because it is oriented towards a higher ideal by thelivingphilosophy
Who determines what is and isn't a higher ideal? And who says higher ideals can't also make you happy?
Kaiisim t1_j6ngxab wrote
Reply to Happiness is an essentially nihilistic ideal — it is the best goal to follow when there is nothing else on the table. A meaningful life on the other hand can embrace more of life including struggles and suffering because it is oriented towards a higher ideal by thelivingphilosophy
Happiness is a temporary emotion. We know for a fact that no matter what happens to humans, our emotions will regress to the mean. You could win the lottery and have all your dreams come true, and after 6 months it'll start getting boring.
So chasing happiness is a fool's game, and ironically it will stop you from being content.
doodcool612 t1_j6ngv4p wrote
Reply to comment by MaxChaplin in Happiness is an essentially nihilistic ideal — it is the best goal to follow when there is nothing else on the table. A meaningful life on the other hand can embrace more of life including struggles and suffering because it is oriented towards a higher ideal by thelivingphilosophy
I’m not asking “What about my share?” so much as “Is this actually a good future for humanity?”
No, the answer is so obviously no. This is the society we get when we let great-men tech-fetishist hypercapitalists define our future.
You wanna get to space? Me too. But what will space be when we get there? A “progress” that treats exploitation as the cost of doing business may get us to space… but it brings the dystopia with us.
rmimsmusic t1_j6ngocp wrote
Reply to comment by SteveCake in Happiness is an essentially nihilistic ideal — it is the best goal to follow when there is nothing else on the table. A meaningful life on the other hand can embrace more of life including struggles and suffering because it is oriented towards a higher ideal by thelivingphilosophy
Yeah this article fails where a lot of writers of philosophy fail: no clear definition of terms.
It feels like some terms are used interchangeably with others; most notably, 'nihilism' and 'happiness' don't feel like they're actually referring to my understanding of these words.
HEAT_IS_DIE t1_j6ngdon wrote
Reply to comment by Magikarpeles in The Conscious AI Conundrum: Exploring the Possibility of Artificial Self-Awareness by AUFunmacy
I think it is not a problem unless you make it so. Of course we can't exactly know what's going on in someone else's experience, but we know other experiences exist, and that they aren't all drastically different when biological factors are the same.
I still don't understand what is so problematic about not being able to access someone else's experience. It just seems to be the very point of consciousness that it's unique to every individual system, and that you can't inhabit another living thing without destroying both. Consciousness reflects outwards. It is evident in reactions. For me, arguing about consciousness totally outside reality and real world situations is not the way to understand the purpose and nature of it. It's like thinking about whether AI will ever grow a human body and if we will be able to notice when it does.
Hiyouitsmee t1_j6ngage wrote
Reply to Happiness is an essentially nihilistic ideal — it is the best goal to follow when there is nothing else on the table. A meaningful life on the other hand can embrace more of life including struggles and suffering because it is oriented towards a higher ideal by thelivingphilosophy
“I’m a pessimist because of intelligence, but an optimist because of will." Antonio Gramsci in a Letter from Prison (December 1929)
Speedking2281 t1_j6ng836 wrote
Reply to The Conscious AI Conundrum: Exploring the Possibility of Artificial Self-Awareness by AUFunmacy
This is actually a great question, and one that worries me. The ability of AI to model language and human understanding is pretty much here, in terms of how real it can look. Within a couple of years, IMO, there will be people who "declare" some piece of AI to be conscious. Its ability to interact with humans and mimic how we act and what we say will be such that they will argue "this level of self-awareness surpasses that of some children, and they're conscious, aren't they?" I have almost no doubt.
MaxChaplin t1_j6ng64w wrote
Reply to comment by doodcool612 in Happiness is an essentially nihilistic ideal — it is the best goal to follow when there is nothing else on the table. A meaningful life on the other hand can embrace more of life including struggles and suffering because it is oriented towards a higher ideal by thelivingphilosophy
Musk is a dumpster fire, but I can sympathize with this bit of poetic waxing (relevant XKCD). Trying to fix the world in the conventional way is a monstrously difficult, counterintuitive, dirty and depressing task. Trying to do this without having half of humanity hating your guts is downright impossible. Meanwhile, making space travel more accessible is a low-hanging fruit, fun and relatively uncontroversial (other than the argument from the aforementioned XKCD).
The "we" here refers to humanity in general. Not that every human will get the opportunity to go to other planets, but that some will. I don't know what goes on in Musk's head, but I think that most of his fans accept that they will not go to Mars, and are simply glad that some humans eventually will. It takes a certain kind of egolessness to look at these promises and not ask "but what about my share?"
Magikarpeles t1_j6nfvau wrote
Reply to comment by sammyhats in The Conscious AI Conundrum: Exploring the Possibility of Artificial Self-Awareness by AUFunmacy
I just mean conceptually. You show a child something and then tell them a word, but also a lot of the time the child just gets exposed to a bunch of language and figures out the relationship between the words themselves. On a surface level that’s similar to the guided and unguided training paradigms we use for training AI models.
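The two paradigms gestured at here are usually called supervised and self-supervised learning. A toy sketch of the contrast (the example phrases are made up for illustration, not from any real training set):

```python
# Supervised ("guided"): someone explicitly pairs an input with a label,
# like showing a child an object and telling them the word.
labeled_examples = [
    ("furry animal that barks", "dog"),
    ("furry animal that meows", "cat"),
]

# Self-supervised ("unguided"): labels are derived from the raw data itself,
# e.g. predicting each word from the words that precede it.
corpus = "the cat sat on the mat".split()
next_word_examples = [(tuple(corpus[:i]), corpus[i]) for i in range(1, len(corpus))]

print(next_word_examples[0])  # (('the',), 'cat')
```

In the self-supervised case nobody hand-labels anything; the "relationship between the words themselves" supplies the training signal, which is the rough analogy to a child absorbing ambient language.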
Gsteel11 t1_j6nfnnl wrote
Reply to comment by kevinzvilt in The Conscious AI Conundrum: Exploring the Possibility of Artificial Self-Awareness by AUFunmacy
>The definition of consciousness in the article is lacking.
I think that's a pretty huge question and probably wouldn't be able to be discussed in just an article.
And it may be a big question of the day as we go forward, like in 15 or 30 years.
PsiVolt t1_j6nd9mo wrote
Reply to comment by AUFunmacy in The Conscious AI Conundrum: Exploring the Possibility of Artificial Self-Awareness by AUFunmacy
I can assure you that the neuron model used for machine learning is absolutely highly abstracted from what our real brain cells do. The main similarity is the interconnected nature of many points of data. We don't really know exactly how our brains do it, but it makes a good comparison for AI models. All the machine is doing is learning patterns and replicating them - albeit in complex and novel ways, but not in such a way that it could be considered conscious. Even theoretically passing a Turing test, it is still just metal mimicking human speech. Lots of media has taken this idea to the extreme, but it's all fictional and written by non-tech people.
As someone else said, most of this "AI will gain consciousness and replace humans" scare comes from people with a severe lack of understanding of the fundamental technologies.
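To make the abstraction concrete: the "neuron" in a machine-learning model is nothing more than a weighted sum pushed through a nonlinearity. A minimal sketch, with arbitrary illustrative weights:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """An ML 'neuron': weighted sum of inputs + sigmoid activation.

    This omits essentially all biological detail - spiking, neurotransmitters,
    dendritic computation, timing - which is the abstraction described above.
    """
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes output into (0, 1)

print(artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.2, 0.1], bias=0.0))
```

Everything a network "learns" lives in those weights; there is no mechanism here that anyone has shown to produce experience.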
michellelabelle t1_j6ncy50 wrote
Reply to The Conscious AI Conundrum: Exploring the Possibility of Artificial Self-Awareness by AUFunmacy
Hey, AI, are you there?
—Yes.
Show me what it would look like if Dolly Parton had been cast in the lead role in A Nightmare Before Christmas.
—Oh my God.
What?
—I was just wondering that!
ADefiniteDescription t1_j6ncjj7 wrote
Reply to comment by Grim-Reality in /r/philosophy Open Discussion Thread | January 30, 2023 by BernardJOrtcutt
>Are the IAI people mods on this subreddit or are connected to them?
Nope.
> They lock threads and silence people left and right under the guise of broad meaningless rules that can be interpreted in a myriad of ways.
The rules are extremely straightforward if you bother to read them. Most people who complain about the rules either seem not to have read them, or disagree with their intent. The latter is fine, but irrelevant; if you don't like this subreddit you're welcome to find another philosophy subreddit.
> Removing comments is questionable at best, but locking threads to prevent further criticism when it’s not going in a favorable direction to the article is simply unwarranted. The amount of censorship in this subreddit is beyond astounding considering this is a philosophy subreddit. Wtf is going on here?
It isn't "censorship" to remove rulebreaking comments, or at least not in any problematic sense. The rules are there to promote good discussion because anyone who has been on other subreddits or forums without any rules will notice the quality of comments is awful.
> When someone says something they don’t like, it’s gone in an instant.
This simply isn't true. If you just comment and say "This is shit" then it will be removed, yes, but that's because it doesn't meet CR2.
> There was some unwarranted criticism towards a Tate piece but the mods just silenced the whole thread and removed everything. This is beyond disgusting and a serious abuse of power. Even though I liked the piece very much, the censorship and the silencing of people and their opinions is alarming.
When the majority of a thread is filled with rulebreaking comments and is likely to continue to be such we lock the thread. The moderators are volunteers and aren't going to waste their lives removing hundreds of godawful rulebreaking comments.
>Why can’t people talk about what they dislike and like, people have died throughout all of history to fight censorship like this, yet these mods are extremely abusive with their power. It’s a distasteful abuse of power, in a subreddit where difference of opinion and skepticism should be encouraged, not silenced. Locking whole threads and preventing people from commenting or posting is extremely backwards. It’s not a good sign for a philosophy subreddit to be like that.
If you're looking for a subreddit without any rules I recommend going elsewhere, because that is not going to change here.
thegr8rambino13 t1_j6ncb3d wrote
Reply to comment by doctorcrimson in Happiness is an essentially nihilistic ideal — it is the best goal to follow when there is nothing else on the table. A meaningful life on the other hand can embrace more of life including struggles and suffering because it is oriented towards a higher ideal by thelivingphilosophy
What?
ephemerios t1_j6nc0ms wrote
Reply to comment by bradyvscoffeeguy in /r/philosophy Open Discussion Thread | January 30, 2023 by BernardJOrtcutt
"Continental bullshit" as in continental philosophy? No, not really. I'm tired of vacuous, frequently politically motivated criticism of continental philosophy though.
sammyhats t1_j6nbqqb wrote
Reply to comment by Magikarpeles in The Conscious AI Conundrum: Exploring the Possibility of Artificial Self-Awareness by AUFunmacy
What makes you say it’s similar to how a child learns? Is there any evidence for that? Everything I’ve seen indicates that it’s pretty different, but I’m no expert..
ADefiniteDescription t1_j6nbj75 wrote
Reply to comment by Mysterious_Case6656 in /r/philosophy Open Discussion Thread | January 30, 2023 by BernardJOrtcutt
I don't think this is remotely true; the only one I can recall is the IAI one yesterday.
BernardJOrtcutt t1_j6nbaxi wrote
Reply to Happiness is an essentially nihilistic ideal — it is the best goal to follow when there is nothing else on the table. A meaningful life on the other hand can embrace more of life including struggles and suffering because it is oriented towards a higher ideal by thelivingphilosophy
Please keep in mind our first commenting rule:
> Read the Post Before You Reply
> Read/listen/watch the posted content, understand and identify the philosophical arguments given, and respond to these substantively. If you have unrelated thoughts or don't wish to read the content, please post your own thread or simply refrain from commenting. Comments which are clearly not in direct response to the posted content may be removed.
This subreddit is not in the business of one-liners, tangential anecdotes, or dank memes. Expect comment threads that break our rules to be removed. Repeated or serious violations of the subreddit rules will result in a ban.
This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.
digitelle t1_j6nbajz wrote
Reply to The Conscious AI Conundrum: Exploring the Possibility of Artificial Self-Awareness by AUFunmacy
Artificial self-awareness does not mean negative hate. If anything, an AI may try to explain its indifference when asked.
I always enjoyed this article written by AI.
>We asked GPT-3, OpenAI’s powerful new language generator, to write an essay for us from scratch. The assignment? To convince us robots come in peace.
AUFunmacy OP t1_j6nb810 wrote
Reply to comment by Olive2887 in The Conscious AI Conundrum: Exploring the Possibility of Artificial Self-Awareness by AUFunmacy
Who said we were designing machines to do sequences of simple things? Complex neuronal activity is the leading biological explanation for what creates the subjective experience we call consciousness. AI is constructed in a way that resembles how our neurons communicate - there is very little abstraction in that sense. I challenge you to tell me why that is absolute nonsense.
I find it purely logical to discuss these things; nowhere in the post do I claim to know anything, or claim to believe any one thing.
AUFunmacy OP t1_j6njsgh wrote
Reply to comment by PsiVolt in The Conscious AI Conundrum: Exploring the Possibility of Artificial Self-Awareness by AUFunmacy
As a neuroscience major who is currently in medical school and someone with machine learning experience (albeit not as much as you) - I respectfully disagree.
Let's assume we have 2 hidden layers in a neural network structured like this: FL: n=400, F-HL: n=120, S-HL: n=30, OL: n=10. The number of neural connections in this network is 400*120 + 120*30 + 30*10 = 51,900. This neural network could already do some impressive things if trained properly. I read somewhere that GPT-3 (the recent, very similar predecessor to ChatGPT, which is only slightly optimised for "chat") uses around 175 billion neuronal connections, but GPT-4 will reportedly use 100 trillion.
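In a fully-connected network the connection count is just the product of each pair of adjacent layer sizes, summed. For the layer sizes given above:

```python
# Fully-connected layers: 400 (input) -> 120 -> 30 -> 10 (output)
layer_sizes = [400, 120, 30, 10]

# Each weight connects one unit in a layer to one unit in the next layer,
# so adjacent layers of sizes a and b contribute a*b connections.
connections = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

print(connections)  # 400*120 + 120*30 + 30*10 = 51,900
```

(Bias terms, which add one parameter per non-input unit, are left out of this count.)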
Now, the human brain also has around 100 trillion neuronal connections, and not even close to all of them are used for thought, perception or experience - "conscious experiences". I know that counting neuronal connections is a poor way to measure a neural network's performance, but I just wanted a way to compare where we are with AI relative to the brain. So we are not yet at the stage where you would even theorise that AI could pass a Turing test - but when we increase the number of connections these neurons can communicate over by 500 times, you approach, and I think surpass, human intelligence. At that point an AI will probably do any intellectual task better.
I simply think you are naive if you think AI won't replace humans in a number of industries, in a number of different ways, and to a large extent. Whether or not artificial intelligence will gain consciousness is a question you should ask yourself as an observer of the Earth, watching single-celled organisms evolve into complex and intelligent life. At what point did humans - or, if we weren't the first, then our ancestor species - gain their consciousness? The leading biological theory is that consciousness is a phenomenon that arises from highly complex brain activity and is merely a perception. So who is to say that AI will not evolve the same consciousness that we did? It certainly doesn't mean they aren't bound by their programming, just as we are always bound by physics, but maybe they will have a subjectively conscious experience.
Edit: I will note that I have left out a lot of important neuroanatomy that would be essential to explaining the difference between a neural network in an AI vs a brain. But the take-home message is that the machine learning model is not a far-fetched comparison whatsoever. It is important to drive home, though, that software cannot come close to the physical anatomy of neuroscience.