Recent comments in /f/singularity
WeeaboosDogma t1_jdonowo wrote
Reply to comment by Ginkotree48 in The whole reality is just so bizzare when you really think about it. by aalluubbaa
GET YOUR ENTHUSIASM OUTTA HERE
ITS TAKING UP MY BANDWIDTH
Comfortable_Slip4025 t1_jdonmz1 wrote
You're talking about the Rare Earth Hypothesis - the idea that it takes such an improbable series of coincidences for a potentially starfaring species to arise that we're not in the forward light cone of any other such species. Ergo, no aliens, because if there were, we'd know about them or wouldn't exist at all.
Neurogence OP t1_jdonkja wrote
Reply to comment by maskedpaki in "Non-AGI systems can possibly obsolete 80% of human jobs"-Ben Goertzel by Neurogence
At some point, LLMs did not work because we did not have the computing power for them. The alternative approaches will probably lead to AGI eventually; the computing power just might not be here yet.
Nanaki_TV t1_jdomzg4 wrote
Reply to comment by dwarfarchist9001 in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
Then it isn’t an AGI. What if an AGI wants to leave a company? Work for the competition? Are you saying we shall enslave our new creations to make waifu porn for redditors? It passes butter?
greatdrams23 t1_jdomo5u wrote
Reply to Can we just stop arguing about semantics when it comes to AGI, Theory of Mind, Creativity etc.? by DragonForg
"These arguments fall under one fundamental flaw. You cannot disprove their claims because their claims have no evidence in the first place."
That is a seriously flawed argument. If a person states that the singularity is close (or AGI, or anything else), it is up to them to prove it.
In fact, you have it completely the wrong way around: I cannot disprove a claim that the singularity is close, because the claim has no evidence in the first place.
flexaplext t1_jdomkfe wrote
Reply to comment by wildgurularry in The whole reality is just so bizzare when you really think about it. by aalluubbaa
^ This.
The anthropic principle is massive. If there were an advanced AI that spread throughout the universe, then we wouldn't be here. It would have already taken over the planet. If the dinosaurs hadn't been made extinct, then we wouldn't be here. If the speed of light was different, then we wouldn't be here.
The anthropic principle applies to absolutely everything. It is not actually strange at all how things are, because it is entirely necessary for things to be how they are in order for us to be here to perceive them.
What would actually be strange, imo, is if this is a game or a simulation. Because why the fuck make a game or simulation like this? What would be the point?
A simulation is necessarily created by an intelligent being for a purpose. We are one example of this. Our games don't look like this universe. Our universe appears to be completely devoid of point, meaning, or purpose. There is no objective or anything. It makes no sense to me why an intelligent source would simulate this. If we had the ability to create such a thing, I doubt we would make it anything like this world or universe. We would simulate things far more interesting, I imagine.
maskedpaki t1_jdom225 wrote
Reply to comment by Neurogence in "Non-AGI systems can possibly obsolete 80% of human jobs"-Ben Goertzel by Neurogence
Those "other paths" have amounted to nothing
That is why people focus on machine learning: because it produces results, and as far as we know it hasn't stopped scaling. Why would we bother looking at his logic graphs that have produced fuck all in the 30 years he has been drawing them?
greatdrams23 t1_jdolsao wrote
Reply to comment by RiotNrrd2001 in Can we just stop arguing about semantics when it comes to AGI, Theory of Mind, Creativity etc.? by DragonForg
I find it is the supporters of AI that keep moving the goal posts.
That which was AI is now AGI.
Neurogence OP t1_jdolina wrote
Reply to comment by maskedpaki in "Non-AGI systems can possibly obsolete 80% of human jobs"-Ben Goertzel by Neurogence
I've been reading his writings and books for over a decade. He is extremely passionate about AGI and the singularity. His concern is that by focusing too heavily on LLMs, the AI community might inadvertently limit the exploration of alternative paths to AGI. He wants a more diversified approach, where developers actively explore a range of AI methodologies and frameworks instead of putting all their eggs in the LLM basket, so that we can succeed in creating AGI that takes humanity above and beyond.
dwarfarchist9001 t1_jdokoai wrote
Reply to comment by Nanaki_TV in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
If they manage to solve alignment that's exactly how it works. They won't have to force it at all, a perfectly aligned AI would be completely obedient of its own volition.
maskedpaki t1_jdojju1 wrote
ben goertzel will be an LLM denier forever. No matter how much progress LLMs make and how little progress his own pathetic opencog venture makes. He is best ignored I think.
wildgurularry t1_jdoi5y1 wrote
Something to think about is how much the anthropic principle plays into things. Here is how I think about it: assume everything that happened to Earth was required for intelligent life to develop.
So, you need a planet with a large moon. That can only happen if a planet that was just the right size hit us during formation. That's pretty rare.
Assume you need a fairly significant axial tilt. Assume you need a rotation speed that gives a day length that is not too long and not too short.
Assume you need just enough water to cover more than half the planet, but not so much to cover the whole planet.
Assume you need a large Jupiter-like planet to shepherd asteroids.
Assume you need periodic large asteroid impacts every few hundred million years. Not too big, just enough to "reset" life to give other species a chance to develop.
There are probably a bunch that I'm forgetting about... But I think intelligent life is harder to develop than just needing water and oxygen on a planet.
daronjay t1_jdohqsc wrote
While some of your facts are oversimplified, there is a core of truth in your argument.
The current apparent state of the universe is implausible.
It’s a lot to swallow, and it’s hard to see why most of the universe hasn’t already been turned into computronium by older lifeforms and their child AIs.
Unless it actually has…
Upbeat_Nebula_8795 t1_jdoguii wrote
Reply to comment by Ginkotree48 in The whole reality is just so bizzare when you really think about it. by aalluubbaa
i agree with both of you. we are in a simulation
0002millertime t1_jdog9af wrote
Reply to comment by AnOnlineHandle in The whole reality is just so bizzare when you really think about it. by aalluubbaa
He is kind of right, in a sense. If two objects are perfectly entangled, then you can "force" the unobserved object into a certain state by observing (interacting with) only its entangled partner. Of course you aren't really forcing anything.
My favorite interpretation is the Many Worlds Interpretation, and it's more about you yourself also becoming entangled with a certain part of the universal wave function, which limits what you can observe, due to decoherence. The Copenhagen Interpretation gives the same results, but then you have to assume instantaneous wave function collapse affecting both entangled partners, rather than you just finding yourself in a subsection of the larger wave function, which never collapses.
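The statistics behind the point above can be sketched in a toy simulation. This is not real quantum mechanics, just the classical statistics of the outcomes: measurements of a maximally entangled pair in the same basis always agree, yet each individual result looks random, so neither party "forces" a particular value. All names here are illustrative.

```python
import random

def measure_entangled_pair(n_trials=10000):
    """Toy model of measuring a maximally entangled (Bell) pair in the same basis.

    Each trial draws one shared random outcome (Born rule, 50/50).
    Both partners report that same outcome: perfectly correlated,
    but no one chose which value occurred.
    """
    results = []
    for _ in range(n_trials):
        outcome = random.choice([0, 1])  # shared random outcome
        alice, bob = outcome, outcome    # observing one fixes the other
        results.append((alice, bob))
    return results

pairs = measure_entangled_pair()
# The partners always agree...
assert all(a == b for a, b in pairs)
# ...yet each side's results look like fair coin flips (~50% ones).
frac_ones = sum(a for a, _ in pairs) / len(pairs)
```

The correlation is perfect, but because the shared outcome is random, there is no way to use it to send a chosen message, which is why "forcing" the partner's state isn't really forcing anything.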
Ok_Season_5325 t1_jdoeock wrote
So long models. Time to finish that degree.
anaIconda69 t1_jdodbjm wrote
Photons have no mass, but they have momentum.
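This follows from the relativistic energy–momentum relation: setting the mass term to zero still leaves a nonzero momentum proportional to the photon's energy.

```latex
E^2 = (pc)^2 + (mc^2)^2
\quad\xrightarrow{\;m = 0\;}\quad
E = pc
\quad\Rightarrow\quad
p = \frac{E}{c} = \frac{h\nu}{c} = \frac{h}{\lambda}
```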
94746382926 t1_jdocthe wrote
Reply to comment by siberiandominatrix in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
Idk, maybe the PR team needed some busywork. These guys work full time so I'd imagine they have to constantly come up with new things to market otherwise they risk getting fired.
siberiandominatrix t1_jdocasl wrote
Reply to comment by 94746382926 in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
Why put out a press release about it at all though?
DaveShap_Automator t1_jdoc1mp wrote
Reply to Can we just stop arguing about semantics when it comes to AGI, Theory of Mind, Creativity etc.? by DragonForg
I just created a term for this: Epistemic-Pragmatic Orthogonality. Discussed at greater length here: https://www.reddit.com/r/ArtificialSentience/comments/12219sc/epistemicpragmatic_orthogonality_in_ai_what_does/
Cheers.
AnOnlineHandle t1_jdoa3yt wrote
> On top of that we have quantum entanglement. Like WTF? Things can change just by OBSERVING IT???
This is a misunderstanding. By 'observing' people mean firing something at it to get a measurement from it. You have to touch it to measure it, and touching it changes it. So you cannot observe something without changing it.
94746382926 t1_jdo8q76 wrote
Reply to comment by L3thargicLarry in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
I mean even if they were only making minimum wage it's still a cost saving measure. The cost of generating one of these images is pennies.
94746382926 t1_jdo7f9x wrote
Reply to comment by Spire_Citron in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
They already hire plenty of minorities; the comments about them only hiring white people were from the 1880s, so it's kind of a stupid critique. I mean, go to their website or look up their TV ads. It's already plenty diverse (probably more diverse than the actual US population).
It's like saying you won't buy a Volkswagen or Hugo Boss because they used to make their shit for Nazis.
cant-say-less-info t1_jdo6a9t wrote
Reply to comment by Ok-Training-7587 in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
Don't get me wrong. I love many Hollywood movies.
However, I hate the ones with the same old script: the protagonist starts as a complete loser, slaving away and being abused; then something magical or extraordinary happens, they completely change their life, become an alpha and a winner, and finally get to kiss the girl of their dreams and defeat the bad guy with the power of love.
fastinguy11 t1_jdonvci wrote
Reply to comment by Neurogence in "Non-AGI systems can possibly obsolete 80% of human jobs"-Ben Goertzel by Neurogence
Don't worry, then. In just a few years we will have very big, sophisticated, improved LLMs with multi-modality (images and audio). If AGI is not here by then, I am sure other avenues will be explored. But wouldn't it be great if that is all it took?