Recent comments in /f/singularity

greatdrams23 t1_jdomo5u wrote

"These arguments fall under one fundamental flaw. You cannot disprove their claims because their claims have no evidence in the first place."

That is a seriously flawed argument. If a person states that the singularity is close (or AGI, or anything else), it is up to them to prove it.

In fact, you have it completely the wrong way around. I cannot disprove a claim that the singularity is close, because the claim has no evidence in the first place.

1

flexaplext t1_jdomkfe wrote

^ This.

The anthropic principle is massive. If there were an advanced AI that spread throughout the universe, then we wouldn't be here. It would have already taken over the planet. If the dinosaurs hadn't been made extinct, then we wouldn't be here. If the speed of light was different, then we wouldn't be here.

The anthropic principle applies to absolutely everything. It is not actually strange at all how things are, because it is entirely necessary for things to be how they are in order for us to be here to perceive them.

What would actually be strange, imo, is if this is a game or a simulation. Because why the fuck make a game or simulation like this? What would be the point?

A simulation is necessarily created by an intelligent being for a purpose. We are one example of this. Our games don't look like this universe. Our universe appears to be completely devoid of point, meaning, or purpose. There is no objective or anything. It makes no sense to me why an intelligent source would simulate this. If we had the ability to create such a thing, I doubt we would make it anything like this world or universe. We would simulate things far more interesting, I imagine.

9

Neurogence OP t1_jdolina wrote

I've been reading his writings and books for over a decade. He is extremely passionate about AGI and the singularity. His concern is that by focusing too heavily on LLMs, the AI community might inadvertently limit the exploration of alternative paths to AGI. He wants a more diversified approach, where developers actively explore a range of AI methodologies and frameworks, instead of putting all their eggs into the LLM basket, to guarantee that we can be successful in creating AGI that can take humanity to the great above and beyond.

40

wildgurularry t1_jdoi5y1 wrote

Something to think about is how much the anthropic principle plays into things. Here is how I think about it: assume everything that happened to Earth was required for intelligent life to develop.

So, you need a planet with a large moon. That can only happen if a planet that was just the right size hit us during formation. That's pretty rare.

Assume you need a fairly significant axial tilt. Assume you need a rotation speed that gives a day length that is not too long and not too short.

Assume you need just enough water to cover more than half the planet, but not so much that it covers the whole planet.

Assume you need a large Jupiter-like planet to shepherd asteroids.

Assume you need periodic large asteroid impacts every few hundred million years. Not too big, just enough to "reset" life to give other species a chance to develop.

There are probably a bunch that I'm forgetting about... But I think intelligent life takes more to develop than just water and oxygen on a planet.

32

daronjay t1_jdohqsc wrote

While some of your facts are oversimplified, there is a core of truth in your argument.

The current apparent state of the universe is implausible.

It’s a lot to swallow, and it’s hard to see why most of the universe hasn’t already been turned into computronium by older lifeforms and their child AIs.

Unless it actually has…

59

0002millertime t1_jdog9af wrote

He is kind of right, in a sense. If two objects are perfectly entangled, then you can "force" the unobserved object into a certain state by observing (interacting with) only its entangled partner. Of course you aren't really forcing anything.

My favorite interpretation is the Many Worlds Interpretation. There, it's more about you yourself also becoming entangled with a certain part of the universal wave function, which limits what you can observe, due to decoherence. The Copenhagen Interpretation gives the same results, but then you have to assume an instantaneous wave function collapse affecting both entangled partners, rather than you just finding yourself in a subsection of a larger wave function that never collapses.
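A toy sketch of the statistics being described (assuming a perfectly correlated |Φ+⟩ Bell pair measured in the computational basis). This is just classical sampling of the predicted outcomes, not a real quantum simulation, but it shows the point: each individual result is random, yet the two partners always agree.

```python
import random

def measure_entangled_pair():
    """Measure both halves of a perfectly entangled |Phi+> Bell pair
    in the computational basis. Each outcome is individually random
    (50/50), but the two outcomes always agree."""
    outcome = random.choice([0, 1])  # the random result of your measurement
    return outcome, outcome          # the partner is "forced" to match

results = [measure_entangled_pair() for _ in range(1000)]

# Individually random, jointly perfectly correlated:
print(all(a == b for a, b in results))
```

Both interpretations predict exactly these statistics; they only disagree about what happens behind the scenes (collapse vs. decoherence into branches).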

6

94746382926 t1_jdo7f9x wrote

They already hire plenty of minorities; the comments about them only hiring white people were from the 1880's, so it's kind of a stupid critique. I mean, go to their website or look up their TV ads. It's already plenty diverse (probably more diverse than the actual US population).

It's like saying you won't buy a Volkswagen or Hugo Boss because they used to make their shit for Nazis.

7

cant-say-less-info t1_jdo6a9t wrote

Don't get me wrong. I love many Hollywood movies.

However, I hate the ones with the same old script, where the protagonist starts out as a complete loser, slaving away and being abused; then something magical/extraordinary happens, they completely change their life, become an alpha and a winner, and finally get to kiss the girl of their dreams and defeat the bad guy with the power of love.

3