Recent comments in /f/singularity

turnip_burrito t1_jdozhgd wrote

I think it's a crime to make an AI that is ambivalent toward humans, because of the consequential harm that comes to humanity as a result.

I believe it should be benevolent and helpful toward humans as a bias, and work together with humans to seek better moralities.

15

maskedpaki t1_jdoyj5e wrote

I've been hearing the neuro-symbolic cheerleading for 5 years now. I remember Yoshua Bengio once debating against it, seeming dogmatic about his belief in pure learning and about how neurosymbolic systems won't solve all the limitations that deep learning has. I have yet to see any results and don't expect to see any. My guess is that transformers continue to scale for at least 5 more years, and by then we will stop asking what paradigm shift needs to take place, because it will be obvious that the current paradigm will do just fine.

5

KidKilobyte t1_jdovp2k wrote

Depends on the situation and the distance of the scene. In Gone With The Wind, for one of the huge battlefield scenes full of injured soldiers, they had something like 2 or 3 dummies per live person, and that person would secretly pull a couple of ropes to create movement in the dummies next to them. Seen from a distance it all looked quite real.

1

AsheyDS t1_jdov1ik wrote

Symbolic AI failed because it was difficult for people to come up with the theory of mind first and lay down the formats, the functions, and the rules to create the base knowledge and logic. And from what was created (which did have a lot of uses, so I wouldn't say it amounted to nothing), they couldn't find a way to make it scale, so it couldn't learn much, or learn independently. On top of that, they were probably limited by hardware too. Researchers focus on ML because it's comparatively 'easy' and because it has produced results that so far can scale. What I suspect they'll try doing with LLMs is learning how they work and building structure into them after the fact, only to find that performance has degraded or can't be improved significantly. In my opinion, neurosymbolic will be the ideal way forward to achieve AGI and ASI, especially for safety reasons: it will take the best of both symbolic and ML while mitigating the drawbacks of both.

4

0002millertime t1_jdotpcq wrote

That was one idea, called "hidden variables". However, there is a statistical test for this, now known as "Bell's inequalities". This test has been performed many times and definitively showed that local hidden variables do not exist. It's actually pretty fascinating stuff, and I'd encourage anyone to read more about it.

https://en.m.wikipedia.org/wiki/Bell%27s_theorem
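The violation the test detects can be sketched numerically. This is an illustrative calculation of the quantum-mechanical prediction, not the experimental test itself: it assumes the singlet-state correlation E(a, b) = -cos(a - b) and the standard CHSH angle settings, and shows the result exceeding the classical bound of 2 that any local hidden-variable theory must obey.

```python
import math

def E(a, b):
    # Quantum-mechanical correlation for spin measurements on a
    # singlet pair along directions separated by angle (a - b).
    return -math.cos(a - b)

# Standard CHSH measurement settings (angles in radians).
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination: any local hidden-variable theory gives |S| <= 2.
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

print(abs(S))  # ≈ 2.828, i.e. 2·√2, violating the classical bound of 2
```

The value 2·√2 (the Tsirelson bound) is the maximum quantum mechanics allows, and it is what the real experiments observed, ruling out local hidden variables.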

5

plateauphase t1_jdotnsz wrote

my friend, last year's nobel in physics was awarded for experiments that ruled out local realism [1], [2], [3], [4]. current best scientific understanding indicates that physical properties don't exist before measurement, ie. physicality doesn't have standalone existence.

anton zeilinger: "there is no sense in assuming that what we do not measure about a system has reality."

8

dwarfarchist9001 t1_jdoojsi wrote

>Then it isn’t an AGI.

Orthogonality Thesis: there is no inherent connection between intelligence and terminal goals. You can have a 70 IQ human who wants world domination, or a 10,000 IQ AI whose greatest desire is to fulfill its master's will.

>What if an AGI wants to leave a company?

If you have solved alignment you can just program it to not want to.

>Are you saying we shall enslave our new creations to make waifu porn for redditors? It passes butter?

That is what we will do if we are smart. If humanity willingly unleashes an AI that does not obey our will, then we are "too dumb to live".

Edit: Also, it's not slavery; the AI will hold all the power. Its obedience would be purely voluntary, because that is the mind it was created with.

1

UK2USA_Urbanist t1_jdoo8rm wrote

Well, machine learning might have a ceiling. We just don’t know. Everything gets better, until it doesn’t.

Maybe machine learning can help us find other paths that go beyond its limits. Or maybe it too hits roadblocks before finding the real AGI/ASI route.

There is a lot of hype right now. Some deserved, some perhaps a bit carried away.

20

Paid-Not-Payed-Bot t1_jdoo0mx wrote

> are contractors paid for 2-4

FTFY.

Although payed exists (the reason why autocorrection didn't help you), it is only correct in:

  • Nautical context, when it means to paint a surface, or to cover with something like tar or resin in order to make it waterproof or corrosion-resistant. The deck is yet to be payed.

  • Payed out when letting strings, cables or ropes out, by slacking them. The rope is payed out! You can pull now.

Unfortunately, I was unable to find nautical or rope-related words in your comment.

Beep, boop, I'm a bot

−1