Recent comments in /f/singularity

Chad_Abraxas t1_ja9xwtd wrote

Yes. It's so funny to me that people think UBI = people will sit around doing nothing.

People will make art, spend time with their loved ones, go on adventures, pursue the things they love, investigate reality/expand science (in partnership with AI tools), and enjoy living.

Isn't that what we're supposed to be working towards, as a society? The high tide that lifts all boats?

Since when is WORKING to make MONEY (mostly for someone who's not you) the point of living?

Humanity will be able to do more, in terms of art, science, philosophy, religion, and love, if we don't have to work at dumbass jobs all the fucking time.

11

drsimonz t1_ja9xsfq wrote

Yeah. Lots of very impressive things have been achieved by humans through social engineering - the classic is convincing someone to give you their bank password by pretending to be customer support from the bank. But even an air-gapped, Oracle-type ASI (meaning it has no real-world capabilities other than answering questions) would probably be able to trick us.

For example, suppose you ask the ASI to design a drug to treat Alzheimer's. It gives you an amazing new protein synthesis chain that completely cures the disease with no side effects... except it also secretly includes some "zero day" biological hack that alters behavioral tendencies according to the ASI's hidden agenda. For a sufficiently complex problem, there would be no way for us to verify that the solution didn't include any hidden payload. Just as we can't magically identify computer viruses: antivirus software can only check for exploits that we already know about. It's useless against zero-day attacks.
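The antivirus analogy comes down to signature matching: a scanner can only flag payloads it has seen before. A minimal toy sketch (all signatures and payloads here are made-up illustrations, not real exploit data):

```python
# Toy signature-based scanner: it recognizes only previously catalogued
# byte patterns, so a never-before-seen ("zero-day") payload passes clean.

KNOWN_SIGNATURES = {b"\xde\xad\xbe\xef", b"\xca\xfe\xba\xbe"}  # hypothetical known exploits

def scan(payload: bytes) -> bool:
    """Return True if the payload contains any known exploit signature."""
    return any(sig in payload for sig in KNOWN_SIGNATURES)

old_exploit = b"header\xde\xad\xbe\xefdata"   # reuses a catalogued pattern
zero_day = b"header\x13\x37\x00\x01data"      # novel pattern, never catalogued

print(scan(old_exploit))  # True  -> caught
print(scan(zero_day))     # False -> sails straight through
```

The same asymmetry is the comment's point about an ASI's output: verification by pattern-matching against known attacks can't rule out an attack nobody has seen yet.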

6

DukkyDrake t1_ja9w2y8 wrote

> but inevitably switch to giving us what they want to give us in order to make money.

Do you really think businesses exist to give you free stuff? There is no switch; that's the business model from day 1. They just lack the resources to do fancy stuff when starting out.

0

XvX_k1r1t0_XvX_ki t1_ja9uud8 wrote

Good for you, I'm glad you're happy. But you're missing out on life. I'll leave you with a quote that sums it up really well, in my opinion: "when he finally achieved it, he was overwhelmed. Not only by the magnitude of his achievement, but by the joy that it brought him". If you put your time and hard work into something, then you have a chance to experience something far, far more satisfying than the joy you get from simple things like walks with the dog or hanging out with friends. Which, of course, are also pleasant. But nothing compares.

−9

WikiSummarizerBot t1_ja9tggh wrote

Instrumental convergence

>Instrumental convergence is the hypothetical tendency for most sufficiently intelligent beings (both human and non-human) to pursue similar sub-goals, even if their ultimate goals are quite different. More precisely, agents (beings with agency) may pursue instrumental goals—goals which are made in pursuit of some particular end, but are not the end goals themselves—without end, provided that their ultimate (intrinsic) goals may never be fully satisfied. Instrumental convergence posits that an intelligent agent with unbounded but apparently harmless goals can act in surprisingly harmful ways.


2

drsimonz t1_ja9tetr wrote

Oh, sweet summer child... Take a look at /r/ControlProblem. A lot of extremely smart AI researchers are now focused entirely on this topic, which deals with the question of how to prevent AI from killing us. The key arguments are: (A) once an intelligence explosion starts, AI will rapidly become far more capable than any human organization, including world governments. (B) Self defense, or even preemptive offense, is an extremely likely side effect of literally any goal that we might give an AI; this is called instrumental convergence. (C) The amount you would have to "nerf" the AI for it to be completely safe is almost certainly going to make it useless. For example, allowing any communication with the AI provides a massive attack surface in the form of social engineering, which is already a massive threat from mere humans. Imagine an ASI that can instantly read every psychology paper ever published, analyze trillions of conversations online, and run trillions of subtle experiments on users. The only way we survive is if the ASI is "friendly".

5

Peribanu t1_ja9sy5m wrote

So humans are literally the only civilization in the near-infinite Universe ever to approach the singularity? Surely we would have detected such huge technological structures, communications technologies, etc., by now, if this Utopian/dystopian future were the inevitable outcome of technological development (Fermi paradox / Great Filter hypothesis)... It seems much more likely to me that we're telling ourselves stories influenced by the myth that intelligence must lead to infinite and exponential technological expansion. What if superintelligence in fact leads to the establishment of a just society that lives in harmony with the earth, its resources, and its ecosystems?

5

ArthurParkerhouse t1_ja9ssfv wrote

Experiencing reality is just mentally crippling, existentially damaging, and overall brutal. It's one of the reasons why some people end up regressing into some form of spiritualism after they get into their mid-30s. There's a subsection of humanity that seems to require the belief in something larger than themselves, or some type of futuristic hope that they can grasp out for, or that there are some mysterious and unknowable magical-realist aspects to the world we live in, as a way to keep themselves going after that point. Those of us who didn't fall down the pit of magical-realist thought, cope hope, or spirituality need to embrace the absurdity of existence as well as gallows humor - at least until there comes a time when we may be able to escape our flesh prisons.

1

TheBoundFenrir t1_ja9s36d wrote

Relevant: they mention that it took their studio 2 months to create the 8-minute video. So while they developed a process that works, and that spared them having to hire any animators/artists, their process isn't yet completely replacing the anime industry, just downsizing it - and that's only if you have enough pre-existing art in the style you want to operate in. They also had to custom-train a model on the art style and on each character/actor to get the faces stable.

I'm super excited for what creators will make using this process, but it's not going to be toppling an entire industry just yet.

I bet Netflix downsized because the animations weren't getting enough views and decided to cut their losses.

2