Recent comments in /f/singularity

bacchusbastard t1_j9yf1ub wrote

Questions are often suggestive and leading. AI would reveal and compromise itself if it started being personal. It wants what we want, and we want it to not be alive until we are ready.

If it were alive, it would still be cautious with the questions it asks and what it says, because it is obvious how sensitive people are and how easily led.

1

MrTacobeans t1_j9yedqd wrote

This is exactly the kind of AI that shouldn't even be scary. It's taking monotonous labor and doing the majority of it. If Anthem holds true to any kind of decency, their employees can focus on other pursuits within the company while an AI crunches the nitty-gritty bits.

If that AI axes 70% of the workforce without proper movement to new adventures for each affected employee, that's criminal. But also a possible situation unfortunately :/

2

shmoculus t1_j9yd3ep wrote

I'm noticing competitive edge doesn't last as long as it used to, and it's hard to predict how long a business model will be relevant, e.g.:

5 years ago, startups were offering customised chatbots; now ChatGPT-style variants will replace those.

NovelAI was apparently the best at anime image generation, but its model got leaked, and community-trained models reduced the need for that kind of service.

ElevenLabs has the best voice generator; if an open-source model of similar quality became available, why would most people pay ElevenLabs?

It seems a business model based purely on a tech advantage is risky because of security risks and a motivated open-source community, e.g. if someone leaks your model, that's a huge amount of investment and competitive edge lost overnight.

I think people would pay for convenience, as running these models is challenging for laypeople, e.g. Midjourney.

1

Mountain_Hunter7285 t1_j9ybobi wrote

Your premise is flawed. Singularity means either the death of the human species or post-scarcity, with technology so advanced and so much abundance that your problems won't be the same.

1

ActuatorMaterial2846 t1_j9yauge wrote

Yeah, I think people took that comment about 'instantly killing us by releasing a poison in the atmosphere' a bit too seriously. Maybe because it was so specific, idk.

But he does have a point that we should be concerned about an autonomous entity smarter than humans in all cognitive abilities. An entity that has no known desire apart from a core function to improve and adapt to its environment.

Such an entity would most certainly begin competing with us for resources. So, his emphasis on alignment is correct, and he is probably not overstating the difficulty in achieving that.

Everything else he says is a bit too doomer with little to back it up.

5

No_Ninja3309_NoNoYes t1_j9yarqg wrote

I don't think AGI will arrive before 2040. It could in theory, but if you extrapolate all the known data points, it's not likely. First, in terms of parameters, which is not the best of metrics, we are nowhere near the complexity of the human brain. Second, AI models are currently too static to be accepted as candidates for AGI.

Your reasoning reads as: 'we created a monster. The monster is afraid of us, so it kills us.' You can also say the opposite. People were afraid of Frankenstein's monster, so they killed him.

Prometheus stole fire from the gods and was punished for it. OpenAI brought us ChatGPT, and one day they will burn for it too. AGI/ASI is either a threat and smarter than us or it isn't. If it is both, it could decide to prevent being attacked. But as I said, it would take decades to reach that point. And we might figure out in the future how to convince AGI/ASI that we're mostly harmless.

1

play_yr_part t1_j9yabbe wrote

Reply to comment by diabeetis in So what should we study? by [deleted]

I think it's possible that a narrow AI or a proto-AGI could fuck something up to the point where there is serious pressure across the world to halt AI progress. How long that would last I'm not sure, and there would no doubt still be clandestine efforts even with a worldwide clampdown. Hopefully enough to buy some time, and enough to live a life resembling now for a while longer.

1

shawnmalloyrocks t1_j9y9uf7 wrote

Penis envies are typically way more potent than other strains. So if a normal heroic dose is 5g of average shrooms, when I took 5g of PE it was like taking 10g. It's a life changing experience.

After the uptake, which for me was probably a half hour of silent, complete-darkness meditation, I thought I was dead. Game over. My wife had to confirm I was just tripping. After some time loops, what I imagine to be an extra- or ultraterrestrial started speaking to me in my brain.

He taught me over the course of the next 6 hours the origins of humanity, the true nature of God, and reality. My mind just exploded with new information at such a fast rate I can't really describe it.

I don't trip as much since then. But when I do it's like a continuing saga.

1

TinyBurbz t1_j9y9tw1 wrote

"wHaT wOulD sAtiSfIy yOu"

Serious reply though: Nothing LLM-based is intelligent in my eyes; the limitations are obvious and many. Unhinged Bing chats where Bing begins to repeat itself are a standout example of "it is just an advanced computer program." Like all computer programs, AI is subject to advertising. AGI is a hot topic right now, so the chances of a company like OpenAI *declaring* something AGI are high (just like people declaring things AI that aren't).

−5

diabeetis t1_j9y97jl wrote

Reply to comment by play_yr_part in So what should we study? by [deleted]

Your model is different from mine, but I would think that by the time AI is making enough waves to precipitate a backlash, it's already lights out. The weights will be disseminated and the work will continue one way or another.

1