Recent comments in /f/MachineLearning

royalemate357 t1_j9rphfc wrote

I think the biggest danger isn't AIs/AGIs pursuing their own goals/utility functions that involve turning all humans into paperclips. The "predict-the-next-word" AIs that are currently the closest thing to AGI aren't capable of recursively self-improving arbitrarily, nor is there evidence AFAIK that they pursue their own goals.

Instead the danger is in people using increasingly capable AIs to pursue their own goals, which may or may not be benign. Like, the same AIs that can cure cancer can also create highly dangerous bioweapons or nanotechnology.

45

Tonkotsu787 t1_j9rolgt wrote

Check out Paul Christiano. His focus is on AI alignment and, in contrast to Eliezer, he holds an optimistic view. Eliezer actually mentions him in the Bankless podcast you're referring to.

This interview with him is one of the most interesting AI talks I've ever listened to.

And here is his blog.

33

wind_dude t1_j9ro57j wrote

No, absolutely not. First, AGI is just a theory; it's not possible on modern logic-based hardware, though quantum is a possibility. Even if we do achieve it, it's fragile: just unplug it. Second, we've had nuclear weapons for close to 80 years and we're still here, and those are a much more real and immediate threat to our demise.

As a thought experiment, it's not bad...

−16

Small-Fall-6500 t1_j9ro4tl wrote

About a year or two ago, we were so far away from having an AI model that could reliably and easily produce high quality artwork that almost no one was thinking about AI art generators.

Then diffusion models became a thing.

AGI could easily be very similar; it could take decades to discover what is required to make an AGI, or just a few more years. But AGI is not quite like diffusion models, because a diffusion model can’t create and carry out a plan to convert every single living thing into computronium or whatever helps maximize its utility function.

11

Ferocious_Armadillo t1_j9rlhzx wrote

I might be off base here, but my first thought was that there might be something to integrating the full area of the peaks and separating out the peaks from specific elements in the spectral analysis of a heterogeneous mixture (possibly through a Fourier transform or convolution? This rings a bell for me as feeling similar to signal processing…)
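For what that intuition is worth, here's a minimal sketch of the peak-finding-plus-integration idea on a made-up "spectrum" of two Gaussian components (everything here — the peak positions, widths, and window size — is a hypothetical example, not anyone's real data or pipeline):

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic "spectrum": two Gaussian peaks standing in for two
# components of a heterogeneous mixture.
x = np.linspace(0, 100, 2000)
spectrum = (
    1.0 * np.exp(-((x - 30) ** 2) / (2 * 1.5 ** 2))   # component A
    + 0.6 * np.exp(-((x - 55) ** 2) / (2 * 2.0 ** 2))  # component B
)

# Locate peaks above a height threshold.
peaks, props = find_peaks(spectrum, height=0.1)

# Integrate the area under each peak over a fixed window around it
# (trapezoidal rule) as a crude proxy for each component's contribution.
areas = []
for p in peaks:
    lo, hi = max(p - 100, 0), min(p + 100, len(x) - 1)
    areas.append(np.trapz(spectrum[lo:hi], x[lo:hi]))

print(len(peaks), [round(a, 2) for a in areas])
```

Real mixtures would need baseline correction and deconvolution of overlapping peaks (which is where the convolution/Fourier angle actually comes in), but the area-per-peak bookkeeping looks like this.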

1

CyberPun-K t1_j9rhbqm wrote

N-HiTS: Neural Hierarchical Interpolation for Time Series Forecasting

The NHITS model enhances the multi-step forecasting strategy by incorporating innovative hierarchical interpolation and multi-rate data sampling techniques inspired by wavelet analysis.

It assembles its predictions sequentially, with different components emphasizing different frequencies and scales. NHITS significantly improves accuracy in long-horizon forecasting tasks while reducing computation time by orders of magnitude compared to existing neural forecasting approaches.
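A toy sketch of the two ideas named above — multi-rate input sampling and hierarchical interpolation — with NumPy (this is an illustration of the mechanism, not the official N-HiTS implementation; the "knot prediction" here is a placeholder where the real model has a learned MLP):

```python
import numpy as np

horizon = 24
t = np.arange(128)
series = np.sin(2 * np.pi * t / 24) + 0.1 * np.sin(2 * np.pi * t / 6)

def multi_rate_sample(x, rate):
    """Max-pool the input with kernel size `rate`: larger rates give a
    coarser, lower-frequency view of the series."""
    n = len(x) // rate
    return x[: n * rate].reshape(n, rate).max(axis=1)

def hierarchical_forecast(x, n_knots, horizon):
    """Stand-in 'stack': emit only n_knots values (placeholder: the last
    observation repeated) and linearly interpolate up to the horizon."""
    knots = np.repeat(x[-1], n_knots)  # a learned MLP in the real model
    knot_pos = np.linspace(0, horizon - 1, n_knots)
    return np.interp(np.arange(horizon), knot_pos, knots)

# Coarse stack: heavily pooled input, few knots (low frequencies);
# fine stack: lightly pooled input, more knots (high frequencies).
# The final forecast is the sum of the stacks' contributions.
forecast = (
    hierarchical_forecast(multi_rate_sample(series, 8), n_knots=3, horizon=horizon)
    + hierarchical_forecast(multi_rate_sample(series, 2), n_knots=12, horizon=horizon)
)
print(forecast.shape)
```

The efficiency claim falls out of this structure: each stack predicts only a handful of knots instead of the full horizon, so the output dimensionality (and compute) stays small even for very long horizons.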

0

sticky_symbols t1_j9rezil wrote

ML researchers worry a lot less than AGI safety people. I think that's because only the AGI safety people spend a lot of time thinking about getting all the way to agentic superhuman intelligence.

If we're building tools, not much need to worry.

If we're building beings with goals, smarter than ourselves, time to worry.

Now: do you think we'll all stop with tools? Or go on to build cool agents that think and act for themselves?

0

MinaKovacs t1_j9ref87 wrote

We are so far away from anything you can really call "AI" that it is not on my mind at all. What we have today is simply algorithmic pattern recognition, and it is actually really disappointing. The scale of ChatGPT is impressive, but the performance is not. Many, many thousands of man-hours were needed to manually tag training datasets. The only place "AI" exists is in the marketing department.

−7