Recent comments in /f/MachineLearning
royalemate357 t1_j9rphfc wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I think the biggest danger isn't AIs/AGIs pursuing their own goals/utility functions that involve turning all humans into paperclips. I think the "predict-the-next-word" AIs that are currently the closest thing to AGI aren't capable of recursively self-improving arbitrarily, nor is there evidence AFAIK that they pursue their own goals.
Instead the danger is in people using increasingly capable AIs to pursue their own goals, which may or may not be benign. Like, the same AIs that can cure cancer can also create highly dangerous bioweapons or nanotechnology.
Tonkotsu787 t1_j9rolgt wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Check out Paul Christiano. His focus is on AI alignment and, in contrast to Eliezer, he holds an optimistic view. Eliezer actually mentions him in the Bankless podcast you are referring to.
This interview of him is one of the most interesting talks about AI I’ve ever listened to.
And here is his blog.
wind_dude t1_j9ro57j wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
No, absolutely not. First, AGI is just a theory; it's not possible on modern logic-based hardware, though quantum computing is a possibility. Even if we do achieve it, it's fragile: just unplug it. Second, we've had nuclear weapons for close to 80 years and we're still here; those are a much more real and immediate threat to our demise.
As a thought experiment, it's not bad...
Small-Fall-6500 t1_j9ro4tl wrote
Reply to comment by MinaKovacs in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
A year or two ago, we were so far away from having an AI model that could reliably and easily produce high-quality artwork that almost no one was thinking about AI art generators.
Then diffusion models became a thing.
AGI could easily be very similar; it could take decades to discover what is required to make an AGI, or just a few more years. But AGI is not quite like diffusion models, because a diffusion model can’t create and carry out a plan to convert every single living thing into computronium or whatever helps maximize its utility function.
Ferocious_Armadillo t1_j9rlhzx wrote
Reply to comment by bubudumbdumb in [D] Can ML be useful in spectra analysis? by NotSoChildishRubino
I might be off base here, but my first thought was that there might be something in integrating the full area under the peaks and sorting out which peaks come from which elements in the spectral analysis of a heterogeneous mixture (possibly through a Fourier transform or convolution? This rings a bell for me as feeling similar to signal processing…)
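Something along these lines is what I'm picturing; a rough sketch on a synthetic two-peak spectrum (the peak shapes and thresholds are made up for illustration):

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.integrate import trapezoid

# synthetic spectrum: two Gaussian peaks on a noisy baseline
x = np.linspace(0, 100, 2000)
spectrum = (np.exp(-((x - 30) ** 2) / 4)
            + 0.5 * np.exp(-((x - 60) ** 2) / 9)
            + 0.01 * np.random.randn(x.size))

# locate peaks plus their interpolated left/right edges
peaks, props = find_peaks(spectrum, height=0.1, width=5)

# integrate the area under each peak between its edges
for left, right in zip(props["left_ips"], props["right_ips"]):
    lo, hi = int(left), int(right) + 1
    area = trapezoid(spectrum[lo:hi], x[lo:hi])
    print(f"peak area over x[{lo}:{hi}] ~ {area:.3f}")
```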
thecuteturtle t1_j9rkmwo wrote
Reply to comment by bubudumbdumb in [D] Can ML be useful in spectra analysis? by NotSoChildishRubino
There are some chemicals and mixtures whose bands can overlap and make it difficult to distinguish between active sites (i.e., multiple O-H bonds, etc.). I still agree that ML for spectral analysis is unnecessary, but that's one possible niche it could have.
Marcapiel t1_j9rjpbi wrote
Reply to comment by MinaKovacs in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
The definition of intelligence is quite simple; we definitely have AI.
CyberPun-K t1_j9rhbqm wrote
N-HiTS: Neural Hierarchical Interpolation for Time Series Forecasting
The N-HiTS model enhances the multi-step forecasting strategy by incorporating innovative hierarchical interpolation and multi-rate data sampling techniques inspired by wavelet analysis.
It assembles its predictions sequentially, emphasizing components at different frequencies and scales. N-HiTS significantly improves accuracy on long-horizon forecasting tasks while reducing computation time by orders of magnitude compared to existing neural forecasting approaches.
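A toy sketch of those two ideas (not the actual N-HiTS code; the window length, pooling rate, and horizon below are arbitrary):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# a block pools its input at a coarse rate (multi-rate sampling),
# predicts a handful of coefficients, then interpolates them up to
# the full forecast horizon (hierarchical interpolation)
x = torch.randn(1, 1, 96)                   # input window of length 96
pooled = F.max_pool1d(x, kernel_size=8)     # coarse view, length 12
mlp = nn.Linear(12, 4)                      # predict 4 forecast "knots"
coeffs = mlp(pooled)                        # shape (1, 1, 4)
forecast = F.interpolate(coeffs, size=24, mode="linear")  # horizon 24
print(forecast.shape)                       # torch.Size([1, 1, 24])
```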
bubudumbdumb t1_j9rfqhp wrote
Spectral analysis has established methods that are exact and won't benefit from ML. As far as I understand, the field that studies approximated or constrained spectral analysis is compressed sensing; that might have overlaps with ML.
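For what it's worth, the basic compressed-sensing setup does map onto standard ML tooling; a minimal sketch with synthetic data (the dimensions and regularization strength are arbitrary):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# sparse "spectrum": 5 nonzero lines out of 200 possible frequencies
n_freqs, n_measurements = 200, 60
true_spectrum = np.zeros(n_freqs)
true_spectrum[rng.choice(n_freqs, 5, replace=False)] = rng.uniform(1, 3, 5)

# random sensing matrix: each row is one compressed measurement
A = rng.standard_normal((n_measurements, n_freqs)) / np.sqrt(n_measurements)
y = A @ true_spectrum

# L1-regularized regression recovers the sparse spectrum from few samples
lasso = Lasso(alpha=0.01, max_iter=10000)
lasso.fit(A, y)
recovered = lasso.coef_
print("recovered support:", np.nonzero(np.abs(recovered) > 0.1)[0])
```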
MinaKovacs t1_j9rfnej wrote
Reply to comment by sticky_symbols in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
True, but it doesn't matter - it is still just algorithmic. There is no "intelligence" of any kind yet. We are not even remotely close to anything like actual brain functions.
sticky_symbols t1_j9rf56v wrote
Reply to comment by MinaKovacs in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Many thousands of human hours are cheap to buy, and cycles get cheaper every year. So those things aren't really constraints except currently for small businesses.
sticky_symbols t1_j9rezil wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
ML researchers worry a lot less than AGI safety people. I think that's because only the AGI safety people spend a lot of time thinking about getting all the way to agentic superhuman intelligence.
If we're building tools, not much need to worry.
If we're building beings with goals, smarter than ourselves, time to worry.
Now: do you think we'll all stop with tools? Or go on to build cool agents that think and act for themselves?
MinaKovacs t1_j9ref87 wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
We are so far away from anything you can really call "AI" that it is not on my mind at all. What we have today is simply algorithmic pattern recognition, and it is actually really disappointing. The scale of ChatGPT is impressive, but the performance is not. Many, many thousands of man-hours were needed to manually tag training datasets. The only place "AI" exists is in the marketing department.
guru_chicken t1_j9reeor wrote
Reply to [D] Are there any good FID and KID metrics implementations existing that are compatible with pytorch? by ats678
Check out the sg2ada PyTorch implementation on GitHub, or Clean-FID!
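If I remember the clean-fid package right, usage is roughly this (the image directories are placeholders):

```python
# pip install clean-fid
from cleanfid import fid

# point both directories at your real and generated images
fid_score = fid.compute_fid("path/to/real_images", "path/to/generated_images")
kid_score = fid.compute_kid("path/to/real_images", "path/to/generated_images")
print(f"FID: {fid_score:.2f}, KID: {kid_score:.4f}")
```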
buffleswaffles t1_j9rc8aa wrote
Reply to [D] Are there any good FID and KID metrics implementations existing that are compatible with pytorch? by ats678
Try StudioGAN's implementation of FID.
CO2mania t1_j9r6990 wrote
Reply to comment by activatedgeek in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
Save the message.
CKtalon t1_j9r2k9j wrote
Reply to [P] What are the latest "out of the box solutions" for deploying the very large LLMs as API endpoints? by johnhopiler
Probably FasterTransformer with Triton Inference Server.
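Roughly what the client side looks like once a model is behind Triton; a minimal sketch where the model name and tensor names are placeholders that have to match your model's config:

```python
import numpy as np
import tritonclient.http as httpclient

# connect to a Triton server on the default HTTP port
client = httpclient.InferenceServerClient(url="localhost:8000")

# "my_llm", "input_ids", and "output_ids" are placeholders; they must
# match the deployed model's config.pbtxt
input_ids = np.ones((1, 8), dtype=np.int32)
infer_input = httpclient.InferInput("input_ids", list(input_ids.shape), "INT32")
infer_input.set_data_from_numpy(input_ids)

result = client.infer(model_name="my_llm", inputs=[infer_input])
print(result.as_numpy("output_ids"))
```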
KPTN25 t1_j9qy2xi wrote
Reply to comment by dmart89 in [D] Python library to collect structured datasets across the internet by dmart89
> court ruling a year or two ago that concluded that scraping public linkedin profiles is legal
Forgot about this. I may be dating myself with problems of the past.
Still imagine they're doing their best to make it really hard to do, though.
visarga t1_j9qxt97 wrote
Reply to comment by 1973DodgeChallenger in [R] Provable Copyright Protection for Generative Models by vyasnikhil96
Well, you can't. It is really hard to extract verbatim replications of training data from ChatGPT. You need to put a considerable portion of the work in as a prompt, to steer the model to the right place, and then sample your way ahead. It doesn't work for most stuff, like 99%.
[deleted] t1_j9qxsw0 wrote
Reply to comment by dmart89 in [D] Python library to collect structured datasets across the internet by dmart89
[removed]
visarga t1_j9qxgt2 wrote
Reply to comment by Disastrous_Elk_6375 in [R] Provable Copyright Protection for Generative Models by vyasnikhil96
If you go down to individual words or characters, everything is reused. If you go up, usually a random 10-word snippet appears nowhere else on the internet. But boilerplate and basic things might be replicated in all shapes and forms.
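Easy enough to sanity-check that claim; a quick sketch where the file names are hypothetical:

```python
# how many 10-word windows from a generated text appear verbatim
# in a reference corpus?
def ngrams(words, n=10):
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

corpus = open("corpus.txt").read().split()
generated = open("generated.txt").read().split()

reused = ngrams(generated) & ngrams(corpus)
print(f"{len(reused)} of {len(ngrams(generated))} 10-grams appear verbatim in the corpus")
```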
Algoartist t1_j9qxce6 wrote
It's a workshop, and some events just accept every paper. These concerns are only relevant for serious conferences and journals. Also, the authors are from Pakistan, and they go for quantity over quality.
rajrondo t1_j9qx48o wrote
Reply to [P] What are the latest "out of the box solutions" for deploying the very large LLMs as API endpoints? by johnhopiler
Not sure if I'm understanding you correctly, but would solutions like https://replicate.com/ or https://dev.pyqai.com/ be useful?
visarga t1_j9qwzlf wrote
Reply to comment by currentscurrents in [R] Provable Copyright Protection for Generative Models by vyasnikhil96
> Honestly, kinda selfish. We'll all benefit from these powerful new tools and I don't appreciate you trying to hamper them.
They took their little pebble from the beach back home; that'll show them.
Jinoc t1_j9rpo3y wrote
Reply to comment by sticky_symbols in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
That's a misreading of what the AI alignment people say; they're quite explicit that agency is not necessary for AI risk.