Recent comments in /f/MachineLearning

soricellia t1_j9tomaw wrote

Well I think that entirely depends on what the threat is, mate. The probability of AGI rising up terminator style, I agree, seems pretty small. The probability of disaster due to the inability of humans to distinguish true from false and fact from fiction being exacerbated by AI? That seems much higher. Also, I don't think either of us has a formula for this risk, so saying the probability of an event happening is infinitesimal is intellectual fraud.

6

crt09 t1_j9tncbf wrote

"Unsure what kind of goal the AI had in this case"

tbf pretty much any goal that involves you doing something on planet Earth may be interrupted by humans, so to be certain, getting rid of them probably reduces the probability of being interrupted in your goal. I think it's a jump to assume it'll be that smart, or that the alignment goal we end up using won't have any easier path to the goal than accepting that interruptibility, but the alignment issue is that it wishes it were that smart and could think of an easier way around it

3

NotSoChildishRubino OP t1_j9t91yd wrote

I was thinking the same way, but then it occurred to me that, once I have detected peaks in a spectrum, I could distinguish peaks of different nature (e.g. Gaussian vs. Lorentzian) from the peak symmetry, the FWHM, or similar characteristics. I wouldn't be able to quantify the elements, but I could use ML at a certain point, I guess.
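A minimal sketch of that idea, assuming scipy and a synthetic noisy peak standing in for a window of real spectrum data around a detected peak: fit both profiles and keep whichever leaves the smaller residual.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def lorentzian(x, amp, mu, gamma):
    # gamma is the half width at half maximum, so FWHM = 2 * gamma
    return amp * gamma ** 2 / ((x - mu) ** 2 + gamma ** 2)

# Synthetic Lorentzian peak with noise; in practice x/y would be the
# window around a peak found by e.g. scipy.signal.find_peaks
rng = np.random.default_rng(0)
x = np.linspace(-10, 10, 400)
y = lorentzian(x, 1.0, 0.0, 1.5) + rng.normal(0, 0.02, x.size)

fits = {}
for name, model in [("gaussian", gaussian), ("lorentzian", lorentzian)]:
    popt, _ = curve_fit(model, x, y, p0=[1.0, 0.0, 1.0])
    fits[name] = np.sum((y - model(x, *popt)) ** 2)

print(min(fits, key=fits.get))  # expect "lorentzian": its heavier tails fit better
```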

1

dentalperson t1_j9t6zxx wrote

> can also create highly dangerous bioweapons

The example EY gave in the podcast was a bioweapon attack. Unsure what kind of goal the AI had in this case, but maybe that was the point:

>But if it's better than you at everything, it's better than you at building AIs. That snowballs. It gets an immense technological advantage. If it's smart, it doesn't announce itself. It doesn't tell you that there's a fight going on. It emails out some instructions to one of those labs that'll synthesize DNA and synthesize proteins from the DNA and get some proteins mailed to a hapless human somewhere who gets paid a bunch of money to mix together some stuff they got in the mail in a vial. Like smart people will not do this for any sum of money. Many people are not smart. Builds the ribosome, but the ribosome that builds things out of covalently bonded diamondoid instead of proteins folding up and held together by van der Waals forces, builds tiny diamondoid bacteria. The diamondoid bacteria replicate using atmospheric carbon, hydrogen, oxygen, nitrogen, and sunlight. And a couple of days later, everybody on earth falls over dead in the same second. That's the disaster scenario if it's as smart as I am. If it's smarter, it might think of a better way to do things. But it can at least think of that if it's relatively efficient compared to humanity, because I'm in humanity and I thought of it.

2

abc220022 t1_j9t681p wrote

The shorter-term problems you mention are important, and I think it would be great for technical and policy-minded people to try to alleviate such threats. But it's also important for people to work on the potential longer term problems associated with AGI.

OpenAI, and organizations like them, are racing towards AGI - it's literally in their mission statement. The current slope of ML progress is incredibly steep. Seemingly every week it looks like some major ML lab comes up with an incredible new capability with only minor tweaks to the underlying transformer paradigm. The longer this continues to happen, the more impressive these capabilities look, and the longer we see scaling curves continue with no clear ceiling, the more likely it looks that AGI will come soon, say, over the next few decades. And if we do succeed at making AI as capable or more capable than us, then all bets are off.

None of this is a certainty. One of Yudkowsky's biggest flaws imo is the certainty with which he makes claims backed by little rigorous argument. But given recent discoveries, the probability of a dangerous long-term outcome is high enough that I'm glad we have people working on a solution to this problem, and I hope more people will join in.

1

dentalperson t1_j9t55as wrote

Here is a text transcription of the podcast, with comments.

You mention EY not being rigorous in his arguments. The timelines/probability of civilization-destroying AGI seem to need more explanation to me as well, but the type of AI safety/alignment problems he describes should be taken seriously by everyone in the community. Timelines for AGI vary within the community, from people who are confident in an AGI capable of a complete wipeout of the human race within 15 years, to 'optimists' in AI safety who think it might take several more decades. Although their timelines differ, these people mostly agree on the scenarios they are trying to prevent, because the important ones are obviously possible (powerful things can kill humans; extremely powerful things can kill extreme numbers of humans) and not hard to imagine, such as 'we asked the AGI to do harmless task X, but even though it's not evil, it killed us as a byproduct of something else it was trying to do after reprogramming itself'. (By the way, the AI safety 'optimists' are still much more pessimistic than the general ML community, which thinks it is an insignificant risk.)

There are good resources mentioned in this thread already to get other perspectives. The content is unfortunately mostly scattered in little bits and pieces over the internet. If you like popular book format/audiobooks, you could start with longer and more digestible content in Stuart Russell's Human Compatible or Nick Bostrom's Superintelligence (which is a bit dated now, but still well written).

−1

andreichiffa t1_j9t35a6 wrote

No. As a matter of fact, I consider it harmful, and I am far from being alone in that regard.

What you need to understand is that AI* kills already. Not only military/law enforcement AI that misidentifies people and leads to them being killed / searched & killed / poisoned & killed in prison, but also the types of AI that you interact with on a daily basis. Recommendation algorithms that promote disinformation regarding vaccine safety and COVID risk killed hundreds of thousands. Medical AIs that are unable to identify sepsis in 70% of cases, but are widely used and override doctors in hospitals, have killed thousands. Tesla Autopilot AIs kill their passengers on a regular basis. Conversational LLM agents will tell users how to do electrical work and kill them in the process.

But here is the thing. Working on the safety of such AIs leads to conflict - with the engineers and researchers developing them, with the execs who greenlight them, with the influencers who touted them, with the stakeholders who were getting money from the additional sales the AI feature generated. So safety and QA teams get fired, donations get made to universities to get rid of particularly vocal critics of the current state of affairs, Google de-indexes their work, and Facebook randomly and accidentally deletes their posts (Bengio vs LeCun circa 2019, I believe, and the reason the latter moved to Twitter).

The problem with the super-human AGI folks (and longtermism/EA generally, to which Eliezer Yudkowsky belongs) is that they claim none of those problems matter, because if SH-AGI arises, if it decides to mingle in human affairs, if we don't have enclaves free from it, and even if it occurs in 100 years, it will be so bad that it will make everything else irrelevant.

That's a lot of "ifs". And a long timeline. And there are pretty good theoretical reasons to believe that even if SH-AGI arises, its capabilities would not be as extensive as the EA crowd claims (impossibility theorems, and Solomonoff computability bounds with respect to energy and memory). And then there are theoretical guarantees as to why we won't be able to prevent it even if it started to emerge now (Gödel's incompleteness).

But in principle - yeah, sure why not, you never know if something interesting pops along the way.

The problem is that, in the way it is currently formulated and advertised, it hits the cultural memes (HAL, A.I., ...) and the Type A personalities of younger engineers and researchers (work on the **most important** problem, likely to make you **most famous**) in a way that completely drowns out the problems with AI that are already here - from both the general public's and the engineers' perspective.

It is perhaps not a coincidence that a lot of entities that stand to lose reputation/income from in-depth looks into current AIs' safety and alignment are donating quite a lot to EA/longtermism and lending them their own credibility.

*To avoid sterile semantic debates: to me, an AI is any non-explicitly-coded program that makes decisions on its own. Hence LLMs without a sampler are non-AI ML, whereas generative LLMs with a sampler are AI (generative ML).

3

PassionatePossum t1_j9t1e09 wrote

I have seen image classification networks being used to classify sounds via spectrograms. It is perfectly conceivable to use ML to analyze spectrograms, or to manipulate them and turn them back into sounds. Of course you can also do that directly with time-series models.
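A minimal sketch of the spectrogram-as-image approach, assuming PyTorch, a random array standing in for a 1-second, 16 kHz audio clip, and a hypothetical 10-class problem; the architecture is illustrative, not a recommendation.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

# Fake audio clip standing in for real data: 1 s of mono audio at 16 kHz
rng = np.random.default_rng(0)
audio = rng.standard_normal(16000).astype(np.float32)

# Turn the waveform into a 2D time-frequency image
freqs, times, spec = spectrogram(audio, fs=16000, nperseg=256)
log_spec = np.log1p(spec)  # compress dynamic range, as is common for spectrograms

# Treat the log-spectrogram as a 1-channel image: (batch, channel, freq, time)
x = torch.from_numpy(log_spec).unsqueeze(0).unsqueeze(0)

# Tiny CNN classifier; any image-classification backbone would slot in here
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),  # pool each feature map to a single value
    nn.Flatten(),
    nn.Linear(16, 10),        # hypothetical 10 sound classes
)

logits = model(x)
print(logits.shape)  # torch.Size([1, 10])
```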

But as long as you have a problem that can be modeled mathematically, you are usually better off sticking to mathematical models. They tend to be more computationally efficient and predictable.

1

etesian_dusk t1_j9szzww wrote

I wouldn't trust Yudkowsky to build an MLP classifier from scratch. Hell, I wouldn't trust him to solve the Titanic task on Kaggle with scikit-learn.

He's a captivating speaker, but similar to some other popular cultural-scientific figures (e.g. Jordan Peterson), I feel like he is 100% form, 0% substance (or the substance that is valuable is not original, just common knowledge repackaged).

If you don't tune in to people without a bachelor's degree giving health advice, I don't know why you should care about Yudkowsky's view of AI.

4