Recent comments in /f/MachineLearning

MuonManLaserJab t1_j9udmcp wrote

> EY tends to go straight to superintelligent AI robots making you their slave.

First, I don't think he ever said that they would make us slaves, except possibly as a joke at the expense of people who think the AI will care about us, or need us, enough to make us slaves.

Second, I am frustrated that you seem to think only the short-term threats matter. Which is the more short-term threat: nuclear contamination from the destruction of the Zaporizhzhia plant (ZNPP) in Ukraine, or all-out nuclear war? Contamination is more likely, but that doesn't mean we wouldn't be stupid to ignore the more remote yet incredibly catastrophic outcome of nuclear war. Why can't you be worried about short-term AI issues while also acknowledging the slightly longer-term risk of superintelligent AI?

Depressingly, this attitude is typical, and unfortunately it's not at all surprising that it's the top comment here.

4

Appropriate_Ant_4629 t1_j9ubt3u wrote

> I understand that I am not personally helping the situation, but I am not going to take a huge pay cut to work on those problems, especially when that pay cut would be at my expense

I think you have this backwards.

Investment Banking and the Defense Industry are two of the richest industries in the world.

> Those models are being built by contractors who subcontract the work out, which means they're being built by people who are not getting paid well, i.e., not senior or experienced folks.

The subcontractors for that autonomous F-16 fighter in the news last month are not underpaid; nor are the Palantir guys making the software used to choose whom autonomous drones hit; nor are the people building the ML models guiding the real-estate investment corporations that bought a quarter of the homes sold this year.

It's the guys trying to do charitable work using ML (counting endangered species in national parks, etc.) who are far more likely to be the underpaid interns.

3

pyepyepie t1_j9uanug wrote

In all honesty, at some point any evaluation that is not qualitative is simply a joke. I observed this a long time ago while working on NMT and trying to base the results on the BLEU score - it literally meant nothing. Trying to force new metrics based on simple rules or computation will probably fail - I believe we need humans, or stronger LLMs, in the loop. E.g., humans should rank the outputs of multiple LLMs, and the same humans should do so for multiple different language models, not just the new one. Otherwise, I view it as a meaningless self-promoting paper (LLMs are not interesting enough to read about if there are no new ideas and no better performance). Entropy is good for language models that are like "me language model me no understand world difficult hard", not GPT-3-like ones.
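
As a concrete illustration, here is a minimal sketch of the BLEU failure mode I mean (assuming `nltk` is installed; the sentences are made-up examples): a faithful paraphrase with little n-gram overlap scores near zero even though the meaning is preserved.

```python
# Minimal sketch: BLEU is n-gram overlap, so a faithful paraphrase
# that uses different words scores near zero despite matching meaning.
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

reference = "the cat sat on the mat".split()
paraphrase = "a feline was sitting upon the rug".split()

smooth = SmoothingFunction().method1  # avoids hard zeros when an n-gram order has no matches
score = sentence_bleu([reference], paraphrase, smoothing_function=smooth)
print(f"BLEU: {score:.4f}")  # ~0.0 - yet a human ranker would score this pair highly
```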

Edit: this semantic uncertainty work looks interesting, but I would still rather let humans rank the results.

8

Jinoc t1_j9u8ces wrote

That’s… not what his followers are saying. The hand-wringing about Bing hasn’t been about its misalignment per se, but about what it proves about the willingness of Microsoft and OpenAI to rush a defective product to release in an arms-race situation. It’s not that the alignment is bad, it’s that alignment clearly didn’t register as a priority for leadership, and it’s dangerous to expect that to improve as AIs get more capable.

6

perspectiveiskey t1_j9u6u27 wrote

Let me flip that on its head for you: what makes you think that the Human-like AI is something you will want to be your representative?

What if it's a perfect match for Jared Kushner? Do you want Jared Kushner representing us on Alpha Centauri?

Generally, the whole "AI is fine / AI is not fine" debate always comes down to these weird false dichotomies or dilemmas. And imo, they are always rooted in the false premise that what makes humans noble - what gives them their humanity - is their intelligence.

Two points: a) AI need not be human-like to have devastating lethality, and b) an AGI is almost certainly not going to be "like you", in the same way that most humans aren't like you.

AI's lethality comes from its cheapness and speed of deployment. Whereas a Jared Kushner (or insert your favorite person to dislike) takes 20 years to create from scratch, an AI takes a few hours.

2

maxToTheJ t1_j9u6gj5 wrote

> Ones that unfairly send certain people to jail, ones that reinforce unfair lending practices, ones that will target the wrong people even more aggressively than humans target the wrong people today.

Those examples are what I was alluding to, with maybe a little too much hyperbole, when I said “interns”. The most senior or best people are absolutely not building those models. Those models are being built by contractors who subcontract the work out, which means they're being built by people who are not getting paid well, i.e., not senior or experienced folks.

Those jobs aren’t exciting and aren’t being rewarded financially by the market. I understand that I am not personally helping the situation, but I am not going to take a huge pay cut to work on those problems, especially when that pay cut would be at my expense, for the benefit of contractors who have historically been scummy.

−2

terath t1_j9u4o7b wrote

If we're getting philosophical: in a weird way, if we ever do manage to build human-like AI - and I personally don't believe we're at all close yet - that AI may well be our legacy. Long after we've all died, that AI could potentially still survive in space, or in environments we can't.

Even if we somehow survive for millennia, it will always be nearly infeasible for us to travel the stars. But it would be pretty easy for an AI that can just put itself in sleep mode for the time it takes to move between systems.

If such a thing happens, I just hope we don't truly build them in our image. The universe doesn't need such an aggressive and illogical species spreading. It deserves something far better.

1

perspectiveiskey t1_j9u1r9n wrote

AI reduces the "proof of work" cost of an Andrew Wakefield paper. This is significant.

There's a reason people don't dedicate long hours to writing completely bogus scientific papers which will result in literally no personal gain: it's because they want to live their lives and do things like have a BBQ on a nice summer day.

The work involved in sounding credible and legitimate is one of the few barriers holding the edifice of what we call Science standing. The other barrier is peer review...

Both of these barriers are under serious threat from the ease of generation. AI is our infinite-monkeys-on-infinite-typewriters moment.

This is to say nothing of much more insidious and clever intrusions into our thought institutions.

2

MrAcurite t1_j9tzdl6 wrote

Eliezer Yudkowsky didn't attend high school or college. I'm not confident he understands basic calculus or linear algebra, let alone modern machine learning. So yes, I will dismiss his views without seriously engaging with them, for the same reason that any physics professor will dismiss emails from cranks talking about their "theories."

1

Tseyipfai t1_j9tvfby wrote

Re: things that will happen rather soon, I think it's important that we also look at AI's impact on nonhuman animals. I argued this in this paper. AI-controlled drones are already killing animals, some AIs are helping factory farming, and language models are showing speciesist patterns that might reinforce people's bad attitudes toward some animals (ask ChatGPT for recipes for dog meat vs. chicken meat, or just google "chicken" to see whether you mainly get the animal or their flesh).

Actually, even for things that could happen in the far future, I think it's extremely likely that what AI does will immensely impact nonhuman animals too.

1

SleekEagle t1_j9tttxr wrote

Until the tools start exhibiting behavior that you didn't predict, in ways you have no control over. I'm not taking a position on which side is "right", just saying that this is a false equivalence with respect to the arguments being made.

EDIT: Typo

6