Recent comments in /f/MachineLearning

PiGuyInTheSky t1_j9sx3nd wrote

I'd like to link Robert Miles' Intro to AI Safety (and his YouTube channel in general) as an accessible and well-presented way to learn about technical risk in AI safety. It's a field without a clear prevailing paradigm, so there are many diverse viewpoints, of which EY's is just one. There are philosophical problems to solve, yes, but there are also very technical problems to solve, like power-seeking, inner misalignment, and mechanistic interpretability, which are much less funded than traditional capabilities research.

In general, taking high-stakes risks without thinking enough about them is just... kind of reckless, whether you're an individual or a company or a country or a nuclear physicist, etc. We've already demonstrated in real systems (e.g. Bing, or even social media recommender systems) that AI can be harmful and not behave as intended. I think it's just prudent of us to at least try and be careful, y'know, slow down and do some safety research, before doing things that might irreversibly change the world.

Like, imagine if a civil engineer just drew up the blueprint for a bridge without considering its stability, or weight, or materials, and it just got built 'cause there's no regulation against building unsafe bridges, it's much easier to build dangerous bridges than strong ones, anyone has access to the tools to build a bridge, lots of people think that building "safe" bridges = building beautiful bridges, etc. Viewed from a high level, this situation (which I hope you agree sounds quite silly) is remarkably similar to that of AI research today.

11

DeMorrr t1_j9sww75 wrote

There was this one time when a company I was applying to actually asked me to talk about spam classification, so I did, as best as I could. But after the interview, they completely ghosted me. Man, these food companies are the worst.

34

governingsalmon t1_j9svhv8 wrote

I agree that we don’t necessarily need AI for nefarious actors to spread scientific misinformation, but I do think AI introduces another tool or weapon that could be used by the Andrew Wakefields of the future in a way that might pose unique dangers to public health and public trust in scientific institutions.

I’m not sure whether it was malevolence or incompetence that has mostly contributed to vaccine misinformation, but if one intentionally sought to produce fake but convincing scientific-seeming work, wouldn’t something like a generative language model allow them to do so at a massively higher scale with little knowledge of a specific field?

I’ve been wondering what would happen if someone flooded a set of journals with hundreds of AI-written manuscripts without any real underlying data. One could even have all the results support a given narrative. Journals might develop intelligent ways of counteracting this but it might pose a unique problem in the future.

3

HINDBRAIN t1_j9sthbq wrote

"Your discarded toenail could turn into Keratinator, Devourer of Worlds, and end all life in the galaxy. We need agencies and funding to regulate toenails."

"That's stupid, and very unlikely."

"You are dismissing the scale of the threat!"

−5

Soc13In t1_j9stb29 wrote

Much like that, there are issues with what recommender systems are recommending, how credit models are scoring, why your résumé is being discarded without ever being seen by a human being, and lots of other minor mundane daily things that we take for granted and that are actually dystopic for the people at the short end of the stick. These are systems that need to be fine-tuned, yet we treat their judgements as if they were holy lines carved into the Commandment stones. The AI dystopia is already real.

5

needlzor t1_j9sspwd wrote

Surprised I had to scroll down this much to see this opinion, which I agree completely with. The danger I worry about most isn't superintelligent AI, it's people like Yudkowsky creating their little cults around the potential for superintelligent AI.

9

okokoko t1_j9srgl5 wrote

>Meanwhile, if alignment is impossible, ordinary people who have access to these hypothetical future 'superintelligences' can convince these entities to do things that they like

Interesting. How are you gonna "convince" an unaligned AI though, I wonder? I feel like there is a flaw in your reasoning here.

5

FinancialElephant t1_j9sqtwq wrote

I don't know anything about him when it comes to alignment. Seems like a lot of unrigorous wasted effort at first glance, but I haven't really had the time or desire to look into it.

The overbearing smugness of Inadequate Equilibria was nauseating. It was unreadable, even for poop reading. The guy is really impressed with himself for believing he came up with theories that have existed for a long time, but that he was too lazy and too disrespectful to research. I will admit there were a couple good snippets in the book (but given the general lack of originality, can we really be sure those snippets were original?).

>When things suck, they usually suck in a way that's a Nash Equilibrium.

There you go, I just saved you a couple hours.

What has EY actually done or built? He seems like one of those guys who wants to be seen as technical or intellectual but hasn't actually built or done anything beyond nebulously, unrigorously, and long-windedly discussing ideas to make himself sound impressive. Kind of like the Yuval Noah Harari of AI.

17

fmai t1_j9sns4a wrote

IMO, even if ML researchers assigned only a 0.1% chance to AI wiping out humanity, the cost of that happening is so unfathomably large that it would be rational to shift a lot of resources from AI capability research to AI safety in order to drive that probability down.
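To make the arithmetic explicit (the 0.1% is the hypothetical figure above; taking roughly 8 billion people as the stake is an added assumption, and it ignores all future generations):

$$\underbrace{0.001}_{P(\text{extinction})} \times \underbrace{8 \times 10^{9}}_{\text{lives at stake}} = 8 \times 10^{6} \ \text{expected lives lost}$$

Even at odds most researchers would call negligible, the expected loss is on the order of millions of lives, which is the whole force of the argument.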

If you tell people that society needs to do a lot less of the thing that is their job, it's no surprise they dismiss your arguments. The same applies to EY to some extent; I think it would be more reasonable to allow for a lot more uncertainty on his predictions, but would he then have the same influence?

Rather than giving too much credit to expert opinions, it's better to look at the evidence from all sides directly. You seem to already be doing that, though :-)

2

ErinBLAMovich t1_j9snb17 wrote

Maybe when an actual expert tells you you're overreacting, you should listen.

Are you seriously arguing that the modern world is somehow corrupted by some magical unified "postmodern philosophy"? We live in the most peaceful time in recorded history; read "Factfulness" for exact figures. And while you're at it, actually read "The Black Swan" instead of throwing that term around, because you clearly need a lesson on measuring probability.

If you think AI will be destructive, outline some plausible and SPECIFIC scenarios for how this could possibly happen, instead of making vague allusions to philosophy with no proof of causality. We could then debate the likelihood of each scenario.

16

VioletCrow t1_j9smth5 wrote

> I simply cannot imagine the real world damage that would be inflicted when (not if) someone starts pumping out "very legitimate sounding but factually false papers on vaccines side-effects".

I mean, just look at the current anti-vaccine movement. You just described the original Andrew Wakefield paper about vaccines causing autism. We don't need AI for this to happen, just a very credulous and gullible press.

8

s3xysteak t1_j9smju9 wrote

I am a software engineering student. I want to use YOLOv8 to build a barbell movement tracker as my homework. Here are more details:

Input: the user uploads a video of a barbell exercise at the frontend (a webpage).

Processing: take the video, run it through YOLOv8, and produce a new video with the barbell's movement track drawn as lines.

Output: the user can get the new video on the webpage.

How do I use YOLOv8 to handle this? Where can I find the model? If I need to train a model myself, where can I find a dataset?

Thanks for your help, guys. You are all gigachads.
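A minimal sketch of the video-processing step, using the `ultralytics` package plus OpenCV. Everything here is an assumption to illustrate the shape of the pipeline: `barbell_yolov8n.pt` stands in for custom-trained weights (the stock COCO models have no barbell class, so you'd fine-tune on labelled frames of your own, or look for a public barbell/weightlifting dataset on a hub like Roboflow Universe), and the file names are placeholders:

```python
# Sketch: detect the barbell in each frame, accumulate the box centers,
# and draw the trajectory as connected line segments on the output video.
import cv2
from ultralytics import YOLO  # pip install ultralytics opencv-python

model = YOLO("barbell_yolov8n.pt")  # hypothetical custom-trained weights
cap = cv2.VideoCapture("input.mp4")
out = None
track = []  # (x, y) centers of the detected barbell so far

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if out is None:
        h, w = frame.shape[:2]
        fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back if FPS is unknown
        out = cv2.VideoWriter("output.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    result = model(frame, verbose=False)[0]
    if len(result.boxes) > 0:
        # detections come back sorted by confidence; keep the best one
        x1, y1, x2, y2 = result.boxes.xyxy[0].tolist()
        track.append((int((x1 + x2) / 2), int((y1 + y2) / 2)))
    for p, q in zip(track, track[1:]):
        cv2.line(frame, p, q, (0, 255, 0), 2)  # path traced so far
    out.write(frame)

cap.release()
if out is not None:
    out.release()
```

The web part is then a separate concern: accept the upload at the frontend, run something like the above on the server, and hand `output.mp4` back to the page.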

1

Appropriate_Ant_4629 t1_j9slydb wrote

> I worry about a lot of bad AI/ML made by interns making decisions that have huge impact, like in the justice system, real estate, etc.

I worry more about those same AIs made by the senior-architects, principal-engineers, and technology-executives rather than the interns. It's those older and richer people whose values are more likely to be archaic and racist.

I think the most dangerous ML models in the near term will be made by highly skilled and competent people whose goals aren't aligned with the bulk of society.

Ones that unfairly send certain people to jail, ones that reinforce unfair lending practices, ones that will target the wrong people even more aggressively than humans target the wrong people today.

35