Recent comments in /f/MachineLearning

SeBAGeNetiC t1_j9uu6j1 wrote

I need to categorize products according to their product names.
I have a huge amount of data hand-categorized by humans.
I'm a Python developer, but my knowledge in this area is zero.
Do you have any recommendations on where to start? Any topic, reading, or resource is welcome.
Of course I've been researching on my own, and it seems multiclass classification is what I need, but I'd like some extra opinions and pointers.
Are there any cloud services I could leverage by paying? This is also an option.
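
For a starting point: a minimal supervised baseline, sketched with scikit-learn and assuming a hypothetical `products.csv` with `product_name` and `category` columns, might look like this:

```python
# A minimal multiclass text-classification baseline with scikit-learn.
# "products.csv" and its "product_name"/"category" columns are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("products.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["product_name"], df["category"], test_size=0.2, random_state=42
)

# Character n-grams tend to cope well with short, noisy product names.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

If a managed service is preferred, offerings like AWS Comprehend's custom classification or Google Cloud's Vertex AI provide hosted multiclass text classification, and a large hand-labeled dataset is exactly what they expect as input.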

1

Imnimo t1_j9upa4x wrote

My point is that this isn't even misalignment in the first place, any more than an ImageNet classifier with 40% accuracy is misaligned. Misalignment is supposed to mean that a model's learned objective differs from the human designer's objective. In their desperation to see threats everywhere, EY et al. resort to characterizing poor performance as misalignment.

1

wind_dude t1_j9up1ux wrote

> Until the tools start exhibiting behavior that you didn't predict and in ways that you have no control over.

LLMs already behave in ways we don't expect. But they are much more than a hop, a skip, a jump, and 27 hypothetical leaps away from being out of our control.

Yes, people will use AI for bad things, but that's not an inherent property of AI; that's an inherent property of humanity.

1

Top-Perspective2560 t1_j9umv6r wrote

My research is in AI/ML for healthcare. One thing people forget is that everyone is concerned about AI/ML, and no one is happy to completely delegate decision-making to an ML model. Even where we have models capable of making accurate predictions, there are so many barriers to trust, e.g. the black-box problem and a general lack of explainability, that these models are relegated to decision support at best and completely ignored at worst. I actually think that's a good thing to an extent: the barriers to trust are, for the most part, absolutely valid and rational.

However, I think the idea that these models are just going to be running amok is a bit unrealistic; people are generally very cautious of AI/ML, laymen especially.
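
To make the decision-support point concrete: here's a minimal sketch of the kind of explainability tooling involved, using the SHAP library on a hypothetical tabular clinical dataset (the file and column names are assumptions, not from any real project):

```python
# Sketch: per-prediction feature attributions as decision support, using the
# SHAP library. "patients.csv" and its columns are hypothetical.
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

df = pd.read_csv("patients.csv")
X, y = df.drop(columns=["outcome"]), df["outcome"]

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# For a binary gradient-boosted model, TreeExplainer returns one additive
# log-odds contribution per feature, per prediction - something a clinician
# can inspect instead of having to trust a bare probability.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X.iloc[:1])
print(dict(zip(X.columns, contributions[0])))
```

Attributions like these don't fully solve the black-box problem, but they're roughly the state of what "decision support at best" looks like in practice.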

1

Jinoc t1_j9umdg2 wrote

It’s an example of noticing the misalignment, but the misalignment is only a problem insofar as it is a symptom of the deeper problem I mentioned.

EY was very explicit that he doesn’t think GPT-style models are any threat whatsoever (the proliferation of convincing but fake text is possibly a societal problem, but it’s not an extinction risk).

3

_atswi_ OP t1_j9ukzlk wrote

That's a good point.

What sounds like an open problem is how to get these LLMs to "quantify" that about themselves, the same way humans do. It's also interesting how that relates to the broader question of sentience and consciousness.
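
One crude proxy in use today is the probability the model itself assigns to its output. Here's a minimal sketch with Hugging Face transformers, using GPT-2 purely as a small stand-in model:

```python
# Sketch: token-level log-probabilities as a crude "confidence" proxy.
# GPT-2 is only an assumed stand-in for a modern LLM here.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt, answer = "The capital of France is", " Paris"
ids = tok(prompt + answer, return_tensors="pt").input_ids
n_answer = len(tok(answer).input_ids)

with torch.no_grad():
    logits = model(ids).logits

# Log-probability the model assigns to each answer token given its prefix.
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
positions = torch.arange(ids.shape[1] - 1)
answer_lp = log_probs[positions, ids[0, 1:]][-n_answer:]
print(f"answer log-prob: {answer_lp.sum().item():.2f}")  # higher = more "confident"
```

Raw log-probabilities are often poorly calibrated, though, which is part of why this still counts as an open problem rather than a solved one.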

1

Top-Perspective2560 t1_j9ukzgc wrote

As others have said, the idea of being concerned with AI ethics and safety and taking it seriously is a good thing.

The problem is - and this is Just My Opinion™ - that people like EY are making what basically amount to spurious speculations about completely nebulous topics such as AGI, with very little to show that they actually understand, in technical detail, where AI/ML currently stands or what the current SOTA is. EY in particular seems to have jumped straight to those topics without any grounding in technical AI/ML research. I can't help but feel that, on some level at least, those topics were chosen because it's easy to grab headlines and get into the media by making statements about them.

I'm not saying it's a bad thing to have people like EY around, that he or others like him are bad actors in any way, or that they shouldn't continue doing what they're doing. They may well be correct, and their ideas aren't necessarily explicitly wrong. It's just very difficult to genuinely take what they say seriously or make any practical decisions based on it, because so much of it is speculative. It reminds me a bit of Asimov's Laws of Robotics: they seemed to make a lot of sense decades ago, before anyone knew how the development of AI/ML would pan out, but in reality they're just "it would be great if things worked this way", with no realistic plan for implementing them or even any way to know whether they would actually be relevant.

The other thing is, as other people have pointed out, that there are immediate and real problems with AI/ML as it stands, and solving those problems or avoiding disaster requires more than speculative statements. I think the lack of will to address those issues among the biggest names in AI/ML ethics and safety is quite conspicuous.

Edit: Added the bit about Asimov's Laws of Robotics, which occurred to me after I made the post.

3

memberjan6 t1_j9uko15 wrote

Here's how I would score this passage based on the nine emotions:

- Anger: 0 - There's no indication of anger in this statement.
- Fear: 3 - The passage expresses a sense of worry and concern about the social ramifications of AI that pass as humans, which may reflect some level of fear.
- Joy: 0 - There's no expression of joy in this statement.
- Sadness: 0 - There's no indication of sadness in this statement.
- Disgust: 0 - There's no expression of disgust in this statement.
- Surprise: 0 - There's no indication of surprise in this statement.
- Trust: 1 - The passage expresses a concern about a crisis of trust and authenticity in the US, which may reflect some level of trust.
- Anticipation: 0 - There's no expression of anticipation in this statement.
- Love: 0 - There's no expression of love in this statement.

Please keep in mind that these scores are subjective and based on my interpretation of the text. Different people may score the passage differently based on their own perspectives and interpretations.

Source: chatgpt
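
For anyone who wants to reproduce this kind of scoring programmatically, here's a rough sketch against the OpenAI API (assumes the Python SDK v1+; the model name is illustrative):

```python
# Sketch: asking a chat model to score a passage on the nine emotions above.
# Assumes the OpenAI Python SDK v1+; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EMOTIONS = ["anger", "fear", "joy", "sadness", "disgust",
            "surprise", "trust", "anticipation", "love"]

def score_emotions(passage: str) -> str:
    prompt = (
        "Score the following passage from 0 to 5 on each of these emotions, "
        f"with a one-sentence justification each: {', '.join(EMOTIONS)}.\n\n"
        f"Passage: {passage}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(score_emotions("AI that passes as human raises a crisis of trust."))
```

As noted above, the scores are subjective; two runs can easily produce different numbers.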

0

mosquitoLad OP t1_j9uiffv wrote

Are you referring to this? "Generative Adversarial Networks are Special Cases of Artificial Curiosity (1990) and also Closely Related to Predictability Minimization (1991)"

https://arxiv.org/abs/1906.04493

Looking up IPM verbatim turned up a Reddit post linking to that.

3

gt33m t1_j9ui6id wrote

This is eerily similar to the “guns don’t kill people” argument.

It should be undeniable that AI gives nefarious actors a next-generation tool that lowers the cost of disruption. That disruption can come in various forms: disinformation, cybercrime, fraud, etc.

3