Recent comments in /f/MachineLearning
icedrift t1_j9uuocn wrote
Reply to comment by darthmeck in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Appreciate it! Articulation isn't a strong suit of mine but I guess a broken clock is right twice a day
[deleted] t1_j9uulfb wrote
SeBAGeNetiC t1_j9uu6j1 wrote
Reply to [D] Simple Questions Thread by AutoModerator
I need to categorize products according to their product name.
I have a huge amount of data hand categorized by humans.
I am a Python developer but my knowledge on this matter is zero.
Do you have any recommendations on where to start? Any topic, reading, or resource is welcome.
Of course I've been researching this myself, and it seems multiclass classification is what I need, but I'd like some extra opinions and pointers.
Are there any cloud services I could leverage by paying? This is also an option.
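For what it's worth, a minimal baseline for this kind of multiclass text classification might look like the sketch below. It assumes scikit-learn and pandas, and the file and column names ("products.csv", "product_name", "category") are made up for illustration:

```python
# Hypothetical baseline: TF-IDF character n-grams + logistic regression
# over hand-labeled product names. File/column names are placeholders.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("products.csv")  # columns: product_name, category

X_train, X_test, y_train, y_test = train_test_split(
    df["product_name"], df["category"], test_size=0.2, random_state=42
)

# Character n-grams cope reasonably well with short, noisy product names.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

Managed cloud AutoML text-classification services offer roughly the same idea without writing the model code yourself.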
sbb_ml t1_j9uu0jo wrote
An old one
darthmeck t1_j9utkza wrote
Reply to comment by icedrift in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Very well articulated.
Imnimo t1_j9upa4x wrote
Reply to comment by Jinoc in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
My point is that this isn't even misalignment in the first place, any more than an ImageNet classifier with 40% accuracy is misaligned. Misalignment is supposed to be when a model's learned objective is different from the human designer's objective. In their desperation to see threats everywhere, EY et al. resort to characterizing poor performance as misalignment.
wind_dude t1_j9up1ux wrote
Reply to comment by SleekEagle in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
> Until the tools start exhibiting behavior that you didn't predict and in ways that you have no control over.
LLMs already do behave in ways we don't expect, but they are much more than a hop, a skip, a jump, and 27 hypothetical leaps away from being out of our control.
Yes, people will use AI for bad things, but that's not an inherent property of AI, that's an inherent property of humanity.
filipposML t1_j9uokeu wrote
Reply to comment by mosquitoLad in [D] What is the correct term for a non-GAN system where two or more networks compete as part of training? by mosquitoLad
It's what the author of those papers wanted as a name for GANs. Arguably it is more intuitive in an RL context, although I cannot speak to the equivalence as I am not super familiar with GANs.
Top-Perspective2560 t1_j9umv6r wrote
Reply to comment by maxToTheJ in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
My research is in AI/ML for healthcare. One thing people forget is that everyone is concerned about AI/ML, and no one is happy to completely delegate decision-making to an ML model. Even where we have models capable of making accurate predictions, there are so many barriers to trust, e.g. the black-box problem and a general lack of explainability, which relegate these models to decision support at best and being completely ignored at worst. I actually think that's a good thing to an extent - the barriers to trust are, for the most part, absolutely valid and rational.
However, the idea that these models are just going to be running amok is a bit unrealistic, I think - people are generally very cautious of AI/ML, especially laymen.
Simcurious t1_j9umv53 wrote
Reply to comment by fmai in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
That's Pascal's wager and could be used to justify belief in hell/god.
icedrift t1_j9umeg4 wrote
Jinoc t1_j9umdg2 wrote
Reply to comment by Imnimo in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
It’s an example of noticing the misalignment, but the misalignment is only a problem insofar as it is a symptom of the deeper problem I mentioned.
EY was very explicit that he doesn’t think GPT-style models are any threat whatsoever (the proliferation of convincing but fake text is possibly a societal problem, but it’s not an extinction risk)
Imnimo t1_j9ulhb8 wrote
Reply to comment by Jinoc in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I don't see how it doesn't. Is this not an example of his followers wringing their hands over so-called misalignment that's really just poor performance?
_atswi_ OP t1_j9ukzlk wrote
Reply to comment by pyepyepie in [D] Best Way to Measure LLM Uncertainty? by _atswi_
That's a good point.
What sounds like an open problem is how to get these LLMs to "quantify" that themselves, the same way humans do. It's also interesting how that relates to the broader question of sentience and consciousness.
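As a rough illustration of what "quantifying" could mean today, here's a minimal sketch of one common proxy: the average log-probability a causal LM assigns to a piece of text, taken from its own logits. It assumes the Hugging Face transformers library; the model name is arbitrary and this is a crude baseline, not a settled answer:

```python
# Hypothetical uncertainty proxy: average per-token log-probability
# under a causal language model (higher = the model finds the text more likely).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def avg_token_logprob(text: str) -> float:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels set, the model returns the mean negative log-likelihood per token.
        out = model(**inputs, labels=inputs["input_ids"])
    return -out.loss.item()

print(avg_token_logprob("The capital of France is Paris."))
print(avg_token_logprob("The capital of France is banana."))
```

Token-level likelihood is only a proxy for model confidence, not for truthfulness, which is part of why this still reads as an open problem.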
Top-Perspective2560 t1_j9ukzgc wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
As others have said, the idea of being concerned with AI ethics and safety and taking it seriously is a good thing.
The problem is - and this is Just My Opinion™ - that people like EY are making what basically amount to spurious speculations about completely nebulous topics such as AGI, with very little to show that they actually understand, in technical detail, where AI/ML currently stands or what the current SOTA is. EY in particular seems to have jumped straight to those topics without any grounding in technical AI/ML research. I can't help but feel that, on some level at least, those topics were chosen because it's easy to grab headlines and get into the media by making statements about them.
I'm not saying it's a bad thing to have people like EY around, that he or others like him are bad actors in any way, or that they shouldn't continue doing what they're doing. They may well be correct, and their ideas aren't necessarily explicitly wrong. It's just that it's very difficult to genuinely take what they say seriously or make any practical decisions based on it, because a lot of it is so speculative. It reminds me a bit of Asimov's Laws of Robotics - they seemed to make a lot of sense decades ago, before anyone knew how the development of AI/ML would pan out, but in reality they're really just "it would be great if things worked this way" with no practical, realistic plan for how to implement them, or even any way to know if they would actually be relevant.
The other thing is, as other people have pointed out, that there are immediate and real problems with AI/ML as it stands, and solving those problems or avoiding disaster requires more than speculative statements. I think the absence of any will among the biggest names in AI/ML ethics and safety to address those issues is quite conspicuous.
Edit: Added a bit about Asimov's Laws of Robotics which occurred to me after I made the post.
memberjan6 t1_j9uko15 wrote
Reply to comment by icedrift in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Here's how I would score this passage based on the nine emotions:
Anger: 0 - There's no indication of anger in this statement.
Fear: 3 - The passage expresses a sense of worry and concern about the social ramifications of AI that pass as humans, which may reflect some level of fear.
Joy: 0 - There's no expression of joy in this statement.
Sadness: 0 - There's no indication of sadness in this statement.
Disgust: 0 - There's no expression of disgust in this statement.
Surprise: 0 - There's no indication of surprise in this statement.
Trust: 1 - The passage expresses a concern about a crisis of trust and authenticity in the US, which may reflect some level of trust.
Anticipation: 0 - There's no expression of anticipation in this statement.
Love: 0 - There's no expression of love in this statement.
Please keep in mind that these scores are subjective and based on my interpretation of the text. Different people may score the passage differently based on their own perspectives and interpretations.
Source: chatgpt
Jinoc t1_j9ujftx wrote
Reply to comment by Imnimo in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Yes? I fail to see how that goes against what I’m saying.
nikola-b t1_j9ujdux wrote
Reply to comment by tyras_ in [D] Large Language Models feasible to run on 32GB RAM / 8 GB VRAM / 24GB VRAM by head_robotics
There was an auth bug in the code. Sorry about that. Please try again now.
mosquitoLad OP t1_j9uiffv wrote
Reply to comment by filipposML in [D] What is the correct term for a non-GAN system where two or more networks compete as part of training? by mosquitoLad
Are you referring to this? "Generative Adversarial Networks are Special Cases of Artificial Curiosity (1990) and also Closely Related to Predictability Minimization (1991)"
https://arxiv.org/abs/1906.04493
Looking up IPM verbatim turned up a reddit post linking to that.
gt33m t1_j9ui6id wrote
Reply to comment by terath in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
This is eerily similar to the “guns don’t kill people” argument.
It should be undeniable that AI provides a next-generation tool that lowers the cost of disruption for nefarious actors. That disruption can come in various forms - disinformation, cybercrime, fraud, etc.
mosquitoLad OP t1_j9uhqag wrote
Reply to comment by Optimal-Asshole in [D] What is the correct term for a non-GAN system where two or more networks compete as part of training? by mosquitoLad
Thanks, that helps. I need to give a brief educational presentation about the subject, so I didn't want to throw out the wrong terminology.
filipposML t1_j9uhfac wrote
Reply to [D] What is the correct term for a non-GAN system where two or more networks compete as part of training? by mosquitoLad
Inverse predictability minimisation might be a term if you can get around the controversy. As a bonus, you might make a certain German very happy.
t35t0r t1_j9ugvnl wrote
Reply to comment by saffronanas in [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
there's a message under it now
Jinoc t1_j9uvzpb wrote
Reply to comment by Imnimo in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
But that’s a semantic disagreement over the proper use of "misalignment"; the substantive risk posed by the incentives of an AI arms race is the problem.