Recent comments in /f/MachineLearning
CobaltLemur t1_j9w8mp9 wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
The real alignment problem is with us, not AI. The danger isn't that AGI will run amok or indeed do anything unforeseen. Rather, it will do exactly what is asked of it: give a few powerful, unaccountable people even more dangerous ways to wage their useless, petty, destructive squabbles. There is no other reasonable prediction than this: the first serious thing it will be asked to do is think up new weapons. Stuff nobody could have even dreamed of. Think about that.
Imnimo t1_j9w6m9c wrote
Reply to comment by Hyper1on in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
How do you distinguish a behavior which is incentivized by the training objective and behavior that is the result of an optimization shortcoming, and why is it obvious to you that this is the former?
Hyper1on t1_j9w5k2x wrote
Reply to comment by Imnimo in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I mean, it seems like the obvious explanation? That the model's behaviour is incentivised by its training objective. It also seems very plausible: we know that language models at large scale (even if not RL finetuned) exhibit a wide variety of emergent behaviours which you might not guess are motivated by next token prediction, but evidently are instrumental to reducing the loss. This is not necessarily overfitting: the argument is simply that certain behaviour unanticipated by the researchers is incentivised when you minimise the loss function. Arguably, this is a case of goal misgeneralisation: https://arxiv.org/abs/2105.14111
No_Principle9257 t1_j9w53cf wrote
Reply to comment by ZestyData in [P] Minds - A JS library to build LLM powered backends and workflows (OpenAI & Cohere) by gsvclass
Usual js people
leondz t1_j9w1cuu wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Nah, there are a bunch of reasoning steps missing. Conjecture on conjecture on conjecture is tough to work with.
Imnimo t1_j9vzhgy wrote
Reply to comment by Hyper1on in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
So you would argue that the behavior highlighted in the post leads to either a lower loss on language modeling or a lower loss on RL finetuning than the intended behavior? That strikes me as very unlikely.
Kroutoner t1_j9vzb0b wrote
There are scenarios where you would be totally fine not using a validation set, or even any sort of sample splitting whatsoever, but you definitely need to know what you’re doing and know why it’s okay that you’re not using them. If you can’t provide an explicit justification for why it’s okay you’re probably best off using a validation set.
[deleted] OP t1_j9vw5se wrote
Reply to comment by MinaKovacs in [D] Got invited to an ML final interview - have zero statistics/math background by [deleted]
[deleted]
mosquitoLad t1_j9vw4lm wrote
Unity using its graphics encoder (C#), SDL2 + ffmpeg (C++), Ogre3D (I believe it uses C++), Blender (Python), or a mixture of the lot. You can render individual images or record a video.
[deleted] OP t1_j9vw44q wrote
Reply to comment by AquaBadger in [D] Got invited to an ML final interview - have zero statistics/math background by [deleted]
[deleted]
Hyper1on t1_j9vuyzm wrote
Reply to comment by Imnimo in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
The hypothesis is precisely that the failure mode of Bing Chat comes from it being too strong, not too weak. That is, if prompted even in quite vague ways it can exhibit instrumentally convergent behaviour like threatening you, even though this was obviously not the designer's objective, and this behaviour occurs as a byproduct of being highly optimised to predict the next word (or an RL finetuning training objective). This is obviously not possible with, say, GPT-2, because GPT-2 does not have enough capacity or data thrown at it to do that.
[deleted] t1_j9vup6v wrote
Reply to comment by mosquitoLad in [D] What is the correct term for a non-GAN system where two or more networks compete as part of training? by mosquitoLad
[deleted]
Hyper1on t1_j9vudgi wrote
Reply to comment by ArnoF7 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Just wanted to point out that even if we restrict ourselves purely to an agent that can only interact with the world through the internet, code, and natural language, that does not address the core AI alignment arguments of instrumental convergence etc being dangerous.
mosquitoLad OP t1_j9vtp3b wrote
Reply to comment by [deleted] in [D] What is the correct term for a non-GAN system where two or more networks compete as part of training? by mosquitoLad
Thanks. It's not so big as a seminar. I'm in a public speaking course where each primary speech has to fit a certain criterion, this one being Educational. I'm a senior CS major, but the majority are freshman non-CS students, so I have to make sure whatever I say is both accurate and explained in simpler terms (less 3Blue1Brown, more Code Bullet I guess).
[deleted] t1_j9vse6h wrote
Reply to comment by mosquitoLad in [D] What is the correct term for a non-GAN system where two or more networks compete as part of training? by mosquitoLad
[deleted]
mosquitoLad OP t1_j9vrjsb wrote
Reply to comment by [deleted] in [D] What is the correct term for a non-GAN system where two or more networks compete as part of training? by mosquitoLad
Completely fair
[deleted] t1_j9vqstv wrote
Reply to comment by mosquitoLad in [D] What is the correct term for a non-GAN system where two or more networks compete as part of training? by mosquitoLad
[deleted]
Additional-Escape498 t1_j9vqmlh wrote
Reply to comment by osedao in [D] Is validation set necessary for non-neural network models, too? by osedao
For a small dataset, still use cross-validation, but use k-fold cross-validation so you don't divide the dataset into 3, just into 2, and then the k-fold subdivides the training set. Sklearn has a built-in class that makes this simple. Since you have a small dataset and are using fairly simple models, I'd suggest setting k >= 10.
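A minimal sketch of that split-into-2-then-k-fold setup with scikit-learn (the iris dataset and logistic regression here are just stand-ins for your own data and model):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score

X, y = load_iris(return_X_y=True)

# Divide into 2: hold out a final test set; no separate validation split needed
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# k-fold (k=10) subdivides the training set to estimate generalization
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X_train, y_train, cv=10)
print(scores.mean(), scores.std())
```

Each of the 10 folds takes a turn as the held-out validation fold, so every training example contributes to validation exactly once; the test set is touched only at the very end.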
mosquitoLad OP t1_j9vq86y wrote
Reply to comment by filipposML in [D] What is the correct term for a non-GAN system where two or more networks compete as part of training? by mosquitoLad
Nash Equilibrium is a new term for me; and you are right, that seems like a logical end state. I do not understand what a gradient is in this context; would this terminology apply when information is being processed by a series of agents, each having a direct influence on the quality of the output of other agents?
AquaBadger t1_j9vpso1 wrote
Reply to comment by [deleted] in [D] Got invited to an ML final interview - have zero statistics/math background by [deleted]
What does the job description say? What's the title?
maxToTheJ t1_j9vpq25 wrote
Reply to comment by Appropriate_Ant_4629 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
>The subcontractors for that autonomous F-16 fighter from the news last month are not underpaid, nor are the Palantir guys making the software used to target who autonomous drones hit, nor are the ML models guiding real-estate investment corporations that bought a quarter of all homes this year.
You are conflating the profits of the corporations with the wages of the workers. You're also conflating "Investment Banking" with "Retail Banking": the person making lending models isn't getting the same TC as someone at Two Sigma.
None of the places you list (retail banking, defense) are the highest-comp employers. They may be massively profitable, but that doesn't necessarily translate to wages.
mk22c4 t1_j9vpea6 wrote
Reply to [D] Got invited to an ML final interview - have zero statistics/math background by [deleted]
If you pass the interview, you would be no worse than other engineers who set the hiring bar.
VirtualHat t1_j9vnc3y wrote
Reply to comment by icedrift in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Yes, it's worse than this too. We usually associate well-written text with accurate information. That's because, generally speaking, most people who write well are highly educated and have been taught to be critical of their own writing.
Text generated by large language models is atypical in that it's written like an expert but is not critical of its own ideas. We now have an unlimited amount of well-written, poor-quality information, and this is going to cause real problems.
SleekEagle t1_j9vl7r3 wrote
Reply to comment by wind_dude in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I don't think anyone believes it will be LLMs that undergo an intelligence explosion, but they could certainly be a piece of the puzzle. Look at how much progress has been made in the past 10 years alone - imo it's not unreasonable to think that the alignment problem will be a serious concern in the next 30 years or so.
In the short term, though, I agree that people doing bad things with AI is much more likely than an intelligence explosion.
Whatever anyone's opinion, I think the fact that the views of very smart and knowledgeable people run the gamut is a testament to the fact that we need to dedicate serious resources to ethical AI, beyond the disclaimers at the end of every paper noting that models may contain biases.
sticky_symbols t1_j9w9b6e wrote
Reply to comment by MinaKovacs in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
There's obviously intelligence under some definitions. It meets a weak definition of AGI since it reasons about a lot of things almost as well as the average human.
And yes, I know how it works and what its limitations are. It's not that useful yet, but discounting it entirely is as silly as thinking it's the AGI we're looking for.