Recent comments in /f/MachineLearning

ShowerVagina t1_jamiqb4 wrote

> I had an exhausting number of conversations with confused product managers, engineers and marketing managers on “No, we’re not using ChatGPT”.

They use your conversations for further training, which means that if you use it to help with proprietary code or documentation, you're effectively disclosing that material.

1

TobusFire OP t1_jamcrd2 wrote

This is a reasonable question, but I believe you are misunderstanding. The randomization of parameters in a neural network (I assume you are talking about initialization?) is certainly not the same as mutation in a GA. Mutation occurs randomly, sure, but mutated candidates are then selected for and crossed over, whereas hill-climbing and gradient descent simply move along the gradient and use neither random mutation nor crossover, so they are not genetic.
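To make the distinction concrete, here's a toy sketch (all names and parameters are illustrative, not anyone's production code): the GA keeps a population and applies selection, crossover, and random mutation, while the hill-climber just greedily flips single bits on one candidate.

```python
import random

random.seed(0)

# Toy objective: maximize the number of 1-bits in a 20-bit string ("one-max").
N_BITS = 20

def fitness(bits):
    return sum(bits)

def mutate(bits, rate=0.05):
    # Random mutation: flip each bit independently with small probability.
    return [b ^ 1 if random.random() < rate else b for b in bits]

def crossover(a, b):
    # Single-point crossover: prefix of one parent, suffix of the other.
    point = random.randrange(1, N_BITS)
    return a[:point] + b[point:]

def genetic_algorithm(pop_size=30, generations=60):
    pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half, refill via crossover + mutation.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

def hill_climb(steps=500):
    # Hill-climbing: one candidate, accept a single-bit flip only if it
    # doesn't hurt fitness. No population, no selection over individuals,
    # no crossover -- hence not "genetic".
    current = [random.randint(0, 1) for _ in range(N_BITS)]
    for _ in range(steps):
        i = random.randrange(N_BITS)
        candidate = current[:]
        candidate[i] ^= 1
        if fitness(candidate) >= fitness(current):
            current = candidate
    return current

best_ga = genetic_algorithm()
best_hc = hill_climb()
print(fitness(best_ga), fitness(best_hc))
```

Both solve this trivial objective, but only the GA's update rule involves a population with selection pressure and recombination, which is the point of the distinction.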

1

BigBayesian t1_jam6z7u wrote

Genetic algorithms are good, as you said, when you really understand the space and can come up with a really good candidate generation system. They're okayish (or, the same as everything else) when you have no understanding of the space at all and you're just totally guessing. They can't latch onto a curve in design space as well as methods that follow a simple gradient can. So maybe they're best used for really complex spaces where gradient-based methods don't do well. The kind of places you'd use Gibbs sampling, or general optimization algorithms.

So, basically, they’re useful when you have good feature engineering already done, like many methods that have fallen out of vogue in the age of letting algorithms and data do your feature engineering for you. And they’re as good a shot in the dark as any when standard methods fail and you’ve got no clue how to proceed.

So, yeah, the number of times genetic algorithms are the “right” choice is pretty limited these days.
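As a hedged illustration of that niche (discrete space, no gradient to follow, domain knowledge baked into the operators), here's a toy GA on a tiny traveling-salesman instance. Everything here — the city list, the operator choices, the parameters — is made up for the sketch:

```python
import random
from itertools import permutations

random.seed(1)

# Tiny TSP: a discrete space where "take the gradient" is meaningless.
CITIES = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3), (2, 7)]

def tour_length(tour):
    # Total Euclidean length of the closed tour.
    return sum(((CITIES[a][0] - CITIES[b][0]) ** 2 +
                (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
               for a, b in zip(tour, list(tour[1:]) + list(tour[:1])))

def swap_mutation(tour):
    # Domain-aware mutation: swapping two cities always yields a valid tour.
    i, j = random.sample(range(len(tour)), 2)
    t = tour[:]
    t[i], t[j] = t[j], t[i]
    return t

def order_crossover(a, b):
    # Order crossover (OX): copy a slice of parent a, fill the remaining
    # cities in the order they appear in parent b. Again, the operator
    # encodes knowledge of the space (children are always valid permutations).
    i, j = sorted(random.sample(range(len(a)), 2))
    middle = a[i:j]
    rest = [c for c in b if c not in middle]
    return rest[:i] + middle + rest[i:]

def ga_tsp(pop_size=40, generations=80):
    n = len(CITIES)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=tour_length)
        parents = pop[: pop_size // 2]
        pop = parents + [swap_mutation(order_crossover(random.choice(parents),
                                                       random.choice(parents)))
                         for _ in range(pop_size - len(parents))]
    return min(pop, key=tour_length)

best = ga_tsp()
# Small enough to brute-force the true optimum for comparison.
optimal = min(permutations(range(len(CITIES))), key=tour_length)
print(round(tour_length(best), 2), round(tour_length(list(optimal)), 2))
```

The well-chosen operators (swap mutation, order crossover) are doing the heavy lifting — which is exactly the "good candidate generation system" point above.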

3

M_Alani t1_jam3i7i wrote

It wasn't as bad as it sounds. The fun part was that you had to understand how every little piece of the algorithm works, and the nightmare was implementing all of this with 512 MB of RAM. We didn't have the luxury of trying different solutions.

9

Stakbrok t1_jam0bpq wrote

You can edit what it replied of course (and then hope it builds off of that and keeps that specific vibe going, which always works in the playground) but damn, they locked it down tight. 😅

Even when you edit the primer/setup into something crazy ("you are a grumpy or deranged or whatever assistant") and change some of the things it said to match, it overrides the custom mood you set and goes right back to its ever-serious ChatGPT mode. It even sometimes apologizes for saying something out of character (and by that it means the thing you 'made it say' by editing, so it believes it said that).

5