Recent comments in /f/MachineLearning

drplan t1_jalud65 wrote

Genetic algorithms are still useful for strange objective functions that defy analytical approaches, such as anything based on complex simulations. But somehow it has always been this way.

Nowadays things have changed with generative models for code generation. A few years ago, Genetic Programming (and its many variants) was the only approach for this; now some problems can simply be solved by asking a language model to write the code for xyz.

2

WarAndGeese t1_jalq339 wrote

Don't let it demotivate competitors. They are making money somehow, and planning to make massive amounts more. Hence the space is ripe for tons of competition, and those other companies would also be on track to make tons of money. So jump in, competitors; the market is waiting for you.

−1

PassionatePossum t1_jalnb1q wrote

I think my professor summarized it very well: "Genetic algorithms is what you do when everything else fails."

What he meant by that is that they are very inefficient optimizers. You need to evaluate lots and lots of configurations because you are stepping around more or less blindly in the parameter space, relying only on luck and a few heuristics to improve your fitness. But their advantage is that they will always work as long as you can define some sort of fitness function.

If you can get a gradient, you are immediately more efficient because you already know in which direction you need to step to get a better solution.

But of course there is room for all algorithms. Even when you can do gradient descent, there are problems where it quickly gets stuck in a local optimum. There are approaches for "restarting" the algorithm to find a better local optimum. I'm not that familiar with that kind of optimization, but it is not inconceivable that genetic algorithms have a role to play in such a scenario.
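
To make that concrete, here is a minimal sketch of such a loop (the toy fitness function and hyperparameters are my own illustration, and crossover is omitted for brevity):

    import random

    def fitness(x):
        # Toy black-box objective: no gradient needed, any score works.
        return -(x[0] ** 2 + x[1] ** 2)

    def mutate(x, sigma=0.1):
        # The "blind step" in parameter space: Gaussian noise on each gene.
        return [g + random.gauss(0, sigma) for g in x]

    def evolve(pop_size=50, generations=200):
        pop = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)   # rank by fitness
            parents = pop[: pop_size // 2]        # keep the fitter half
            pop = parents + [mutate(random.choice(parents)) for _ in parents]
        return max(pop, key=fitness)

    print(evolve())  # drifts toward [0, 0], the optimum of the toy fitness

Note how many fitness evaluations this burns compared to a single gradient step; that is exactly the inefficiency being described.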

24

serge_cell t1_jalnarf wrote

The notable difference between GA and other random searches is the crossover operator and, in its theory, the "building blocks" hypothesis. Neither has been confirmed over years (dozens of years) of attempted use of GA.
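
For concreteness, single-point crossover (the operator in question) is just this (a toy sketch of my own):

    import random

    def crossover(parent_a, parent_b):
        # Single-point crossover: splice two parents at a random cut.
        # The "building blocks" hypothesis claims that useful gene
        # groups survive and recombine through this operation.
        point = random.randint(1, len(parent_a) - 1)
        return parent_a[:point] + parent_b[point:]

    print(crossover([0, 0, 0, 0], [1, 1, 1, 1]))  # e.g. [0, 0, 1, 1]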

3

bo_peng OP t1_jalmszp wrote

It's actually quite good at Q&A if you use my prompt templates:

+gen \nExpert Questions & Helpful Answers\nAsk Research Experts\nQuestion:\nXXXXXXXXXXXXXXX?\n\nFull Answer:\n

+gen \nAsk Expert\n\nQuestion:\nXXXXXXXXXXXXXXXX?\n\nExpert Full Answer:\n

+gen \nQ & A\n\nQuestion:\nXXXXXXXXXXXXXXXXX?\n\nDetailed Expert Answer:\n

7

sobe86 t1_jalldpg wrote

Plus, there also needs to be a learnable, nontrivial 'strategy' to take advantage of; otherwise it's not going to beat simulated annealing except on speed. The couple of times I've used it in practice, SA was about as good as we could get performance-wise.
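
For reference, SA is this simple, which is part of why it's such a strong baseline (a minimal sketch with a toy objective; the linear cooling schedule and step size are arbitrary choices of mine):

    import math, random

    def anneal(cost, x, steps=10_000, t0=1.0):
        # Simulated annealing: always accept improvements, and accept
        # worse moves with probability exp(-delta / T) as T cools.
        best = list(x)
        for i in range(steps):
            t = t0 * (1 - i / steps) + 1e-9       # linear cooling
            cand = [g + random.gauss(0, 0.1) for g in x]
            delta = cost(cand) - cost(x)
            if delta < 0 or random.random() < math.exp(-delta / t):
                x = cand
            if cost(x) < cost(best):
                best = list(x)
        return best

    print(anneal(lambda v: v[0] ** 2 + v[1] ** 2, [4.0, -3.0]))  # approaches [0, 0]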

17

FinancialElephant t1_jaliqsh wrote

Genetic optimization might be dead in most cases. I think a lot of the ideas, aside from the optimization algorithms themselves, are still relevant.

I've found GP techniques can yield parsimonious models. A lot of the big research these days is on big models, but GP seems good for small, parsimonious, and elegant models. It's good for low-data regimes, specialized problems, and problems where you have expert knowledge you can encode. Generally speaking, I like working with GP because you end up with a parsimonious and interpretable model (the opposite of a lot of NN research).

In practice, I've found importance sampling methods to work about as well as genetic optimization for optimizing GP trees/grammars, at least in the small amount of work I did with them. I haven't found either method to edge out the other by much, but it could depend on the problem.

I don't know if this is still considered GP (or GA) without a genetic optimization method. However, I think we can say that the notion of optimizing a symbolic tree or grammar was heavily developed within GP, even if today you might use some Monte Carlo optimization method in practice.
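
To make "optimizing a symbolic tree" concrete, here is the kind of representation involved (a toy sketch; the node encoding and random-growth scheme are my own illustration, not any particular GP library):

    import operator
    import random

    OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

    def random_tree(depth=3):
        # Internal nodes are operators; leaves are the variable "x"
        # or a random constant. This is the object GP evolves.
        if depth == 0 or random.random() < 0.3:
            return random.choice(["x", random.uniform(-1, 1)])
        op = random.choice(list(OPS))
        return (op, random_tree(depth - 1), random_tree(depth - 1))

    def evaluate(tree, x):
        if tree == "x":
            return x
        if isinstance(tree, tuple):
            op, left, right = tree
            return OPS[op](evaluate(left, x), evaluate(right, x))
        return tree  # constant leaf

    t = random_tree()
    print(t, evaluate(t, 2.0))

Whether you then search over such trees with crossover/mutation or with some Monte Carlo sampling method is exactly the choice in question; the parsimonious, interpretable representation is the same either way.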

3

visarga t1_jalh1r1 wrote

You don't always need a population of neural networks; it could be a population of prompts, or even a population of problem solutions.

If you're using GA to solve specific coding problems, there is one paper where they use an LLM to generate diffs for code. The LLM was the mutation operator, and they even fine-tuned it iteratively.
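
The resulting loop looks roughly like this (a hypothetical sketch: llm_mutate is a stand-in for whatever model call produces the diff or edited program, not a real API, and score is your problem-specific fitness):

    import random

    def llm_mutate(candidate: str) -> str:
        # Hypothetical stand-in for an LLM call that proposes an
        # edited variant (e.g. a code diff) of the candidate.
        raise NotImplementedError

    def evolve_with_llm(seeds, score, generations=10, pop_size=20):
        # Ordinary GA loop, except the LLM replaces the hand-written
        # mutation operator.
        pop = list(seeds)
        for _ in range(generations):
            pop.sort(key=score, reverse=True)
            parents = pop[: pop_size // 2]
            pop = parents + [llm_mutate(random.choice(parents))
                             for _ in range(pop_size - len(parents))]
        return max(pop, key=score)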

3