Recent comments in /f/MachineLearning

rpnewc t1_jawxrjh wrote

Yes, ChatGPT does not have any real idea of what a trophy is, what a suitcase is, or what brown is. But it has seen a lot of sentences containing these words, and hence has picked up some of their attributes. So when you ask these questions, sometimes (due to random sampling) it picks the correct noun as the answer, and other times it picks the wrong one. Ask it a logic puzzle with ten people as characters and see how far its reasoning capability actually goes.

7

ACH-S t1_jawnajq wrote

I'm not sure whether you mean genetic algorithms or evolutionary algorithms, or if those terms are interchangeable for you (often they are not). Anyway, a field that relies on them heavily is Quality-Diversity (https://quality-diversity.github.io/; there is a nice list of papers there). I would also recommend having a look at the proceedings of the GECCO conference (e.g. https://dl.acm.org/doi/proceedings/10.1145/3512290). The conference is much smaller than NeurIPS/ICML/etc., and the research quality tends to be a bit more variable, but you'll see that evolutionary algorithms, and genetic ones in particular, are far from dead.

The idea that "designing an experiment for a genetic algorithm requires a sufficient prior" doesn't sound correct to me. Generally you turn to these methods precisely when you don't have any reliable priors on the search space (as other comments have pointed out; see CMA-ES as an example). I'll add ES (https://arxiv.org/abs/1703.03864) as another useful example that I've personally often used to simplify meta-learning problems.
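For intuition, here is a minimal sketch of the ES-style search-gradient estimator in the spirit of that paper: sample Gaussian perturbations of the parameters, score each perturbed candidate, and move the parameters along the fitness-weighted average of the noise. The toy objective, population size, and step sizes below are hypothetical choices for illustration, not anything from the paper.

```python
import random

# Hypothetical toy objective: maximize the negative squared distance to a target.
def fitness(theta, target=(3.0, -2.0)):
    return -sum((t - g) ** 2 for t, g in zip(theta, target))

def evolution_strategies(n_iters=300, pop=50, sigma=0.1, lr=0.03, dim=2, seed=0):
    rng = random.Random(seed)
    theta = [0.0] * dim
    for _ in range(n_iters):
        noises, scores = [], []
        for _ in range(pop):
            # Perturb the current parameters with isotropic Gaussian noise.
            eps = [rng.gauss(0.0, 1.0) for _ in range(dim)]
            cand = [t + sigma * e for t, e in zip(theta, eps)]
            noises.append(eps)
            scores.append(fitness(cand))
        # Standardize scores so the update is invariant to fitness scale.
        mean = sum(scores) / pop
        std = (sum((s - mean) ** 2 for s in scores) / pop) ** 0.5 or 1.0
        adv = [(s - mean) / std for s in scores]
        # Gradient estimate: noise directions weighted by (standardized) fitness.
        for d in range(dim):
            grad = sum(a * n[d] for a, n in zip(adv, noises)) / (pop * sigma)
            theta[d] += lr * grad
    return theta
```

Note that nothing here requires a prior on the search space: the only structural choices are the noise scale `sigma` and the learning rate, which is part of why this kind of method is convenient for black-box and meta-learning settings.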

1