Recent comments in /f/MachineLearning

TobusFire OP t1_janzia9 wrote

Agreed. That being said, I think the caveat is that you still need enough understanding of the state space to design good mutation, crossover, and fitness functions, which can easily add a lot of overhead. In contrast, other cool methods like swarm optimization and ant colony optimization are also promising and in some ways simpler. A rough sketch of what I mean by that overhead:
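Here's a minimal toy GA sketch (Python; the problem, operator choices, and all names are my own assumptions, not from any library). Even for a trivial bit-string target, you have to commit to a fitness function, a mutation operator, and a crossover operator, and each one is a design decision:

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(genome):
    # Problem-specific: count bits matching the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Problem-specific: flip each bit with small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Problem-specific: one-point crossover.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

print(max(population, key=fitness))
```

Swap in a non-trivial state space and each of those three functions becomes real design work.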

1

keepthepace t1_janzb1v wrote

Congratulations! The world desperately needs what you are doing! Was thinking about joining a while ago but got distracted by image-oriented research.

> As access to LLMs has increased, our research has shifted to focus more on interpretability, alignment, ethics, and evaluation of AIs.

Does this mean EleutherAI is no longer working on big language models?

39

TobusFire OP t1_janyxnp wrote

> aren't RL agent-competition approaches (i.e. simulating games between agents with different parameter values and iterating on these agents) a form of genetic algorithms?

Hmm, I hadn't thought about RL like that. I guess the signal from a reward function based on competition could be considered "fitness", and then perhaps some form of cross-over is done in the way we iterate on and update the agents. Interesting thought.
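A rough sketch of that analogy (Python; the game, names, and update rule are all made up for illustration): fitness comes from head-to-head competition rather than an explicit objective, and the next "generation" perturbs the winners' parameters.

```python
import random

def play_match(params_a, params_b):
    # Stand-in for a simulated game; returns True if agent A wins.
    return random.random() < 0.5

def competitive_fitness(agent, population, n_matches=10):
    # "Fitness" = number of wins against randomly drawn opponents.
    opponents = random.sample(population, n_matches)
    return sum(play_match(agent, opp) for opp in opponents)

def perturb(params, scale=0.05):
    # "Mutation": jitter each parameter slightly.
    return [p + random.gauss(0, scale) for p in params]

population = [[random.gauss(0, 1) for _ in range(4)] for _ in range(20)]
for generation in range(50):
    ranked = sorted(population,
                    key=lambda a: competitive_fitness(a, population),
                    reverse=True)
    winners = ranked[:10]
    population = winners + [perturb(random.choice(winners)) for _ in range(10)]
```

Whether that counts as "genetic" probably hinges on whether you see the perturbation step as mutation without crossover.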

1

crappleIcrap t1_janyjbj wrote

from the abstract "Rather than attempting to extract meaning from the many complex and abstract definitions of animal sentience, we searched over two decades of scientific literature using a peer-reviewed list of 174 keywords."

how is this evidence that the definition of sentience is perfectly well defined and not at all abstract? you accuse him of not reading it, but did you?

it is a philosophical argument, not a scientific or mathematical one.

you simply hold the philosophy that due to the qualia of life argument, sentience cannot be an emergent property. I and many others disagree.

pretending this is a mathematical or scientific argument and that the science is settled that you are right is highly disingenuous.

you may be an expert on neural networks but that is like being an expert on car manufacturing and thinking that means you will be a better racecar driver than racecar drivers.

I also work with neural networks and fully understand the mathematics behind them, but that does not mean I know anything about sentience or the prerequisites for creating a sentient being.

many arguments used against ai being sentient could easily be applied to humans

"it is just math, it doesn't actually know what it is doing"

do you think each human neuron behaves unpredictably and each has its own sentience? as far as we can tell, human neurons are deterministic and therefore "just math". true, neurons do not use statistical regression, but nobody ever proved that brains are the only possible way to produce sentience, or that human brains are the most optimized way possible. that is like expecting walking to be the most efficient method of moving things.

"it doesn't actually remember things, it rereads the entire text every time/ it isn't always training"

humans store information in their brains. do you believe that every neuron and every part of the brain remembers these things, or is it possible that when remembering anything, one part of the brain needs to ask another part what is remembered and then process that information again?

and do you expect your brain to remember and make permanent changes every nanosecond of every day, or do you expect some things to make changes and others not, with some amount of time required for that to happen? so why is it so hard to accept that sentience may be possible with changes only being made every month or year or longer? this argument is essentially that it cannot be sentient unless it is as fast as a human.

are there any more "i'm a scientist, therefore I must know more about philosophy than philosophers" takes that i am missing?

2

TobusFire OP t1_janxqya wrote

My thoughts too. Simulated annealing and similar strategies intuitively seem better in most cases where traditional gradient methods aren't applicable. I can imagine a handful of cases where genetic algorithms MIGHT be better, but even then I'm not fully convinced, and it just feels gimmicky.
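Part of the appeal is how little machinery simulated annealing needs. A minimal sketch (Python; the toy objective and cooling schedule are my own assumptions): one proposal distribution and one temperature schedule, versus a GA's population, fitness, mutation, and crossover.

```python
import math
import random

def f(x):
    # Toy objective to minimize.
    return x * x

x = random.uniform(-10, 10)
temperature = 1.0
while temperature > 1e-3:
    candidate = x + random.gauss(0, 1)
    delta = f(candidate) - f(x)
    # Always accept improvements; accept worse moves with a
    # probability that shrinks as the temperature cools.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature *= 0.99

print(x)  # should land near the minimum at 0
```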

1

lifesthateasy t1_janudsp wrote

So you want to debate my comment on sentience, and you prove your point by linking a wiki article about consciousness?

Ah, I see you haven't gotten past the abstract. Let me point you to some of the more interesting points: "Despite being subject to debate, descriptions of animal sentience, albeit in various forms, exist throughout the scientific literature. In fact, many experiments rely upon their animal subjects being sentient. Analgesia studies for example, require animal models to feel pain, and animal models of schizophrenia are tested for a range of emotions such as fear and anxiety. Furthermore, there is a wealth of scientific studies, laws and policies which look to minimise suffering in the very animals whose sentience is so often questioned."

So your base idea of questioning sentience just because it's subjective is a paradox that can be resolved in one of two ways. Either you accept sentience and continue studying it, or you say it can't be proven, and then you can throw psychology out the window too. By your logic, you can't prove to me that you exist, and if you can't even prove that, why do science at all? We don't assume pain etc. are proxies for sentience; we have a definition for sentience that we made up to describe this phenomenon we all experience. "You can't prove something that we all feel and thus made up a name for, because we can only feel it" kinda makes no sense. We even have specific criteria for it: https://www.animal-ethics.org/criteria-for-recognizing-sentience/

1

currentscurrents t1_janr9qo wrote

>"Sentience is the capacity to experience feelings and sensations". Scientists use this to study sentience in animals for example (not in rocks, because THEY HAVE NONE).

How do you know whether or not something experiences feelings and sensations? These are internal experiences. I can build a neural network that reacts to damage as if it is in pain, and with today's technology it could be extremely convincing. Or a locked-in human might experience sensations, even though we wouldn't be able to tell from the outside.

Your metastudy backs me up. Nobody's actually studying animal sentience (because it is impossible to study); all the studies are about proxies like pain response or intelligence and they simply assume these are indicators of sentience.

>What we found surprised us; very little is actually being explored. A lot of these traits and emotions are in fact already being accepted and utilised in the scientific literature. Indeed, 99.34% of the studies we recorded assumed these sentience related keywords in a number of species.

Here's some reading for you:

https://en.wikipedia.org/wiki/Hard_problem_of_consciousness
https://en.wikipedia.org/wiki/Mind%E2%80%93body_problem

People much, much smarter than either of us have been flinging themselves at this problem for a very long time with no progress, and not even an idea of how progress might be made.

2

lifesthateasy t1_janoimg wrote

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4494450/

Here's a metastudy to catch you up with animal sentience. Sentience has requirements, none of which a rock fits.

No, it's not. That's like saying you don't understand why 1+1=2 because you don't know how the electronic controllers in your calculator work. Look, I can come up with unrelated and unfitting metaphors too. Explainable AI is a field of its own; just look at the example below about CNN feature maps.

We absolutely can understand what each layer detects and how it comes together if we actually start looking. For example, slide 19 here shows such feature maps: https://www.inf.ufpr.br/todt/IAaplicada/CNN_Presentation.pdf
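You don't even need the slides; a rough sketch of inspecting feature maps yourself (Python, assuming torch/torchvision are installed; the layer choice is arbitrary and the first run downloads pretrained weights):

```python
import torch
import torchvision

# Grab the activations of an early conv layer via a forward hook,
# so they can be inspected or plotted as 2D feature maps.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
feature_maps = {}

def hook(module, inputs, output):
    feature_maps["conv1"] = output.detach()

model.conv1.register_forward_hook(hook)

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))  # stand-in for a real image

# One 2D map per filter; early layers tend to respond to edges/colors.
print(feature_maps["conv1"].shape)  # torch.Size([1, 64, 112, 112])
```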

Can you please put any effort into this conversation? Googling definitions is not that hard: "Sentience is the capacity to experience feelings and sensations". Scientists use this to study sentience in animals, for example (not in rocks, because THEY HAVE NONE).

And yes, there's also been studies about animal intelligence, but please stop adding to the cacophony of definitions on what you want to explain an LLM has. I'm talking about sentience and sentience only.

−1

grantcas t1_jano7st wrote

It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman's extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

1

lmericle t1_jannla8 wrote

The trick with genetic algorithms is you have to tune your approach very specifically to the kinds of things you're modelling. Different animals mate and evolve differently, in the analogical view.

It's not enough to just do the textbook "1D chromosome" approach. You have to design your "chromosome", as well as your "crossover" and "mutation" operators, specifically for your problem. In my experience, the crossover implementation is the most important one to focus on; see the sketch below.
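A concrete illustration (Python; the problem and names are my own choice): for a permutation-encoded problem like TSP, naive one-point crossover produces invalid tours with duplicate cities, so the crossover operator has to be designed for the encoding. Order crossover (OX) is one standard fix:

```python
import random

def order_crossover(parent_a, parent_b):
    n = len(parent_a)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = parent_a[i:j]                 # keep a slice from parent A
    remaining = [c for c in parent_b if c not in child]
    for k in range(n):                         # fill the rest in B's order
        if child[k] is None:
            child[k] = remaining.pop(0)
    return child

tour_a = [0, 1, 2, 3, 4, 5]
tour_b = [5, 3, 1, 0, 4, 2]
print(order_crossover(tour_a, tour_b))  # always a valid permutation
```

A generic 1D operator would never respect the "every city exactly once" constraint; that's exactly the kind of problem-specific design work I mean.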

2

currentscurrents t1_janlwsv wrote

Sure it's idiotic. But you can't disprove it. That's the point; everything about internal experience is shrouded in unfalsifiability.

>it's very easy to understand what each neuron does,

That's like saying you understand the brain because you know how atoms work. The world is full of emergent behavior and many things are more than the sum of their parts.

>And then again, we do have a definition for sentience

And it is?

>, and there have been studies that have proven for example in multiple animal species that they are in fact sentient

No, there have been studies to prove that animals are intelligent. Things like the mirror test do not tell you that the animal has an internal experience. A very simple computer program could recognize itself in the mirror.

If you know of any study that directly measures sentience or consciousness, please link it.

3