Recent comments in /f/MachineLearning

M_Alani t1_jakapj2 wrote

Oh, this brings back a lot of memories. I remember using it in the early 2000s to optimize neural networks, back when Matlab was the only option and we couldn't afford it, so we had to build NNs from scratch... using Visual Basic 😢

Back to your question: I don't think they're dead. Their use in NNs probably is, though. Edit: spelling

37

topcodemangler t1_jakalh1 wrote

For me, an always interesting and alluring idea has been to use a GA to search over combinations of elementary information-processing primitives (probably Boolean gates) and memory, in the hope of finding some novel ML architecture, possibly much more effective than NNs since it could be implemented directly in electronics without the overhead.
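
For what it's worth, here's a minimal toy sketch of that kind of search (my own illustration, not from any paper): a GA evolving a small feed-forward circuit of Boolean gates toward a target truth table. The task (XOR), gate set, genome encoding, and GA hyperparameters are all arbitrary choices, and it leaves out the memory/state element mentioned above.

```python
# Toy GA over Boolean-gate circuits. Genome = list of (op, in1, in2) gates,
# each wired to the primary inputs or an earlier gate's output.
# Fitness = agreement with a target truth table (here: 2-input XOR).
import random

OPS = {"AND": lambda a, b: a & b,
       "OR":  lambda a, b: a | b,
       "NAND": lambda a, b: 1 - (a & b)}

N_INPUTS, N_GATES = 2, 6
TARGET = {(a, b): a ^ b for a in (0, 1) for b in (0, 1)}  # XOR truth table

def random_gate(position):
    # A gate may read from the primary inputs or any earlier gate's output.
    n_sources = N_INPUTS + position
    return (random.choice(list(OPS)), random.randrange(n_sources), random.randrange(n_sources))

def random_circuit():
    return [random_gate(i) for i in range(N_GATES)]

def evaluate(circuit, inputs):
    signals = list(inputs)
    for op, i, j in circuit:
        signals.append(OPS[op](signals[i], signals[j]))
    return signals[-1]  # output of the last gate

def fitness(circuit):
    return sum(evaluate(circuit, inp) == out for inp, out in TARGET.items())

def mutate(circuit, rate=0.2):
    return [random_gate(i) if random.random() < rate else g for i, g in enumerate(circuit)]

def crossover(a, b):
    cut = random.randrange(1, N_GATES)
    return a[:cut] + b[cut:]

pop = [random_circuit() for _ in range(100)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break
    parents = pop[:20]
    pop = parents + [mutate(crossover(*random.sample(parents, 2))) for _ in range(80)]

best = max(pop, key=fitness)
print("best fitness:", fitness(best), "circuit:", best)
```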

5

What-Fries-Beneath t1_jak8reh wrote

If you leave philosophy and spirituality out of it, there is no debate about the definition of consciousness. It isn't that complicated.

>Consciousness is an internal representation of the world which incorporates an awareness of self. It's a dynamic computation of self in the world.

https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/homing-in-on-consciousness-in-the-nervous-system-an-actionbased-synthesis/2483CA8F40A087A0A7AAABD40E0D89B2

Plenty of citations in that paper for you to explore the idea from a scientific perspective. Edit: also plenty of experiments.

0

RathSauce t1_jak781t wrote

>So, apologies if you find these answers wanting or unsatisfying, but until there is a testable and consistent definition of consciousness, there is no way to improve them.

That is the full quote. What experiment do you propose to prove that the statement you provided is the correct, and only, definition of consciousness? If it cannot be proven experimentally, it is not a definition; it is just your belief.

If the statement cannot be proven, then people need to stop stating that consciousness has arisen in a computer program. If there is no method to prove/disprove your statement in an external system, it cannot be a definition, a fact, or even a hypothesis.

2

Hostilis_ t1_jak681p wrote

>But you can't always use gradient descent. Backprop requires access to the inner workings of the function

Backprop and gradient descent are not the same thing. When you don't have access to the inner workings of the function, you can still use stochastic approximation methods for getting gradient estimates, e.g. SPSA. In fact, there are close ties between genetic algorithms and stochastic gradient estimation.
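
For anyone curious, here's a minimal sketch of the SPSA idea (my own illustration, with an arbitrary toy objective, step sizes, and iteration count): two evaluations of a black-box function give a gradient estimate for every parameter at once, which you can plug into an ordinary gradient-descent loop.

```python
# SPSA: estimate a full gradient from just two black-box function evaluations
# per step, so you can run "gradient descent" without access to the function's
# inner workings.
import numpy as np

def f(x):
    # Stand-in black-box objective; imagine we can only query it, not differentiate it.
    return np.sum((x - 3.0) ** 2)

def spsa_gradient(f, x, c=1e-2):
    delta = np.random.choice([-1.0, 1.0], size=x.shape)  # Rademacher perturbation
    return (f(x + c * delta) - f(x - c * delta)) / (2 * c * delta)

x = np.zeros(5)
for step in range(2000):
    x -= 0.01 * spsa_gradient(f, x)

print(x)  # should approach [3, 3, 3, 3, 3]
```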

33

What-Fries-Beneath t1_jak53gl wrote

I'm not a researcher in the space, just a big fan. That there are levels of consciousness is very well evidenced. Essentially each level is a layer of dynamic awareness, and one of those layers is an awareness of self, and of self in the world. It's the HOW that's under investigation, not so much the "what". https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/homing-in-on-consciousness-in-the-nervous-system-an-actionbased-synthesis/2483CA8F40A087A0A7AAABD40E0D89B2

People like to muddy the question with philosophy and spirituality.

0

What-Fries-Beneath t1_jak4iwk wrote

>I'll say up top, there is no manner to answer anything you have put forth in regards to consciousness until there is a definition for consciousness.

Please stop saying this. Consciousness is an internal representation of the world which incorporates an awareness of self. It's a dynamic computation of self in the world. I wish people would stop saying "we don't have a definition of consciousness". There are questions around exactly how it arises, but there are some extremely well-evidenced theories. My personal favorite is Action Based Consciousness.

−1

What-Fries-Beneath t1_jak44fb wrote

>Because we can put a human in an environment with zero external visual and auditory stimuli

Do that for a few days and that human will never recover full cognitive function. https://www.google.com/books/edition/Sensory_Deprivation/1tBZauKc4GUC

Anyways, completely aside from the particulars of this discussion: "Identical to humans" isn't the bar.

>No LLM is capable of producing a signal lacking a very specific input ; this fact does differentiate all animals from all LLM's.

Because we're meat-based. Our neurons kill themselves without input. They stimulate each other nearly constantly to maintain connections. Some regions generate waves of activity to maintain/strengthen/prune connections, etc. Saying that electronic systems need to evidence the same activity is like saying "Birds are alive. Bears can't fly, therefore they are dead."

Consciousness is an internal representation of the world which incorporates an awareness of self. It's a dynamic computation of self in the world. I wish people would stop saying "we don't have a definition of consciousness". There are questions around exactly how it arises, but there are some extremely well-evidenced theories. My personal favorite is Action Based Consciousness.

−2

farmingvillein t1_jajw0yj wrote

> The training costs lie in the low millions (10M was the cited number for GPT3), which is a joke compared to the startup costs of many, many industries. So while this won't be something that anyone can train, I think it's more likely that there will be a few big players (rather than a single one) going forward.

Yeah, I think there are two big additional unknowns here:

  1. How hard is it to optimize inference costs? If--for the sake of argument--$100M of engineering lets you drop your inference unit costs by 10x, that could end up being a very large and very hidden barrier to entry (rough back-of-the-envelope sketch at the end of this comment).

  2. How much will SOTA LLMs really cost to train in, say, 1-2-3 years? And how much will SOTA matter?

The current generation will, presumably, get cheaper and easier to train.

But if it turns out that, say, multimodal training at scale is critical to leveling up performance across all modes, that could jack up training costs really, really quickly--e.g., think the costs to suck down and train against a large subset of public video. Potentially layer in synthetic data from agents exploring worlds (basically, videogames...), as well.

Now, it could be that the incremental gains to, say, language are not that high--in which case the LLM (at least as these models exist right now) business probably heavily commoditizes over the next few years.
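
On point 1, here's the rough back-of-the-envelope sketch mentioned above. Every number is made up purely for illustration; the point is just that a large one-off inference-optimization spend only pays for itself at very large serving volume, which is exactly what makes it a barrier to entry.

```python
# Purely hypothetical numbers: how long a one-off inference-optimization
# investment takes to pay for itself at a given serving volume.
investment = 100e6             # one-off optimization spend ($), hypothetical
tokens_per_month = 1e12        # serving volume (tokens/month), hypothetical
cost_per_million_tokens = 2.0  # unit cost before optimization ($), hypothetical
speedup = 10                   # assumed 10x reduction in unit cost

monthly_cost_before = tokens_per_month / 1e6 * cost_per_million_tokens
monthly_savings = monthly_cost_before * (1 - 1 / speedup)
print(f"monthly savings: ${monthly_savings / 1e6:.1f}M")
print(f"break-even after {investment / monthly_savings:.1f} months")
```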

9

bigfish_in_smallpond t1_jajuuhl wrote

I think we will eventually discover that consciousness is closely tied to the brain's ability to interact on a quantum level with the real world, and that maintaining that superposition of quantum states is what makes it unique. Any discrete silicon-based computer will only be an approximation of that at best.

−2