Recent comments in /f/MachineLearning

farmingvillein t1_jajtmly wrote

> Plus if it's a price war... with Google.. that would be stupid

If it is a price war strategy...my guess is that they're not worried about Google.

Or, put another way, if it comes down to Google versus OpenAI, OpenAI is pretty happy with the resulting duopoly. Crushing everyone else in the womb, though, would be valuable.

11

rpnewc t1_jajt66i wrote

Clearly it's computation of some form that's going on in our brains too. So sentience needs to be better defined in terms of where it falls on the spectrum, with a simple calculator on one end and the human brain on the other. My personal take is that the bar sits much closer to the human brain than to LLMs. Even if we build a perfectly reasoning machine that solves generic problems like humans do, I still wouldn't consider it human-like until it raises purely irrational, emotional questions like, "Why am I not getting any girlfriends? What's wrong with me?" There is no reason for anyone to build that into a machine. Most of the humanness lies in the non-brilliant part of the brain.

2

currentscurrents t1_jajpjj7 wrote

It's not dead, but gradient-based optimization is more popular right now because it works so well for neural networks.

But you can't always use gradient descent. Backprop requires access to the inner workings of the function, and requires that it be smoothly differentiable. Even if you can use it, it may not find a good solution if your loss landscape has a lot of bad local minima.

Evolution is widely used for combinatorial optimization problems, where you're trying to determine the best ordering of a fixed set of elements and there's no gradient to follow.
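For instance, here's a minimal genetic-algorithm sketch on a toy traveling-salesman instance (illustrative code with made-up coordinates, not any particular library's API):

```python
import random

# Toy TSP: find the ordering of cities that minimizes total tour length.
# The search space is permutations, so there's no gradient to descend --
# we evolve candidate orderings instead.
CITIES = [(0, 0), (3, 1), (6, 0), (7, 5), (4, 7), (1, 5)]

def tour_length(order):
    # Length of the closed tour visiting cities in the given order.
    return sum(
        ((CITIES[a][0] - CITIES[b][0]) ** 2 + (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
        for a, b in zip(order, order[1:] + order[:1])
    )

def mutate(order):
    # Swap two random positions: a small, gradient-free perturbation.
    i, j = random.sample(range(len(order)), 2)
    child = order[:]
    child[i], child[j] = child[j], child[i]
    return child

# Evolve: keep the shortest half each generation, refill with mutants.
population = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(50)]
for generation in range(200):
    population.sort(key=tour_length)
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]

best = min(population, key=tour_length)
print(round(tour_length(best), 2), best)
```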

69

pnkdjanh t1_jajp9k8 wrote

I believe genetic algorithms will find their uses in the optimisation of emergent behaviour. In biological terms, if training a NN is akin to evolving a brain, then a GA would be like evolving a colony / society.

−2

sugar_scoot t1_jajjyiq wrote

I'm not an expert, but I believe the use case is environments where you have no gradient to learn from, or even no hope of approximating a gradient to learn from.
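To illustrate the first case - a function you can only evaluate, not differentiate - here's a toy evolution-strategies sketch (objective and hyperparameters are made up for illustration) that builds a gradient estimate purely from black-box scores:

```python
import numpy as np

# Black-box objective: we can evaluate it, but not differentiate it.
def blackbox(x):
    return -np.sum((x - 3.0) ** 2)  # secretly peaks at x = [3, 3, 3, 3, 3]

rng = np.random.default_rng(0)
x = np.zeros(5)
sigma, lr, pop = 0.1, 0.02, 100  # noise scale, step size, population size

for step in range(300):
    # Score a population of randomly perturbed copies of x.
    eps = rng.standard_normal((pop, x.size))
    scores = np.array([blackbox(x + sigma * e) for e in eps])
    # Evolution-strategies trick: the score-weighted average of the
    # perturbations approximates the gradient of the objective.
    grad_est = ((scores - scores.mean())[:, None] * eps).mean(axis=0) / sigma
    x += lr * grad_est  # ascend the estimated gradient

print(np.round(x, 2))  # approaches [3. 3. 3. 3. 3.]
```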

97

RathSauce t1_jajjwtu wrote

I'll say up top: there is no way to answer anything you have put forth regarding consciousness until there is a definition of consciousness. So, apologies if you find these answers wanting or unsatisfying, but until there is a testable and consistent definition of consciousness, there is no way to improve them.

> isn't it possible the AIs we end up creating may have a much different, "unnatural" type of consciousness?

Sure, but we aren't discussing the future or AGI, we are discussing LLMs. My comment has nothing to do with AGI but yes, that is a possibility in the future.

> How do we know there isn't a "burst" of consciousness whenever ChatGPT (or its more advanced future offspring) answers a question?

Because that isn't how feed-forward deep neural networks function, regardless of the base operation (transformer, convolution, recurrent cell, etc.). We are optimizing parameters via statistical methods to produce outputs - outputs designed to closely match the ground truth. ChatGPT is, broadly, trained to align well with a human; the fact that it sounds like a human shouldn't be surprising, nor should it convince anyone of consciousness.
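To make that concrete: a forward pass is just a fixed, stateless function of the input - frozen weights in, activations out, nothing persisting between calls. A toy two-layer sketch of my own (obviously not ChatGPT's actual architecture or weights):

```python
import numpy as np

# Frozen parameters: after training, these never change at inference time.
rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((8, 4)), rng.standard_normal((2, 8))

def forward(x):
    hidden = np.maximum(0, W1 @ x)  # ReLU layer
    return W2 @ hidden              # output layer

# Same input, same output, every time; no internal state carries over
# between calls in which a "burst" of anything could live.
x = rng.standard_normal(4)
print(np.array_equal(forward(x), forward(x)))  # True
```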

Addressing a "burst of consciousness": why has this conversation never extended to other large neural networks in other domains? There are plenty of advanced deep neural networks for many problems - take ViTs for image segmentation. ViT models can be over a billion parameters, and yet not a single person has ever proposed that ViTs are conscious. Why is this? Likely because it is harder to anthropomorphize the end product of a ViT (a segmented image) than the output of a chatbot (a string of characters). If someone is convinced that ChatGPT is conscious, that is their prerogative, but to be self-consistent they should also consider every neural network of a certain capacity conscious.

> Even if we make AIs that closely imitate the human brain in silicon and can imagine, perceive, plan, dream, etc, theoretically we could just pause their state similarly to how ChatGPT pauses when not responding to a query. It's analogous to putting someone under anesthesia.

Even under anesthesia, all animals still produce meaningful neural signals, so pausing ChatGPT between queries is not analogous to putting a human under anesthesia.

2

VertexMachine t1_jajjq8b wrote

Yea, but one thing is not adding up. It's not like I can go to a competitor and get access to an API of similar quality.

Plus, if it's a price war... with Google... that would be stupid. Even with Microsoft's money, Alphabet Inc. is not someone you want to go to war with on undercutting prices.

Also, they updated their policies on using user data, so the data-gathering argument doesn't seem valid either (if you trust them)


Edit: ah, btw, I'm not saying there is no ulterior motive here. I haven't really trusted "Open"AI since the "GPT-2-is-too-dangerous-to-release" BS (and the corporate restructuring). I just don't think it's that simple.

13

JackBlemming t1_jajg4dz wrote

It's not about the price, it's about the strategy. The Google Maps API was dirt cheap so nobody competed, then they cranked prices up 1400% once they had years of advantage and market lock-in. That's not OK.

If OpenAI keeps prices stable, nobody will complain, but this is likely a market-capture play. They even said they were losing money on every request, though maybe that's not true anymore.

18

LetterRip t1_jajezib wrote

June 11, 2020 is the date the GPT-3 API was introduced. There was no int4 support, and the Ampere architecture, with int8 support, had been introduced only weeks prior. So the pricing was set based on float16 inference.

Memory-efficient attention only arrived a few months ago.

ChatGPT was just introduced a few months ago.

The question was how OpenAI could be making a profit: if they were profitable at GPT-3's 2020 pricing, then with these efficiency gains they should still be making a solid profit per token at the new, roughly 90% lower pricing.
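A back-of-the-envelope version of that argument (the specific factors below are illustrative assumptions of mine, not disclosed numbers):

$$
c_{\text{new}} \approx c_{2020} \times \underbrace{\tfrac{1}{4}}_{\text{fp16} \to \text{int4}} \times \underbrace{\tfrac{1}{2.5}}_{\text{mem.-efficient attention}} = 0.1 \, c_{2020}
$$

If the cost per token really has fallen by about 90% since the original pricing was set, then a ~90% price cut leaves roughly the original margin ratio intact.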

52