Recent comments in /f/MachineLearning
Username912773 t1_jaje76x wrote
It has no initiative. Its entire job is to come up with the statistically most probable next word. Sure, it can get good at that, but so would a monkey after reading the entire internet and being trained in more or less the same way for thousands of years.
E_Snap t1_jajdzs3 wrote
Reply to comment by 7366241494 in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
Lol at how /r/technology users contort their brains to find any way they can to feel superior to machines, in the most ludicrous of ways. If they're that insecure about their place in this world, the future is gonna be real fun for them.
Crystal-Ammunition t1_jajdvh5 wrote
Reply to comment by 7366241494 in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
> And how do you know that humans are anything more than that?
I have agency.
badabummbadabing t1_jajdjmr wrote
Reply to comment by jturp-sc in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
Honestly, I have become a lot more optimistic regarding the prospect of monopolies in this space.
When we were still in the phase of 'just add even more parameters', the future seemed to be headed that way. With Chinchilla scaling (and looking at the results of e.g. LLaMA), things look quite a bit more optimistic. Consider that ChatGPT is reportedly much lighter than GPT-3. At some point, the availability of data will be the bottleneck (which is where an early entry into the market can help in gaining an advantage in terms of collecting said data), whereas compute will become cheaper and cheaper.
The training costs lie in the low millions ($10M was the cited number for GPT-3), which is a joke compared to the startup costs of many, many industries. So while this won't be something that anyone can train, I think it's more likely that there will be a few big players (rather than a single one) going forward.
I think one big question is whether OpenAI can leverage user interaction for training purposes -- if that is the case, they can gain an advantage that will be much harder to catch up to.
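A back-of-the-envelope sketch of the Chinchilla scaling point above. The ~20-tokens-per-parameter rule of thumb and the C ≈ 6·N·D training-compute approximation come from the Chinchilla paper; the model size below is just an illustrative example:

```python
# Back-of-the-envelope Chinchilla-style scaling estimates.
# Rules of thumb from Hoffmann et al. (2022): a compute-optimal model
# should see roughly 20 training tokens per parameter, and training
# compute is approximately C = 6 * N * D FLOPs.

def chinchilla_optimal_tokens(n_params: float) -> float:
    """Approximate compute-optimal number of training tokens."""
    return 20.0 * n_params

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6.0 * n_params * n_tokens

# Illustrative example: a 70B-parameter model (Chinchilla's own size).
n = 70e9
d = chinchilla_optimal_tokens(n)   # ~1.4e12 tokens
c = training_flops(n, d)           # ~5.9e23 FLOPs
print(f"tokens: {d:.2e}, compute: {c:.2e} FLOPs")
```

The point being that past a certain compute budget, the binding constraint becomes finding ~20 tokens of data per parameter, not buying GPUs.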
lifesthateasy t1_jajcxh9 wrote
Reply to comment by red75prime in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
Yeah, but even that wouldn't work like our brain; the basic neurons in neural networks don't work like the neurons in our brains, so there's that.
Purplekeyboard t1_jajcnb5 wrote
Reply to comment by JackBlemming in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
> This is not good for the community.
When GPT-3 first came out and prices were posted, everyone complained about how expensive it was, and that it was prohibitively expensive for a lot of uses. Now it's too cheap? What is the acceptable price range?
minimaxir OP t1_jajcf4s wrote
Reply to comment by LetterRip in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
It's safe to assume that some of those techniques were already used in previous iterations of GPT-3/ChatGPT.
red75prime t1_jajblsd wrote
Reply to comment by lifesthateasy in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
Yep, I'm waiting for recurrent models with an internal monologue. It would be harder to say of those that they don't think.
luckyj t1_jajaz53 wrote
Reply to comment by harharveryfunny in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
But that (sending the whole or part of the conversation history) is exactly what we had to do with text-davinci if we wanted to give it some kind of memory. It's the same thing in a different format, at 10% of the price... And having tested it, it behaves more like ChatGPT ("I'm sorry, I'm a language model"-type replies), which I'm not very fond of. But the price... hard to resist. I've just ported my bot to this new model and will play with it for a few days.
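For reference, the port mostly amounts to moving from one stitched-together prompt string to a list of role-tagged messages. A minimal sketch of the payload shape (this follows OpenAI's chat-completions request format; the helper function names are my own, and no actual API call is made here):

```python
# Sketch: keeping conversation "memory" with the chat-style API.
# Instead of concatenating history into one text-davinci prompt string,
# you resend it as a list of {"role", "content"} messages every turn.

def append_turn(history, role, content):
    """Add one turn ('system', 'user', or 'assistant') to the history."""
    history.append({"role": role, "content": content})
    return history

def build_request(history, model="gpt-3.5-turbo"):
    """Assemble the JSON payload that would be POSTed to the API."""
    return {"model": model, "messages": list(history)}

history = []
append_turn(history, "system", "You are a helpful bot.")
append_turn(history, "user", "Remember my name is Ada.")
append_turn(history, "assistant", "Got it, Ada!")
append_turn(history, "user", "What's my name?")

payload = build_request(history)
# The whole history rides along with every request -- that IS the memory,
# same as with text-davinci, just structured instead of one big string.
```

Note the history still counts against the context window and gets billed as prompt tokens each turn, so long-running bots still need to truncate or summarize old turns.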
ninjasaid13 t1_jajamez wrote
Reply to comment by currentscurrents in [D] What are the most known architectures of Text To Image models ? by AImSamy
But in industry, don't we want things to be cheap? Cost might be a bigger factor than performance.
WarProfessional3278 t1_jaj9nnt wrote
Reply to comment by harharveryfunny in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
Rough estimate: with one 400W GPU and $0.14/kWh electricity, you are looking at ~$0.000016/sec. That's the price for running the GPU alone, not accounting for server costs etc.
I'm not sure if there are any reliable estimates of FLOPs per token for inference, though I will be happy to be proven wrong :)
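One way to redo that arithmetic, assuming a typical $0.14/kWh electricity rate and a 400 W card running flat out (both figures are illustrative assumptions, not measurements):

```python
# Electricity cost of running one 400 W GPU continuously.
gpu_watts = 400.0
price_per_kwh = 0.14   # USD per kilowatt-hour (assumed utility rate)

cost_per_hour = (gpu_watts / 1000.0) * price_per_kwh   # kW * $/kWh
cost_per_second = cost_per_hour / 3600.0

print(f"${cost_per_hour:.4f}/hr, ${cost_per_second:.2e}/sec")
# Roughly $0.056/hr, i.e. about $1.6e-5 per second of GPU time --
# electricity only, before server, cooling, and amortized hardware costs.
```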
RathSauce t1_jaj9ml5 wrote
Reply to comment by 7366241494 in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
Because we can put a human in an environment with zero external visual and auditory stimuli and still collect an EEG or fMRI signal that is dynamic in time and shows some level of natural evolution. That signal might be descriptive of an incredibly frightened person, but all animals are capable of computation when deprived of visual, auditory, olfactory, etc. input.
No LLM is capable of producing a signal without a very specific input; this fact differentiates all animals from all LLMs. It is insanity to sit around and pretend we are nothing more than chatbots because there exists a statistical method that can imitate how humans type.
NoLifeGamer2 t1_jaj9i1b wrote
Reply to comment by visarga in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
Gotta love getting those "Model currently busy" errors for only a single request
Stencolino OP t1_jaj8ue6 wrote
Reply to comment by kduyehj in Is there any model that classify singing and speaking? [R] by Stencolino
Thanks, I'll think about it
CMUOresama t1_jaj8n4d wrote
Here's a paper that comes up with a differentiable relaxation of beam search and optimizes it directly to MT metrics as you suggest: https://arxiv.org/abs/1708.00111
currentscurrents t1_jaj8jze wrote
Reply to comment by ninjasaid13 in [D] What are the most known architectures of Text To Image models ? by AImSamy
Yup. But in neural networks, bigger is better!
Stencolino OP t1_jaj8jqw wrote
Reply to comment by keph_chacha in Is there any model that classify singing and speaking? [R] by Stencolino
I thought about it; it should be simple enough, but I was just wondering if there was one already made
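If nothing off-the-shelf turns up, a crude baseline is to threshold simple spectral features: singing tends to hold sustained pitched notes, while speech has more noise-like, transient segments. A toy sketch with NumPy only (the feature choice and the synthetic test signals are illustrative assumptions, not a validated classifier):

```python
import numpy as np

def spectral_flatness(signal: np.ndarray) -> float:
    """Geometric mean / arithmetic mean of the power spectrum.
    Close to 0 for tonal (pitched, sung) sounds, close to 1 for noise."""
    power = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12  # floor avoids log(0)
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

sr = 16000
t = np.arange(sr) / sr

# Stand-ins for real audio frames: a sustained 440 Hz note vs. white noise
# (a rough proxy for an unvoiced speech segment).
sustained_note = np.sin(2 * np.pi * 440.0 * t)
noise_like = np.random.default_rng(0).standard_normal(sr)

print(spectral_flatness(sustained_note))  # very low: energy in one bin
print(spectral_flatness(noise_like))      # much higher: energy spread out
```

In practice you'd compute this (plus e.g. pitch stability) per short frame over real audio and feed the features to a small classifier, rather than thresholding whole clips.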
harharveryfunny t1_jaj8bk2 wrote
Reply to comment by Educational-Net303 in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
Could you put any numbers to that?
What are the FLOPs per token of inference for a given prompt length (for a given model)?
What do those FLOPs translate to in terms of run time on Azure's GPUs (V100s?)?
What are the GPU power consumption and data center electricity costs?
Even with these numbers, can we really relate this to their $/token pricing scheme? The pricing page mentions this 90% cost reduction being for the "gpt-3.5-turbo" model vs the earlier text-davinci-003 (?) one - do we even know the architectural details needed to get the FLOPs?
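A rough stab at the first two questions, using the common ~2·N FLOPs-per-parameter-per-token rule of thumb for a dense transformer forward pass and the V100's ~112 TFLOPS FP16 tensor-core peak. The model size and utilization figure are pure assumptions, since gpt-3.5-turbo's architecture is not public:

```python
# Rough per-token inference cost for a dense decoder-only transformer.
# Rule of thumb: a forward pass costs about 2 FLOPs per parameter per token.

def flops_per_token(n_params: float) -> float:
    return 2.0 * n_params

def seconds_per_token(n_params: float, peak_flops: float,
                      utilization: float = 1.0) -> float:
    """Compute-bound time per generated token at a given utilization."""
    return flops_per_token(n_params) / (peak_flops * utilization)

V100_PEAK_FP16 = 112e12   # FLOPS, tensor-core peak per NVIDIA's datasheet

# Assumption: a 175B-parameter, GPT-3-class model.
n = 175e9
fpt = flops_per_token(n)                              # ~3.5e11 FLOPs/token
spt = seconds_per_token(n, V100_PEAK_FP16, 0.3)       # at an assumed 30% util
print(f"{fpt:.2e} FLOPs/token, {spt * 1e3:.1f} ms/token on one V100")
```

Big caveat: autoregressive decoding at small batch sizes is usually memory-bandwidth-bound, not compute-bound, so real latency per token can be far worse than this FLOPs-only estimate suggests.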
lifesthateasy t1_jaj7uo8 wrote
Reply to comment by 7366241494 in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
There's a plethora of differences; one of them is that we can think even without someone prompting us.
7366241494 t1_jaj7cmd wrote
Reply to comment by lifesthateasy in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
And how do you know that humans are anything more than that?
IMO we’re all just chatbots.
lifesthateasy t1_jaj6vlq wrote
Ugh, ffs. It's a statistical model that is trained on human interactions, so of course it's gonna sound like a human and answer as if it had the same fears as a human.
It doesn't think; all it ever does is give you the statistically most probable response to your prompt, if and only if it gets a prompt.
ninjasaid13 t1_jaj678q wrote
Reply to comment by currentscurrents in [D] What are the most known architectures of Text To Image models ? by AImSamy
>T5
but isn't it much heavier?
visarga t1_jaj4lxx wrote
Reply to comment by Timdegreat in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
Not this time. Still text-embedding-ada-002
bushrod t1_jajecpg wrote
Reply to comment by RathSauce in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
I agree with your point, but playing devil's advocate, isn't it possible the AIs we end up creating may have a much different, "unnatural" type of consciousness? How do we know there isn't a "burst" of consciousness whenever ChatGPT (or its more advanced future offspring) answers a question? Even if we make AIs that closely imitate the human brain in silicon and can imagine, perceive, plan, dream, etc, theoretically we could just pause their state similarly to how ChatGPT pauses when not responding to a query. It's analogous to putting someone under anaesthesia.