Recent comments in /f/MachineLearning
bjergerk1ng t1_jakszgr wrote
Reply to comment by LetterRip in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
Is it possible that they also switched from the non-Chinchilla-optimal davinci to a Chinchilla-optimal model for ChatGPT? That would be at least 4x smaller.
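Back-of-envelope, assuming the ~20 tokens/param Chinchilla rule, C ≈ 6ND for training FLOPs, and the commonly cited (unconfirmed) davinci numbers:

```python
# Back-of-envelope: Chinchilla-optimal model size at davinci's training compute.
# Assumptions (not confirmed by OpenAI): davinci has ~175B params trained on
# ~300B tokens, training FLOPs C ~= 6 * N * D, and Chinchilla-optimal D ~= 20 * N.
N_davinci = 175e9   # parameters
D_davinci = 300e9   # training tokens
C = 6 * N_davinci * D_davinci  # total training FLOPs

# Solve C = 6 * N * (20 * N) for N:
N_chinchilla = (C / (6 * 20)) ** 0.5
print(f"Chinchilla-optimal size at same compute: {N_chinchilla / 1e9:.0f}B params")
print(f"Shrink factor vs davinci: {N_davinci / N_chinchilla:.1f}x")
# -> roughly 51B params, i.e. ~3-4x smaller at equal training compute.
```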
xGovernor t1_jaksopw wrote
Reply to comment by harharveryfunny in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
Oh boy, what I got away with. I've been using hundreds of thousands of tokens, tweaking parameters, and have only ever spent 20 bucks. I feel pretty lucky.
xGovernor t1_jaksctz wrote
Reply to [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
I've been tinkering with DaVinci, but even with turbo/premium, using the gpt-3.5-turbo API requires a credit card on the account. Excited to fool with it; I typically use 2048-4000 tokens per request on DaVinci 3.
kduyehj t1_jakrp6e wrote
Reply to comment by 7366241494 in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
Speak for yourself.
[deleted] t1_jakrjqe wrote
[removed]
Red-Portal t1_jakr3yf wrote
Reply to [D] Are Genetic Algorithms Dead? by TobusFire
The fundamental problem with evolutionary strategies is that they are a freakin' nightmare to evaluate. It's basically impossible to reason about their mathematical properties, experiments are noisy as hell, and how representative are the benchmark objective functions anyway? It's just really hard to do good science with them, which means it's hard to make concrete improvements. Sure, once upon a time they were the only choice for noisy, gradient-free global optimization problems. But now we have Bayesian optimization.
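For anyone curious, a minimal BO loop looks something like this (a toy sketch using scikit-learn's GP regressor and expected improvement on a made-up 1-D objective; illustrative, not a benchmark):

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def objective(x):
    # Toy noisy, gradient-free objective to minimize.
    return np.sin(3 * x) + 0.1 * x**2 + 0.05 * rng.standard_normal()

# Start with a few random evaluations.
X = rng.uniform(-3, 3, size=(4, 1))
y = np.array([objective(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(20):
    gp.fit(X, y)
    cand = np.linspace(-3, 3, 500).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    # Expected improvement over the best observed value (minimization).
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

print(f"best x = {X[np.argmin(y)][0]:.3f}, best f = {y.min():.3f}")
```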
GrumpyMcGillicuddy t1_jakqy81 wrote
Reply to comment by caedin8 in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
Uhhhh
MonstarGaming t1_jakqs01 wrote
Reply to [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
>I have no idea how OpenAI can make money on this.
Personally, I don't think they can. What is the main use case for chat bots? How many people are going to pay $20/month to talk to a chatbot? I mean, chatbots aren't exactly new... anybody who wanted to chat with one before ChatGPT could have, and yet there wasn't an industry for it. Couple that with there being no way to know whether its answers are fact or fiction, and I just don't see the major value proposition.
I'm not overly concerned one way or another, I just don't think the business case is very strong.
TrueBirch t1_jakosce wrote
Reply to comment by astrange in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
I worked in adtech. It's often true.
[deleted] t1_jako73i wrote
Reply to comment by badabummbadabing in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
[removed]
Vogelfrei88 t1_jaknwad wrote
Reply to [D] Podcasts about ML research? by Tight-Vacation-9410
Try the SuperDataScience podcast with Jon Krohn. It's 90% ML. The host is quite a good explainer and he attracts fine guests (especially from industry).
Deep_Sync t1_jakn00h wrote
Reply to [D] Are Genetic Algorithms Dead? by TobusFire
One of my friends uses genetic algorithms for acoustic-materials research.
big_ol_tender t1_jakmlmc wrote
Reply to comment by caedin8 in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
-totally not chatgpt
csinva t1_jakmagd wrote
Reply to [D] Are Genetic Algorithms Dead? by TobusFire
I think genetic algorithms may have a new role to play in problems involving inference / text generation / prompting with language models, even if they aren't used to train the models themselves.
For example, in our recent work on natural-language prompting, we use a genetic algorithm to generate prompts that are semantically coherent -- the genetic algorithm lets us make use of suggestions from a language model, for which gradients would be hard to obtain.
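For a rough idea of the shape of such a loop (a toy sketch, not our actual method; the fitness and mutation functions are stand-ins for LM-based scoring and suggestions):

```python
import random

random.seed(0)

# Toy vocabulary; in the real setting candidates come from LM suggestions.
VOCAB = ["the", "movie", "was", "great", "terrible", "plot",
         "acting", "overall", "review", "summarize"]

def fitness(prompt):
    # Stand-in for an LM-based score (e.g., downstream task accuracy or
    # log-likelihood of desired outputs given this prompt).
    return sum(w in ("summarize", "review", "overall") for w in prompt)

def mutate(prompt):
    # Stand-in for an LM-suggested edit: in the real setting you'd ask the
    # LM for a fluent substitution so the prompt stays semantically coherent.
    p = list(prompt)
    p[random.randrange(len(p))] = random.choice(VOCAB)
    return p

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.choice(VOCAB) for _ in range(5)] for _ in range(20)]
for _ in range(30):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                                   # selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]                      # crossover + mutation
    pop = parents + children

print("best prompt:", " ".join(max(pop, key=fitness)))
```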
thevillagersid t1_jaklwxa wrote
But does Bing believe in the sentience of Blake Lemoine...?
_rjx t1_jaklqay wrote
Reply to [D] Podcasts about ML research? by Tight-Vacation-9410
Machine learning street talk is great.
bbateman2011 t1_jakl8m8 wrote
Reply to [D] Are Genetic Algorithms Dead? by TobusFire
I use GA optimization for non-convex problems, mainly hyperparameter optimization. Sometimes it's very effective, but I've not found a way to know ahead of time whether it will outperform other algorithms.
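A bare-bones version of what that can look like (a sketch; the synthetic non-convex objective stands in for a cross-validated model score):

```python
import math
import random

random.seed(1)

def cv_score(params):
    # Stand-in for a cross-validated model score; deliberately non-convex.
    lr, reg = params
    return -(math.sin(5 * lr) ** 2 + (reg - 0.3) ** 2)

def random_params():
    return (random.uniform(0.01, 1.0), random.uniform(0.0, 1.0))

def mutate(params, scale=0.05):
    # Gaussian perturbation, clipped back into the search box.
    lr, reg = params
    return (min(max(lr + random.gauss(0, scale), 0.01), 1.0),
            min(max(reg + random.gauss(0, scale), 0.0), 1.0))

pop = [random_params() for _ in range(30)]
for _ in range(40):
    pop.sort(key=cv_score, reverse=True)
    elite = pop[:10]                                     # keep the fittest
    pop = elite + [mutate(random.choice(elite)) for _ in range(20)]

best = max(pop, key=cv_score)
print(f"best lr={best[0]:.3f}, reg={best[1]:.3f}, score={cv_score(best):.4f}")
```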
Dangerous_Jelly8039 t1_jakkd7q wrote
Reply to comment by Username912773 in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
It mimics the function of part of our brain. It works like the language part of a dead brain: no consciousness.
"Coming up with the statistically most probable next word" is an oversimplification. That is the training objective; the actual process going on inside still needs to be investigated. The evolution of humans can also be viewed as maximizing our offspring. That does not mean humans are simple self-replicating meatballs.
Kitchen_Tower2800 t1_jakjrxr wrote
Reply to [D] Are Genetic Algorithms Dead? by TobusFire
I've never directly worked with either, but aren't RL agent-competition approaches (i.e., simulating games between agents with different parameter values and iterating on those agents) a form of genetic algorithm?
It's also worth noting that this is exactly the type of problem that genetic algorithms were made for: no gradients, highly multimodal.
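For what it's worth, the overlap is easiest to see in a population-based sketch like this (illustrative only; `play` and the agent encoding are made up, and a real setup would roll out an actual environment):

```python
import random

random.seed(2)

def play(a, b):
    # Stand-in for simulating a game: the agent with the larger (noisy)
    # parameter sum wins.
    return sum(a) + random.gauss(0, 0.5) > sum(b) + random.gauss(0, 0.5)

def mutate(agent):
    return [w + random.gauss(0, 0.1) for w in agent]

# Population of agents, each a flat parameter vector.
pop = [[random.gauss(0, 1) for _ in range(4)] for _ in range(16)]
for _ in range(50):
    random.shuffle(pop)
    # Pairwise matches; winners survive and spawn mutated offspring.
    # Structurally this is GA selection + mutation, just with fitness
    # defined relatively (by competition) instead of absolutely.
    winners = [a if play(a, b) else b for a, b in zip(pop[::2], pop[1::2])]
    pop = winners + [mutate(w) for w in winners]

print("mean parameter sum:", sum(sum(a) for a in pop) / len(pop))
```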
_simple_machine_ t1_jakjnnf wrote
Reply to comment by themrzmaster in [D] Are Genetic Algorithms Dead? by TobusFire
This is really interesting. Do you know of any similar approaches for learning hyperparameters or layer structure, or for handling complications such as Inception networks or ResNets?
lucidraisin t1_jakdtf7 wrote
Reply to comment by Thunderbird120 in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
yes
edit: it was also used to train LLaMA. there is no reason not to use it at this point, for both training and fine-tuning / inference
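e.g. in PyTorch 2.x it's exposed through the fused SDPA path, so you don't need a custom kernel (a minimal sketch; assumes a CUDA GPU with fp16 inputs, which the flash kernel requires):

```python
import torch
import torch.nn.functional as F

# (batch, heads, seq_len, head_dim): the layout scaled_dot_product_attention expects.
q = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)

# Restrict dispatch to the FlashAttention kernel to confirm it's being used;
# normally you'd leave all backends enabled and let PyTorch pick.
with torch.backends.cuda.sdp_kernel(enable_flash=True,
                                    enable_math=False,
                                    enable_mem_efficient=False):
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)

print(out.shape)  # torch.Size([2, 8, 1024, 64])
```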
protonpusher t1_jakcdc6 wrote
Reply to [D] Are Genetic Algorithms Dead? by TobusFire
No. See this recent pub in Nature.
caedin8 t1_jakcasg wrote
Reply to [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
It's exciting to see that ChatGPT's cost is 1/10th that of GPT-3 API, which is a huge advantage for developers who are looking for high-quality language models at an affordable price. OpenAI's commitment to providing top-notch AI tools while keeping costs low is commendable and will undoubtedly attract more developers to the platform. It's clear that ChatGPT is a superior option for developers, and OpenAI's dedication to innovation and affordability is sure to make it a top choice for many in the AI community.
Thunderbird120 t1_jakbyew wrote
Reply to comment by lucidraisin in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
You're better qualified to know than nearly anyone who posts here, but is flash attention really all that's necessary to make that feasible?
discord-ian t1_jakt2gl wrote
Reply to [D] Are Genetic Algorithms Dead? by TobusFire
I still see papers written on them occasionally. I have always wanted to implement one, but I've never had a use case. I think there are certain categories of problems where they excel, but in the real world, most of the time, there seems to be a better approach.
One real-world use case I saw was using genetic algorithms to design an automobile brake rotor to reduce heat (or increase heat dissipation). From what I remember of the presentation... Basically, they had a very large number of mathematically definable designs with many input variables. The interactions between these variables were not necessarily clear. Elements of one design might combine well with elements from a totally separate design. And the model to test them was computationally expensive.
They were able to use a genetic algorithm to design a rotor that, at least on the computer, was meaningfully better than their company's (and likely the industry's) state of the art.