Recent comments in /f/MachineLearning
Hostilis_ t1_jap97r5 wrote
Reply to comment by SpookyTardigrade in [D] Are Genetic Algorithms Dead? by TobusFire
https://www.nature.com/articles/s41467-021-26568-2
Try this article
MonstarGaming t1_jap8605 wrote
Reply to comment by fasttosmile in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
That seems to be the gist of this entire thread. This is the first API most of /r/machinelearning has heard of, so it must be the best on the market. /s
To your point, there are companies who have been developing speech-to-text for decades. The capability is so unremarkable that most (all?) cloud providers have a speech-to-text offering already and it easily integrates with their other services.
I know this is a hot take, but I don't think OpenAI has a business strategy. They're deploying expensive models that compete directly with entrenched big-tech companies. They can't be thinking they're going to take market share away from GCP, AWS, and Azure with technologies all three already offer, right? Right???
MonstarGaming t1_jap3jzc wrote
Reply to comment by Smallpaul in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
>I guess you haven’t visited any B2C websites in the last 5 years.
I have, and that is exactly my point. The main use case is B2C websites, NOT individuals, and there are already very mature products in that space. OpenAI needs to develop a lot of bells, whistles, and integration points with existing technologies (Salesforce, ServiceNow, etc.) before it can be competitive in that market.
>can translate between human languages
Very valuable, but Google and Microsoft both offer this for free.
>between computer languages
This is niche, but it does seem like an untapped, albeit small, market.
>can compose marketing
Also niche. That said, would it actually save time? Marketing materials are highly curated.
>summarise text...
Is this a problem a regular person would pay to have fixed? The maximum input size is 2048 tokens / ~1,500 words / three pages. Assuming an average person pastes in the maximum input, they're summarizing material that would take them about 6 minutes to read (Google says the average person reads 250 words per minute). Mind you, it isn't saving 6 minutes; they still need to read all of the content ChatGPT produces. Wouldn't the average person just skim the document if they wanted to save time?
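The back-of-envelope math above can be checked in a few lines. A sketch, assuming the figures from the comment (2048-token max input, roughly 0.75 words per token, 250 words per minute reading speed):

```python
# Reading-time estimate for a maximum-length summarization input.
# Assumed figures: 2048-token limit, ~0.75 words/token, 250 wpm.
MAX_TOKENS = 2048
WORDS_PER_TOKEN = 0.75
WORDS_PER_MINUTE = 250

words = MAX_TOKENS * WORDS_PER_TOKEN       # ~1536 words, roughly three pages
minutes = words / WORDS_PER_MINUTE         # ~6 minutes of reading

print(f"{words:.0f} words, about {minutes:.1f} minutes to read")
```

So the most a full-length paste can save is on the order of six minutes of reading, before subtracting the time spent reading the summary itself.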
To your point, it is clearly a capable technology, but that wasn't my argument. There have been troves of capable technologies that were ultimately unprofitable. While I believe it can be successful in the B2C market, I don't think the value proposition is nearly as strong for individuals.
Anyhow, only time will tell.
EricHallahan t1_jap0tic wrote
Reply to comment by keepthepace in [N] EleutherAI has formed a non-profit by StellaAthena
To clarify: EleutherAI will continue to work with large language models and train its own when there is a clear research case as it always has—there just happens to be a much larger saturation of suitable models today for the research we would like to conduct than what existed even twelve months ago, and there is no reason to reinvent something when something suitable already exists. Expect new models to be designed and trained to specifically meet certain research requirements, rather than more versatile usage.
stokesmrq t1_jap05wt wrote
Why don't I just prompt ChatGPT directly?
avialex t1_jap04wq wrote
Reply to comment by filipposML in [D] Are Genetic Algorithms Dead? by TobusFire
I was kinda excited; I'd hoped to find an evolutionary algorithm for searching a latent space. I've been having a hell of a time trying to optimize text encodings for diffusion models.
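For what it's worth, a minimal version of that idea is a (1+λ) evolution strategy over the latent vector. This is a sketch with a stand-in objective; in the real use case, `fitness` would decode the candidate text encoding through the diffusion model and score the result (e.g. with CLIP similarity), which is an assumption on my part, not something from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(z):
    # Stand-in objective: distance to an arbitrary target vector.
    # In practice, this would score the diffusion model's output for z.
    target = np.linspace(-1.0, 1.0, z.size)
    return -np.sum((z - target) ** 2)

def evolve(dim=16, pop=32, sigma=0.1, steps=200):
    # (1 + lambda) evolution strategy: each generation, keep the best
    # of the current parent and pop Gaussian-perturbed offspring.
    parent = rng.standard_normal(dim)
    best = fitness(parent)
    for _ in range(steps):
        offspring = parent + sigma * rng.standard_normal((pop, dim))
        scores = np.array([fitness(o) for o in offspring])
        i = scores.argmax()
        if scores[i] > best:
            parent, best = offspring[i], scores[i]
    return parent, best

z, score = evolve()
print(f"best fitness: {score:.4f}")
```

The appeal for latent-space search is that it needs no gradients through the decoder, only the ability to score samples, at the cost of many more evaluations.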
diamond__hands t1_jap03ok wrote
Reply to comment by zazzersmel in [D] Is there an ML project out there that recommends movies based on more than the usual features? by of_a_varsity_athlete
imdb pro almost certainly has that stuff. behind lock and key...
SnooMarzipans1345 t1_jaoz6ta wrote
Reply to comment by mikonvergence in [P] A minimal framework for image diffusion (including high-resolution) by mikonvergence
Thank YOU SO FAR!! :D `*smile*`
SnooMarzipans1345 t1_jaoz34f wrote
Reply to comment by mikonvergence in [P] A minimal framework for image diffusion (including high-resolution) by mikonvergence
> However, the video course material is quite short (about 1,5 hrs) so you can just play it and see if it works for you or not!
What? Did I miss a sign or something? Please help.
SnooMarzipans1345 t1_jaoz02v wrote
Reply to comment by mikonvergence in [P] A minimal framework for image diffusion (including high-resolution) by mikonvergence
>so I would advise catching up on topics like VAEs or GANs.
What??? dig** dig** dig*** clunk** What is this? It's in a foreign language to me.
currentscurrents t1_jaoy1e9 wrote
Reply to comment by [deleted] in [P] InventBot - Invent Original Ideas with Keywords by [deleted]
No thanks.
currentscurrents t1_jaoxn3e wrote
Lol, people are trying to sell ChatGPT prompts?
crappleIcrap t1_jaou73g wrote
Reply to comment by lifesthateasy in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
>I like how you criticize me for not providing scientific evidence for my reasoning,
I criticized you for quite the opposite reason: for claiming sentience to be something settled by science or mathematics when it is still firmly in the realm of philosophy.
>they argue it emerges from the specific properties of our neural architecture, which is vastly different than that of neural networks'
They never argue that it ONLY emerges from the specific properties of our neural architecture, or at least, I have never seen a good paper claiming that.
>Once it's trained, it stays the same. The only things that temporarily change are in the memory module of the feedback systems, and that only serves the purpose of being able to hold conversation.
GPT-3 is the third round of training, and OpenAI will, no doubt, use our data to train a fourth. But even barring that, it is a bit like saying "but humans aren't even immortal, they die and just have kids that have to learn everything over again". Also, after 25 your brain largely stops changing and is fairly "set" other than new memories forming, so I fail to see how one thread is much different from one human. But this is a stupid argument, because if I made the change to allow training on every input, the model wouldn't be any better, and it would actually be an easy (if less efficient) change to make. So if that were the only problem, I would immediately download GPT-Neo, make the change, and collect my millions.
Like I said, current implementations are not, in my opinion, likely to be sentient, and this is a major reason: most threads do not last very long. But there is no reason a single thread, if let continue indefinitely, could not be sentient, since it has a memory that is not functionally very different from human memory beyond being farther away physically; or even that a short-lived thread does not have a simple, short-lived sentience.
As far as determinism goes, the only way within the currently known laws of physics for the human brain to be non-deterministic is for it to use some quantum effect, and the only thing that adds is randomness. So claiming that a system needs to be non-deterministic to be sentient is saying it needs true randomness added in, which I think is a weird argument, however popular among the uninformed, given the complete lack of evidence that the human brain uses quantum effects or is non-deterministic.
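The determinism point is easy to demonstrate: a trained network's forward pass is a fixed function, so identical inputs produce bit-identical outputs (any variation in a chatbot's replies comes from the sampling step, not the network). A toy sketch, with random weights standing in for a "trained" model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Frozen weights for a tiny two-layer MLP, standing in for a trained model.
W1 = rng.standard_normal((8, 16))
W2 = rng.standard_normal((16, 4))

def forward(x):
    # Deterministic forward pass: no sampling, no weight updates.
    h = np.tanh(x @ W1)
    return h @ W2

x = rng.standard_normal(8)
y1, y2 = forward(x), forward(x)

assert np.array_equal(y1, y2)  # same input, bit-identical output
```

Adding sampled noise on top of the logits would make the outputs vary, but that randomness lives outside the network, which is the point being argued.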
Also, I cannot recommend Gödel, Escher, Bach enough; it makes a much stronger case than I ever could, and it is an amazing read.
>artificial neurons in neural networks don't have a continuously changing impulse pattern,
Not sure exactly what you are saying here, but it sounds pretty similar to RNNs, which are pretty old news, as Transformers seem to work much better at solving the issues this inability usually presents.
starlistener t1_jaoqzu8 wrote
Reply to comment by StellaAthena in [N] EleutherAI has formed a non-profit by StellaAthena
Will do! Thank you kindly!
mmmniple t1_jaopv6r wrote
Reply to comment by filipposML in [D] Are Genetic Algorithms Dead? by TobusFire
Thanks
filipposML t1_jaopq43 wrote
Reply to comment by mmmniple in [D] Are Genetic Algorithms Dead? by TobusFire
The latest version is here: https://2022.ecmlpkdd.org/wp-content/uploads/2022/09/sub_1229.pdf