Recent comments in /f/MachineLearning

dataslacker t1_j7gfa6c wrote

There’s probably some resentment that Google and Meta could have released something similar over a year ago but chose not to because they didn’t think it would be responsible. Now the company that was founded on being “responsible” released it to the world in a way that hasn’t satisfied a lot of researchers.

5

po-handz t1_j7gemeh wrote

I'm curious how much you've interacted with the homeless? Any soup kitchens or charity events? There's maybe a 1-in-50 chance you come across someone who's well put together, educated, has a job, but is just a few bucks short each month

those aren't the people waiting in line at the shelter

1

mrpogiface t1_j7g03gj wrote

Do we actually know that ChatGPT is the full 175B? With Codex being 13B and still enormously powerful, and previous instruction-tuned models (in the paper) being 6.7B, it seems likely that they have it working on a much smaller parameter count

7

mostlyhydrogen OP t1_j7fxwyx wrote

>ScaNN interface features

Nope. Notice that the results have shape (10000, 20) instead of (20,). That is just doing a batched query i.e. "for each of these 10k input vectors, find me 20 neighbors". What I need is a joint query, i.e. "given these 10k positive examples, give me an additional 20 candidate samples".
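To make the distinction concrete, here is a minimal plain-Python sketch (this is an illustration of the two query types, not the ScaNN API itself; the function names and the centroid-based notion of a "joint" query are my own assumptions):

```python
# Batched query vs. joint query over a small 2-D dataset.
from math import dist

def batched_query(dataset, queries, k):
    # For EACH query vector, return the indices of its k nearest
    # dataset points: output has shape (len(queries), k), analogous
    # to the (10000, 20) batched result described above.
    return [
        sorted(range(len(dataset)), key=lambda i: dist(dataset[i], q))[:k]
        for q in queries
    ]

def joint_query(dataset, positives, k):
    # Given a SET of positive examples, return k additional candidates
    # overall -- here, the k non-positive points closest to the
    # positives' centroid (one possible notion of a joint query).
    centroid = [sum(c) / len(positives) for c in zip(*positives)]
    pos = {tuple(p) for p in positives}
    ranked = sorted(
        (p for p in dataset if tuple(p) not in pos),
        key=lambda p: dist(p, centroid),
    )
    return ranked[:k]

data = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]]
print(batched_query(data, [[0.1, 0.1], [5.5, 5.0]], k=2))  # one row per query
print(joint_query(data, [[5.0, 5.0], [6.0, 5.0]], k=1))    # one shared candidate list
```

The batched call returns an independent neighbor list per input vector, while the joint call pools the positives into a single query and returns one shared candidate list.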

2

supersoldierboy94 OP t1_j7fcbh3 wrote

Meta is a leader in the research community alongside Google as top contributors. The funny thing is that he started posting that graph of AI-related paper contributions to show supremacy and to undermine OpenAI and DeepMind as merely consumers of research. But Meta hasn't delivered any product from their research to the public. When they tried, they immediately shut it down.

He also kinda blames public perception for why Meta can't release products without scrutiny, pointing out that people still heavily criticize Facebook/Meta, for obviously good reasons given its past.

It is indeed a massive milestone, maybe a bit above Stable Diffusion. I'd still argue that GitHub Copilot was bigger, but since it's mainly for devs, it didn't get the publicity it deserved. It's a massive milestone because common folks pondered the idea of AI takeover, which has shifted everyone else's perspective on the domain. It's the culmination of decades of R&D that the public can interact with -- a gateway to AI and its complexities.

Common folks and the public do not really care about sophisticated algos that never see the light of day.

−1

beezlebub33 t1_j7fbjii wrote

"Henry Ford did nothing revolutionary, the engineering work in making a car isn't particularly difficult, it's just perceived that way by the public. There will be a half dozen other car manufacturers in 6 months."

LeCun is going too far the opposite way. I would not be surprised if he has access to systems at FAIR that could do something similar, so he dismisses the whole thing or misses the main point. But, like Ford, what OpenAI has done with DALL-E 2 and ChatGPT is make AI usable and available to us benighted common folk.

It doesn't matter whether Google and Meta not releasing something like this is a "can't" or a "won't". It's all the same to the rest of humanity, who can't use it in either case.

13

bacon_boat t1_j7fbjcg wrote

I think his view reflects his disappointment as a researcher that it's not novel ideas and algorithms that lead to success. It's scale + engineering.

But anyone with a broader view sees that ChatGPT represents a massive milestone for AI.
Who really cares how novel the algorithms are? OpenAI built a killer product and deserves the recognition.

LeCun is maybe also salty because DeepMind / OpenAI are perceived as leaders, and Meta isn't.

−4