Recent comments in /f/MachineLearning
PassionatePossum t1_j7f1xr8 wrote
Reply to High-speed cameras and deep learning [Research] by A15L
I'm not sure I follow the question. Why would there need to be special research for high-FPS cameras? The challenge with all video-based systems is to capture long-range dependencies. And "long-range" is defined over the number of frames. How much time has elapsed between the frames doesn't really matter.
However, if you have a high-FPS camera and a slow-moving scene, you'll have a lot of images that are pretty much identical to each other. That means, according to information theory, there is very little additional information in each frame. In that case you might want to consider doing temporal downsampling on your data. If you have a fast-moving scene and you really need to take advantage of updating your prediction for every single frame, the only constraint is processing power.
So in that case, the problem of inference for high-FPS cameras reduces to building computationally efficient models. And there are a few models that are intended to be run on mobile devices. Maybe you want to look into those.
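To illustrate the temporal-downsampling idea above, here is a minimal sketch (Python with NumPy; function names and thresholds are my own, not from any specific library): keep every k-th frame, or keep a frame only when it differs enough from the last kept one.

```python
import numpy as np

def downsample_stride(frames: np.ndarray, stride: int) -> np.ndarray:
    """Simple temporal downsampling: keep every `stride`-th frame."""
    return frames[::stride]

def downsample_by_change(frames: np.ndarray, threshold: float) -> np.ndarray:
    """Keep a frame only if it differs enough from the last kept frame,
    measured by mean absolute pixel difference."""
    kept = [frames[0]]
    for frame in frames[1:]:
        diff = np.abs(frame.astype(np.float32) - kept[-1].astype(np.float32)).mean()
        if diff > threshold:
            kept.append(frame)
    return np.stack(kept)

# Toy example: 250 frames of a nearly static scene with one abrupt change.
video = np.zeros((250, 8, 8), dtype=np.uint8)
video[100:] = 200  # scene changes at frame 100

print(len(downsample_stride(video, 5)))      # 50 frames survive
print(len(downsample_by_change(video, 10)))  # only 2 "informative" frames survive
```

The change-based variant makes the information-theoretic point concrete: a 250 FPS clip of a mostly static scene collapses to a handful of frames that actually carry new information.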
ricafernandes t1_j7f1t9v wrote
Reply to High-speed cameras and deep learning [Research] by A15L
The issue is probably processing time. Not many good image models or applications can run 250 times a second to process each of those frames; at 250 FPS you only have 4 ms per frame, and inference usually takes longer than that.
[deleted] t1_j7f1fx6 wrote
[removed]
supersoldierboy94 OP t1_j7f1die wrote
Reply to comment by OneMillionSnakes in [D] Yann Lecun seems to be very petty against ChatGPT by supersoldierboy94
> some variant of it

Just the other day, some researchers released BioGPT, which is trained on biomedical text. It's particularly good. Still needs some time to test its accuracy against real medical professionals.
I'd respectfully disagree on the usage. While it has been shown to generate weird sequences, with the right usage you can guide it to create particularly effective articles and stories. Its summarization tool is also good. Grammar is particularly good as well.
> What chatGPT represents to him
It can be true and petty at the same time. When asked, he reverts to complaining about why Galactica was shut down, blaming the people who used it, and asking why ChatGPT makes more mistakes but is still standing. Why would someone also suddenly post a paper-contribution chart saying that others just 'consume' the research?
alkibijad OP t1_j7f1b2e wrote
Reply to comment by vade in [D] Apple's ane-transformers - experiences? by alkibijad
Looking forward to hearing their experiences!
OneMillionSnakes t1_j7f0auj wrote
I agree with most of those statements. I don't think he's being petty he's just being honest about what ChatGPT represents to him.
Now I am biased as on a personal level I'm kinda sick of ChatGPT. It's good at carrying on a brief chat and it's very well polished. But it's quite mundane and people are already talking about using it or some variant to make marketing and web pages in a web that's already full of AI generated articles and targeted ads. It should be used perhaps for chats when trained on a corpus including some support docs or something. Not much more than that.
I do think there could be some negative ramifications in the worst case. I have a friend who's a graphic designer at a major company and who's been told by her employers that this is the future of ads. Higher-ups say stuff like this all the time and it doesn't wind up coming true, so it hopefully won't become a real problem. Still, it's a bit concerning that people on the outside of these fields are perhaps overvaluing ChatGPT so much.
red-necked_crake t1_j7exuvy wrote
Reply to comment by redlow0992 in [D] Yann Lecun seems to be very petty against ChatGPT by supersoldierboy94
not to mention being a company that is willing to put out huge ass models AND training logs which is infinitely more useful to our community than three vague blogposts and 1000 retweets by ex web3 grifters on twitter claiming GPT-4 will quite literally have 100 trillion parameters and worshipping Sam Altman as God LOL.
People keep claiming that others dismiss engineering effort that went into ChatGPT, GPT3, and turn a blind eye to relative opaqueness on techniques and tricks that went into making these models happen (not even a dataset available). Other than showing a proof of concept (which is SIGNIFICANT but not sufficient for SCIENCE), how exactly do we, as a community of ML, benefit from OpenAI getting all the hype and Satya's money? (Whisper is a weird counterpoint to my arguments though.)
Cheap_Meeting t1_j7exknk wrote
English due to data availability.
ai_master_central t1_j7euxkp wrote
What we need is a completely new language designed to be the bridge between human and machine. That would be ideal. Maybe we can train a multi-language model to create a perfect human language.
ok531441 t1_j7eu88c wrote
Reply to comment by du_dt in [D] Yann Lecun seems to be very petty against ChatGPT by supersoldierboy94
Galactica was doomed to fail because it was specifically marketed as a science tool which puts very high expectations on factual and mathematical correctness. ChatGPT on the other hand is marketed as chat.
supersoldierboy94 OP t1_j7esbbj wrote
Reply to comment by du_dt in [D] Yann Lecun seems to be very petty against ChatGPT by supersoldierboy94
Fair point. But why is he blaming the people instead of his own company, going as far as saying "it's just people destroying Meta's reputation"?
I have high respect for him as a researcher, and in fact I've read his books and papers. He's great when he speaks as a researcher. It's different when he speaks as a Meta employee vested with the company's interests. That's why I take his Meta-driven statements for/against companies with a grain of salt.
I won't even be surprised if the big tech companies are behind the Stable Diffusion/Midjourney lawsuit, since it would do them good, considering that Meta partnered with Shutterstock to produce their own.
supersoldierboy94 OP t1_j7ervit wrote
Reply to comment by rafgro in [D] Yann Lecun seems to be very petty against ChatGPT by supersoldierboy94
Exactly. He was blaming the users for the Galactica debacle and wondering why OpenAI's ChatGPT is getting adoption when "it spews the same BS", as per his words. He also proceeded to say that it's just because people have been destroying Meta's reputation overall.
rafgro t1_j7eq9ek wrote
Nah, it's not engineering vs science or OS vs closed. It's much simpler:
>FAIR's Galactica. People crucified it because it could generate nonsense. ChatGPT does the same thing.
YLC threw a fit over the whole Galactica debacle. He had lovely aggressive tweets such as "Galactica demo is off line for now. It’s no longer possible to have some fun by casually misusing it. Happy?" or describing people who disliked Galactica as "easily scared of new technology". To see the success of ChatGPT just a few weeks later must have been really painful.
du_dt t1_j7eohva wrote
Meta AI released their Galactica model a month before ChatGPT, but it was heavily criticized for "dangerous AI-generated pseudoscience nonsense" and shut down a few days later. Now OpenAI does the same and everyone praises them. Well, I get why Yann is being salty about it.
supersoldierboy94 OP t1_j7enajd wrote
Reply to comment by danjlwex in [D] Yann Lecun seems to be very petty against ChatGPT by supersoldierboy94
> research isn't something that happens at startups
Entirely depends on the startup and the product. R&D happens at many startups. Unless someone has limited exposure to AI- and ML-oriented startups, this is far from the truth. OpenAI is an applied research company: they produce research papers and put them into production. In the electronics department, OnePlus has risen as a great R&D startup capable of producing rapid R&D-based products. Grammarly puts a ton of money into its R&D to create a more domain-specific GPT model because that is vital to their product.
> The divide you describe
One does not need to probe deeper into this. Ask an experienced Data Engineer, a Data Scientist, and a DevOps engineer. There is a clear DISTINCTION in what they do and how they balance each other. The divide isn't hostile. It's more of a "we want this, you can't have all of this" type of relationship, besides the usual difference of who works with what.
danjlwex t1_j7empve wrote
Reply to comment by supersoldierboy94 in [D] Yann Lecun seems to be very petty against ChatGPT by supersoldierboy94
In my 35 years of working with engineers, corporate researchers, and academics, I have not experienced this divide you describe. Research isn't something that happens at startups. There is no revenue to support research in a startup. The entire focus is on product.
supersoldierboy94 OP t1_j7em0v4 wrote
Reply to comment by whiskey_bud in [D] Yann Lecun seems to be very petty against ChatGPT by supersoldierboy94
That's not an ad hominem. An ad hominem attacks the person as the basis of the argument. Saying that this person is X based on Y is not ad hominem. It's a conclusion drawn from the quotes I laid down.
redlow0992 t1_j7elzrw wrote
Reply to comment by CKtalon in [D] Yann Lecun seems to be very petty against ChatGPT by supersoldierboy94
Are we only talking in the context of LLMs and language? If not, your statement is simply incorrect. In the past two years, FAIR published a number of high-quality self-supervised learning frameworks that come with open-source implementations. Off the top of my head, MoCo (and its versions), Barlow Twins, VICReg, and SwAV all came from FAIR. They are the ones who showed that SSL for computer vision does not need to be contrastive-only. Some of these papers have some 5K citations in the span of 3 years and are used by many researchers on a daily basis.
But yeah, tell me how they are chasing corporate KPIs and are publishing junk.
supersoldierboy94 OP t1_j7elvss wrote
Reply to comment by danjlwex in [D] Yann Lecun seems to be very petty against ChatGPT by supersoldierboy94
The beef does not exist. But the divide between research and engineering exists. It's one of the fundamental reasons why some startups fail: they don't know how to balance the two and don't know how to construct a team. There's a "divide" between data science and data engineering, and folks who work in those fields know that there is.
supersoldierboy94 OP t1_j7elmu3 wrote
Reply to comment by MrTacobeans in [D] Yann Lecun seems to be very petty against ChatGPT by supersoldierboy94
It's not bad. That's the entire point of the post.
MrTacobeans t1_j7ekqqr wrote
Reply to comment by supersoldierboy94 in [D] Yann Lecun seems to be very petty against ChatGPT by supersoldierboy94
But why is that bad? If the researchers wanted moola they should have made a business or published/ran the models they created from their own research. If you don't want to get stepped on by someone else talented enough to piece it together don't release your ideas.
Don't get butt hurt when a primarily publicity or capitalist based company implements your idea and makes it into a product.
whiskey_bud t1_j7ekfua wrote
Reply to comment by supersoldierboy94 in [D] Yann Lecun seems to be very petty against ChatGPT by supersoldierboy94
>Please point out the 'ad hominem' against him
I literally quoted it.
Rohit901 t1_j7ek7ix wrote
Lol I kinda agree with you here, and Lecun reminds me of Sheldon from Big Bang theory who is constantly berating and insulting engineers (Howard)
danjlwex t1_j7ejxtd wrote
Reply to comment by supersoldierboy94 in [D] Yann Lecun seems to be very petty against ChatGPT by supersoldierboy94
You have a lot of angst to work through, my friend. Really, you have built up some divide between research and engineering that simply does not exist.
supersoldierboy94 OP t1_j7f26hq wrote
Reply to comment by redlow0992 in [D] Yann Lecun seems to be very petty against ChatGPT by supersoldierboy94
He said for production. Meta hasn't produced fully baked, production-ready products from their research for public consumption.
That is the point of the post, and Yann's reaction as a Meta employee reeks of pettiness.
He first told everyone that ChatGPT is not revolutionary at all. That may be a fair point; it's debatable. Then he proceeded to post a chart depicting big tech like Meta and Google as the producers of research that others just consume. Then, when asked what research they have put into production, he claimed that it's not that they CAN'T, it's that they WON'T. Then he proceeded to bring up what happened in Meta's first attempt to do it, Galactica, which embarrassingly failed. So all in all, he seems to be criticizing these companies for just consuming established knowledge and sprinkling something on top of what Meta and Google have published.
I'd honestly expect Google and Meta to be quite cautious now about how they publish stuff, since OpenAI's moves build on top of the established research that they do.
No one said they are publishing junk, either; that's a strawman. The point is that he's being overly critical of startups like OpenAI, which consume established knowledge that Meta voluntarily opened to the public and have started to profit from it, while Meta has failed to produce something profitable or usable for public consumption.