Recent comments in /f/MachineLearning
EscanorFTW t1_j6d9ee8 wrote
Reply to [D] Simple Questions Thread by AutoModerator
What are some good places to start if you're just getting into ML/AI? Please share useful links/resources
throwaway2676 t1_j6d99fw wrote
Reply to comment by currentscurrents in [R] Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers by currentscurrents
So shouldn't this mean we can train transformers using forward passes alone? It seems that it wouldn't be too difficult to derive an algorithm that updates the attention weights based on these results, but I don't believe the authors mention the possibility.
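For what it's worth, the dual form the paper describes is easy to check numerically for unnormalized linear attention; here's a toy numpy sketch (shapes and names are mine, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 8                      # embedding dim, number of in-context examples
K = rng.normal(size=(n, d))      # keys, one per in-context demonstration
V = rng.normal(size=(n, d))      # values
q = rng.normal(size=d)           # query token

# Linear (unnormalized) attention: sum_i v_i * (k_i . q)
attn_out = sum(V[i] * (K[i] @ q) for i in range(n))

# Dual view: a linear layer W0 = 0 "updated" by outer-product deltas
# dW = sum_i v_i k_i^T, then applied to the query -- one implicit GD-style step
dW = sum(np.outer(V[i], K[i]) for i in range(n))
gd_out = dW @ q

print(np.allclose(attn_out, gd_out))  # True: the two views coincide
```

The identity itself is just linear algebra; the interesting part in the paper is interpreting the outer-product update as an implicit gradient step on the demonstrations.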
Artgor t1_j6d8pd5 wrote
Reply to comment by shaner92 in [D] How do people keep up with ML news that is not NLP related? by shaner92
I divide the links into 3 groups:
- relevant to what I'm doing currently - I open and read them;
- possibly interesting - for example, I'm interested in NLP and CV but not in time series or audio. I save these links into relevant folders;
- not in my area of interest - ignore them;
LetWrong1932 t1_j6d7hoq wrote
Reply to comment by AvailablePresent1113 in [D] CVPR Reviews are out by banmeyoucoward
I've heard there was a B at post-rebuttal last year, so maybe there is for CVPR
shaner92 OP t1_j6d5yr7 wrote
Reply to comment by Artgor in [D] How do people keep up with ML news that is not NLP related? by shaner92
>How does information gathering differ between those in Applied ML and AI researchers (or even further, between those in Business Analytics and those in more 'AI' fields)
I had Data Elixir, will check the rest. Maybe it's time to trim some of the other newsletters that were probably 'influencers' trying to get easy news items off of ChatGPT.
Curious though, do you get these newsletters for general ML news, and focus on industry specifics for use cases? Or try to keep up with research papers in your area?
trnka t1_j6d5fbk wrote
Reply to comment by [deleted] in [D] Simple Questions Thread by AutoModerator
I think most people split by participant. I don't remember if there's a name for that, sorry! Hopefully someone else will chime in.
If you have data from multiple hospitals or facilities, it's also common to split by that, because there can be hospital-specific things in the data, and you really want your evaluation to estimate the quality of the model for patients not in your data, at hospitals not in your data.
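To add to this, scikit-learn calls this kind of split a "group" split (GroupShuffleSplit / GroupKFold). A minimal sketch with made-up toy records, splitting by patient ID so no patient lands on both sides:

```python
from sklearn.model_selection import GroupShuffleSplit

# Toy records: features, labels, and a patient ID per row (all made up)
X = [[0.1], [0.2], [0.3], [0.4], [0.5], [0.6]]
y = [0, 1, 0, 1, 0, 1]
patients = ["p1", "p1", "p2", "p2", "p3", "p3"]

# Split whole patients (groups), not individual rows
splitter = GroupShuffleSplit(n_splits=1, test_size=0.33, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=patients))

train_patients = {patients[i] for i in train_idx}
test_patients = {patients[i] for i in test_idx}
print(train_patients & test_patients)  # set(): no patient appears on both sides
```

The same idea works for hospitals: pass the hospital ID as `groups` instead, and the evaluation then estimates performance on unseen hospitals.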
starstruckmon t1_j6d3lsr wrote
Reply to comment by Maximum-Nectarine-13 in [D] MusicLM: Generating Music From Text by carlthome
I can guarantee the next paper out of this Google team is going to be a diffusion model (instead of AudioLM) conditioned on MuLan embeddings.
The strength of the Google model is the text understanding which is coming from the MuLan embeddings. While the strength of the work you highlighted is the quality from the diffusion model.
It's the obvious next step following the same path as Dalle1->Dalle2.
Artgor t1_j6d2xjw wrote
First of all, it is important to understand that we can't keep up with everything. There are too many things happening around us to be able to know all of them.
That being said, I'm subscribed to the following newsletters:
- Data Elixir
- Data Machina
- DataScienceWeekly
They cover most of the advances, I think.
Redditing-Dutchman t1_j6cteqn wrote
Reply to [D] Is MusicGPT a viable possibility? by markhachman
I think copyright is more of an issue than with artwork. Human brains are so sensitive to and well trained on music that you immediately recognize a familiar tune. Plus, the music industry as a whole is much more sensitive about copyright, maybe because there is a lot of money involved in it. Not sure. I can understand why Google keeps theirs away from the public for now.
gamerx88 t1_j6cqerx wrote
It's not about large data or the number of parameters. OpenAI has not actually revealed details regarding ChatGPT's architecture and training. What is special is the fine-tuning procedure: alignment through RLHF on the underlying LLM (nicknamed GPT-3.5) that is extremely good at giving "useful" responses to prompts/instructions.
Prior to this innovation, zero-shot and in-context few-shot learning with LLMs hardly worked. Users had to trial-and-error their way to some obtuse prompt to get the LLM to generate a sensible response, if it worked at all. This is because LLM pre-training is purely about language structure without accounting for intent (what the human wishes to obtain via the prompt). Supervised fine-tuning on instruction/output pairs helped, but not by much. With RLHF, however, the process is so effective that a mere 6B-parameter model (fine-tuned with RLHF) is able to surpass a 175B-parameter model. Check out the InstructGPT paper for details.
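For intuition, the reward-modeling step in the InstructGPT paper boils down to a pairwise preference loss. A minimal sketch, with scalar rewards standing in for a full reward model (function name is mine):

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """InstructGPT-style pairwise loss: -log sigmoid(r_chosen - r_rejected).
    Small when the reward model scores the human-preferred response higher."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# Reward model agrees with the human labeler -> small loss
print(round(preference_loss(2.0, -1.0), 3))  # 0.049
# Reward model ranks the pair backwards -> large loss
print(round(preference_loss(-1.0, 2.0), 3))  # 3.049
```

The reward model trained with this loss then provides the reward signal for the PPO stage of RLHF.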
vivehelpme t1_j6cno58 wrote
Reply to comment by currentscurrents in [N] OpenAI has 1000s of contractors to fine-tune codex by yazriel0
22 hours of video content per day?
AvailablePresent1113 t1_j6cnhq4 wrote
Reply to comment by LetWrong1932 in [D] CVPR Reviews are out by banmeyoucoward
Well, I previously submitted to ICCV, so this is just my guess, as ICCV did not have a B post-rebuttal. So a B after rebuttal is still valid?
Iffysituation t1_j6cn3qd wrote
Reply to [D] Meta AI Residency 2023 by BeautyInUgly
I wanted to apply, but I didn't finish my essay in time 😠Are there any other residencies you all are looking at?
omgpop t1_j6cmydr wrote
Reply to comment by JohnConquest in [R] InstructPix2Pix: Learning to Follow Image Editing Instructions by Illustrious_Row_9971
There’s Buzz.
JohnConquest t1_j6cmp5z wrote
Reply to comment by nmkd in [R] InstructPix2Pix: Learning to Follow Image Editing Instructions by Illustrious_Row_9971
Fantastic! All your AI GUIs are great stuff.
Would love to see a GUI for Whisper sooner or later; there isn't really a good, all-in-one install for it out there, AFAIK.
whisp96 t1_j6cm1c7 wrote
Nice interface
luaks1337 t1_j6clz9v wrote
Reply to comment by VirtualHat in [N] OpenAI has 1000s of contractors to fine-tune codex by yazriel0
Ah, I thought you meant that video and audio would be the next step for text mining.
I believe OpenAI confirmed that they are already working on a text-to-video model. My guess would be that current algorithms could do it, but that it would be far too expensive to train on videos.
VirtualHat t1_j6ckblf wrote
Reply to comment by luaks1337 in [N] OpenAI has 1000s of contractors to fine-tune codex by yazriel0
I was thinking next frame prediction, perhaps conditioned on the text description or maybe a transcript. The idea is you could then use the model to generate a video from a text prompt.
I suspect this is far too difficult to achieve with current algorithms. It's just interesting that the training data is all there, and would be many, many orders of magnitude larger than GPT-3's training set.
LetWrong1932 t1_j6cj48c wrote
Reply to comment by AvailablePresent1113 in [D] CVPR Reviews are out by banmeyoucoward
why is there no B after rebuttal?
nmkd t1_j6cipol wrote
In case someone is interested, I implemented this in my Stable Diffusion Windows GUI:
(Source Code: https://github.com/n00mkrad/text2image-gui/)
luaks1337 t1_j6chxhv wrote
Reply to comment by VirtualHat in [N] OpenAI has 1000s of contractors to fine-tune codex by yazriel0
Also, spoken words differ a lot from thoughtful written text. Training on 1:1 transcriptions would yield bad results in terms of grammar and readability. They could solve this by using a GPT model to rewrite the transcriptions, but then you're training AI on AI, which could lead to bias.
MemeBox t1_j6ch49b wrote
Reply to comment by Hannekiii in [R] META presents MAV3D — text to 3D video by SpatialComputing
Not yet.
isellmyart t1_j6cghdi wrote
Reply to comment by regular-jackoff in [D] MusicLM: Generating Music From Text by carlthome
And hyperspecialization
:)
Low_Basil9900 t1_j6da8hb wrote
Reply to [R] InstructPix2Pix: Learning to Follow Image Editing Instructions by Illustrious_Row_9971
All AI art is gross and you can't convince me otherwise.