Recent comments in /f/MachineLearning

drplan t1_jav01pf wrote

I think the best approach for this is to think about the search space and the fitness landscape. If different components of the solution vector can independently improve the fitness, crossover operators will have a positive impact.

Another aspect is the search space itself. Is it real-valued, is it binary, is it a tree-like structure, ...?

Traditionally, genetic algorithms operate on binary encodings, and they often work OK on problems which have binary solutions (a fixed-size vector of bits). These problems have no gradient to start with. However, one should investigate beforehand whether there are combinatorial approaches to solve the problem.
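As a toy illustration of the crossover point above: on OneMax (maximize the number of 1-bits), every bit contributes to fitness independently, so one-point crossover can recombine good partial solutions. A minimal sketch, all parameter values made up:

```python
import random

random.seed(0)

N_BITS = 32            # length of the binary solution vector
POP_SIZE = 40
GENERATIONS = 60
MUT_RATE = 1.0 / N_BITS

def fitness(bits):
    # OneMax: each bit improves fitness independently of the others
    return sum(bits)

def tournament(pop, k=3):
    # pick the best of k random individuals
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # one-point crossover: splice a prefix of one parent onto the other
    point = random.randint(1, N_BITS - 1)
    return a[:point] + b[point:]

def mutate(bits):
    # flip each bit with small probability
    return [b ^ 1 if random.random() < MUT_RATE else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP_SIZE)]

best = max(pop, key=fitness)
print(fitness(best))  # approaches N_BITS
```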

For real-valued problems with no gradient, evolution strategies with a smart mutation operator like CMA (covariance matrix adaptation) would be a good choice.
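CMA-ES itself is too involved for a comment, but the basic flavor (self-adapting the mutation distribution) can be sketched with a (1+1)-ES using the classic 1/5 success rule on a toy sphere function. Everything here is illustrative; for real problems you'd reach for a library like pycma:

```python
import math
import random

random.seed(0)

def sphere(x):
    # toy gradient-free objective; minimum 0 at the origin
    return sum(v * v for v in x)

dim = 5
x = [random.uniform(-5, 5) for _ in range(dim)]
f_x = sphere(x)
sigma = 1.0  # global mutation step size, adapted online

for _ in range(2000):
    # Gaussian mutation of the current point
    y = [v + sigma * random.gauss(0, 1) for v in x]
    f_y = sphere(y)
    if f_y <= f_x:
        x, f_x = y, f_y
        sigma *= math.exp(0.8)   # success: widen the search
    else:
        sigma *= math.exp(-0.2)  # failure: shrink it
    # the two factors balance out when ~1/5 of mutations succeed,
    # which is the Rechenberg 1/5 success rule

print(f_x)
```

CMA goes further by adapting a full covariance matrix rather than a single scalar step size, so it can also learn correlations between variables.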

1

currentscurrents t1_jatvmtm wrote

You're right, I misread it. I thought they held out 4 patients for tests. But upon rereading, their dataset only had 4 patients total and they held out the set of images that were seen by all of them.

>NSD provides data acquired from a 7-Tesla fMRI scanner over 30–40 sessions during which each subject viewed three repetitions of 10,000 images. We analyzed data for four of the eight subjects who completed all imaging sessions (subj01, subj02, subj05, and subj07).

...

>We used 27,750 trials from NSD for each subject (2,250 trials out of the total 30,000 trials were not publicly released by NSD). For a subset of those trials (N=2,770 trials), 982 images were viewed by all four subjects. Those trials were used as the test dataset, while the remaining trials (N=24,980) were used as the training dataset.
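
The split described in that quote (trials whose image was viewed by all four subjects go to the test set, everything else to training) can be sketched schematically. This is purely illustrative toy data, not the paper's code or the real NSD counts:

```python
import random
from collections import defaultdict

random.seed(0)

subjects = {"subj01", "subj02", "subj05", "subj07"}
# hypothetical stand-in data: (subject, image_id) trial records
trials = [(s, random.randrange(100)) for s in subjects for _ in range(200)]

# record which subjects saw each image
seen_by = defaultdict(set)
for subj, img in trials:
    seen_by[img].add(subj)

# images viewed by all four subjects define the held-out test trials
shared = {img for img, subs in seen_by.items() if subs == subjects}
test = [t for t in trials if t[1] in shared]
train = [t for t in trials if t[1] not in shared]

print(len(shared), len(test), len(train))
```

Note the held-out unit is the image, not the patient, which is exactly why the generalization question below matters.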

4 patients is small by ML standards, but with medical data you gotta make do with what you can get.

I think my second question is still valid though. How much of the image comes from the brain data vs from the StableDiffusion pretraining? Pretraining isn't inherently bad - and if your dataset is 4 patients, you're gonna need it - but it makes the results hard to interpret.

2

OrangeYouGlad100 t1_jatt83m wrote

This is what they wrote:

"For a subset of those trials (N=2,770 trials), 982 images were viewed by all four subjects. Those trials were used as the test dataset, while the remaining trials (N=24,980) were used as the training dataset."

That makes it sound like 982 images were not used for training.

2

A_HumblePotato t1_jati59p wrote

Looks interesting, but as another user pointed out, not particularly novel (aside from the decoder model being used). One thing I wish these studies did is test these models on subjects that weren't used to train the model, to see whether these methods generalize across people (or at least few-shot training/testing on new subjects). I do actually like the idea of using latent diffusion models for these tasks, as, long-term, our brain does not store perfect reconstructions of images.

23

SleekEagle OP t1_jaszawj wrote

It looks like, rather than conditioning on text, they condition on the fMRI, but it's unclear to me exactly how they map between the two and why this would even work without finetuning. TBH I haven't had time to read the paper so I don't know the details, but figured I'd drop the paper in case anyone was interested!

7

Zestyclose-Debt-4712 t1_jasycf3 wrote

Does this research make any real sense? Creating a low-resolution image from brain activity has been done before and is amazing. But using a pretrained denoising network on the noisy image will just add details that have nothing to do with the brain activity - just like those AI "enlarge/zoom" models imagine/add details that never were in the original picture.

Or am I missing something here and they address the issue?

18

currentscurrents t1_jasxijr wrote

I'm a wee bit cautious.

Their test set is a set of patients, not images, so their MRI->latent space model has seen every one of the 10,000 images in the dataset. Couldn't it simply have learned to classify them? Previous work has very successfully classified objects based on brain activity.

How much information are they actually getting out of the brain? They're using StableDiffusion to create the images, which has a lot of world knowledge about images pretrained into it. I wish there was a way to measure how much of the output image is coming from the MRI scan vs from StableDiffusion's world knowledge.

16

xGovernor t1_jasx7r9 wrote

You needed the secret API key, included with the plus edition. Prior to Whispers I don't believe you could obtain a secret key. It also gave early access to new features and provided turbo on day one. I've also used it much more and got turbo to work with my plus subscription.

Had to find a workaround. Don't feel scammed. Plus I've been having too much fun with it.

1