Recent comments in /f/MachineLearning
CodeAllDay1337 t1_j5xyiux wrote
I think this one is a good start:
Differentiation of Blackbox Combinatorial Solvers
https://arxiv.org/abs/1912.02175
keisukegoda3804 t1_j5xy9x3 wrote
Reply to comment by Kacper-Lukawski in [D] Efficient retrieval of research information for graduate research by [deleted]
Do you happen to know how fast it is compared to other services that build filtering into their vector search (Pinecone, Milvus, etc.)?
marcelomedre t1_j5xxv7t wrote
Reply to [D] Simple Questions Thread by AutoModerator
Hi, I have a question about k-means. I have a data frame with 100 variables after removing low-variance and highly correlated ones. I know the data must be normalized for k-means, especially to remove the range dependency, but I am facing a problem: if I normalize my data, the algorithm does not separate the clusters properly. I have 3 variable ranges in my data:
- 0 to 10^4;
- -10^3 to 10^3;
- 0 to 10^3
I have at least 5 very specific clusters that I can characterize by not scaling the data, but I am not comfortable with this procedure.
I couldn't find a reasonable explanation for why the algorithm performs better on the non-scaled data than on the scaled data.
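A minimal numpy sketch of why range matters for k-means (all numbers illustrative): without scaling, the Euclidean distances that k-means minimizes are dominated by the widest-range feature; after per-feature standardization, features contribute on comparable scales.

```python
import numpy as np

# Two observations that differ a lot (relatively) in a small-range feature
# and only modestly (relatively) in a large-range feature.
a = np.array([2.0, 1000.0])   # feature 1 in 0-10, feature 2 in 0-10^4
b = np.array([9.0, 1100.0])

# Unscaled: the large-range feature dominates the Euclidean distance.
print(np.abs(a - b))          # [7. 100.] -> feature 2 contributes ~14x more

# Standardize each feature with (illustrative) column statistics
# computed over the full data frame:
mean = np.array([5.0, 1050.0])
std = np.array([3.0, 50.0])
a_z, b_z = (a - mean) / std, (b - mean) / std
print(np.abs(a_z - b_z))      # both features now contribute on a similar scale
```

Note the flip side, which may explain the observation above: if the cluster structure genuinely lives in the raw magnitudes, standardization can dilute exactly the signal that separates the clusters; per-feature min-max scaling or rank transforms are alternatives worth comparing.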
Gershel t1_j5xxpny wrote
Reply to [D] CVPR Reviews are out by banmeyoucoward
1 WR (2) and 2 WA (4). Don't think it gets in unless I get to change the WR with a good rebuttal. But he/she basically didn't get the motivation and didn't understand the method...
its_ya_boi_Santa t1_j5xwuc9 wrote
Reply to comment by hellrail in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
The sentiment still stands, I hope you get out of this rut you're in. "This too shall pass", as they say.
hellrail t1_j5xwq72 wrote
Reply to comment by its_ya_boi_Santa in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
Wrong. It's the very same account.
And in your previous answer, when you thought I was somebody different, you already explained why you did it; now you claim not to remember. Hahaha. You are contradictory and nonsensical as usual.
its_ya_boi_Santa t1_j5xworg wrote
Reply to comment by hellrail in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
I have no idea what you wrote; it didn't bother me enough to remember it all this time later. If you made a new account just to come back to your old arguments, that's really sad, dude. I hope you can better yourself and have a good life.
hellrail t1_j5xsq46 wrote
Reply to comment by its_ya_boi_Santa in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
Ah, so there was nothing wrong in my statement but you just wanted to be obnoxious?
Good that you admit it.
PS: I'm the guy haha
its_ya_boi_Santa t1_j5xqz3k wrote
Reply to comment by hellrail in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
The guy was just being obnoxious hence all the deleted comments and starting his replies with "wrong." before writing huge walls of text
olegranmo OP t1_j5xpnj2 wrote
Reply to comment by deeceeo in [R] Tsetlin Machine in Medical Research - Striking Differences Between Tsetlin Machine Interpretability and Deep Learning Attention by olegranmo
Great question! Rudin et al.'s approach elegantly builds an optimal decision tree through search. The TM learns online, processing one example at a time, like a neural network. Also, like logistic regression, the TM adds up evidence from different features; however, it builds non-linear logical rules instead of operating on single features. The TM also supports convolution for image processing and time series. It can also learn from penalties and rewards, addressing the contextual bandit problem. Finally, TMs allow self-supervised learning by means of an auto-encoder. So, quite different from decision trees.
Kacper-Lukawski t1_j5xp10a wrote
Reply to comment by keisukegoda3804 in [D] Efficient retrieval of research information for graduate research by [deleted]
Each vector may have a payload object: https://qdrant.tech/documentation/payload/ Payload attributes can be used to put additional constraints on the search results: https://qdrant.tech/documentation/filtering/ The unique feature is that the filtering is built into the vector search phase itself, so there is no need to pre- or post-filter the results.
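The difference between post-filtering and filtering inside the search phase can be sketched without any particular engine (plain numpy, all names and data illustrative): post-filtering lets non-matching vectors waste top-k slots, while filter-aware search excludes them before ranking.

```python
import numpy as np

vectors = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.8, 0.2]])
payloads = [{"year": 2020}, {"year": 2023}, {"year": 2023}, {"year": 2023}]
query = np.array([1.0, 0.0])

def search_with_filter(query, k, predicate):
    """Filter-aware search: non-matching vectors never enter the top-k."""
    scores = vectors @ query                  # dot-product similarity
    mask = np.array([predicate(p) for p in payloads])
    scores = np.where(mask, scores, -np.inf)  # exclude during scoring
    return [int(i) for i in np.argsort(-scores)[:k] if mask[i]]

def search_then_postfilter(query, k, predicate):
    """Post-filtering: take the top-k first, then drop non-matching hits."""
    order = np.argsort(-(vectors @ query))[:k]
    return [int(i) for i in order if predicate(payloads[i])]

pred = lambda p: p["year"] == 2023
print(search_with_filter(query, 2, pred))      # [1, 3] -> k matches when they exist
print(search_then_postfilter(query, 2, pred))  # [1]    -> index 0 wasted a slot
```

Real engines do this inside the approximate index rather than with a dense mask, but the payoff is the same: the filter constrains candidates during the search, not after it.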
w2ex t1_j5xo3bp wrote
Reply to [D] Pretraining for CNN by Dense-Smf-6032
There are of course pre-trained CNN models (most of the time pre-trained on ImageNet in a supervised manner). If you are asking about self-supervised pre-training for CNNs specifically, have a look at the recent papers ConvNeXt V2 and SparK (BERT-style pre-training for convnets).
currentscurrents t1_j5xnyrw wrote
Reply to comment by mudkip-hoe in Machine learning and black box numerical solver[D] by Due-Wall-915
Link for the lazy: https://arxiv.org/abs/1806.07366
rapist1 t1_j5xmv9n wrote
Reply to comment by koolaidman123 in [D] Self-Supervised Contrastive Approaches that don’t use large batch size. by shingekichan1996
How do you implement the caching? You have to cache all the activations to do the backward pass.
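One common answer (GradCache-style two-pass training) is to cache embeddings rather than activations: embed everything without gradient tracking, compute the batch-level loss gradient with respect to the embeddings, then re-encode small chunks and accumulate parameter gradients from the cached embedding gradients. A numpy sketch with a linear "encoder" and hand-written gradients (all shapes and the toy loss are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))     # "large batch" of inputs
W = rng.normal(size=(4, 3))     # linear encoder: z = x @ W

# Pass 1: embed the full batch without keeping activations (cheap in memory).
Z = X @ W

# Batch-level loss, e.g. L = 0.5 * ||sum_i z_i||^2, a stand-in for a
# contrastive loss that couples all examples in the batch.
s = Z.sum(axis=0)
G = np.tile(s, (len(Z), 1))     # cached dL/dz_i = s for every i

# Pass 2: re-encode in small chunks and accumulate dL/dW using cached G.
grad_W = np.zeros_like(W)
for start in range(0, len(X), 2):           # chunk size 2
    x_c, g_c = X[start:start + 2], G[start:start + 2]
    grad_W += x_c.T @ g_c                   # chain rule through z = x @ W

# Matches the full-batch gradient computed in one shot.
assert np.allclose(grad_W, X.T @ G)
```

In an autograd framework, pass 2 backprops a surrogate like `(z_chunk * g_chunk).sum()` per chunk, so only one chunk's activations are live at a time while the loss still sees the full batch.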
KingsmanVince t1_j5xk2e3 wrote
Reply to comment by nins_ in [D] Pretraining for CNN by Dense-Smf-6032
Related link: https://keremturgutlu.github.io/self_supervised/#Vision
nins_ t1_j5xjw35 wrote
Reply to [D] Pretraining for CNN by Dense-Smf-6032
Do you mean self-supervised learning for CNNs? SimCLR does work on CNNs. Also check out SOCO, SCRL, BYOL - there's a lot.
veb101 t1_j5xjuy0 wrote
Reply to [P] Diffusion models best practices by debrises
I'm also starting a similar project, but it just involves writing DDPM from scratch. In the past few days I saw some papers on diffusion in the medical domain; maybe you can skim through those and see how diffusion models are used there.
KingsmanVince t1_j5xicem wrote
Reply to comment by Daango_ in [D] Pretraining for CNN by Dense-Smf-6032
Or like this https://keras.io/api/applications/ ?
Daango_ t1_j5xewol wrote
Reply to [D] Pretraining for CNN by Dense-Smf-6032
dineNshine t1_j5xeqvi wrote
Reply to comment by mirrorcoloured in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
Embedding watermarks into images directly is one thing. OP suggested changing model parameters such that the model produces watermarked images, which is different. Editing model parameters in a functionally meaningful way would be hard without affecting performance. It seems like you are referring to a postprocessing approach, which is along the lines of what I recommended in general for curating model outputs. In this instance, this kind of solution wouldn't perform the function OP intended, which is preventing users from generating images without the watermark (since postprocessing is not an integral part of the model and is easy to remove from the generation process).
It is conceivable that the parameters could be edited in an otherwise non-disruptive way, although unlikely imo. I don't like this kind of approach in general though. The community seems to channel a lot of energy into making these models worse to "protect people from themselves". I despise this kind of intellectual condescension.
mudkip-hoe t1_j5xee6f wrote
Look at Neural ODEs from NeurIPS 2018
zaptrem t1_j5xacgt wrote
Reply to [P] Diffusion models best practices by debrises
Try to get your data as close as possible to a normal distribution with a low variance. What type of data?
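For images, the usual convention (assumed here) is mapping [0, 255] pixels to [-1, 1] so the data roughly matches the scale of the noise; for tabular or other data, per-feature standardization is the analogous move:

```python
import numpy as np

def to_model_range(img_uint8):
    """Map [0, 255] pixel values to [-1, 1], the common DDPM input range."""
    return img_uint8.astype(np.float32) / 127.5 - 1.0

def standardize(x, eps=1e-8):
    """Per-feature zero mean, unit variance for non-image data."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + eps)

img = np.array([[0, 128, 255]], dtype=np.uint8)
print(to_model_range(img))   # [[-1.0, 0.0039..., 1.0]]
```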
catndante t1_j5x6k5e wrote
Reply to [D] Simple Questions Thread by AutoModerator
Hi, I have a simple question about the DDPM model. I'm not so sure, but I think I read a post saying that when T=1000, using 1,000 separate models (one per step) would perform better but is computationally too redundant, so DDPM uses the same model for every step t. Is this argument correct? If centers with huge compute did this, would the performance be better?
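For context on how one network handles all steps: DDPM conditions the shared network on t via a sinusoidal timestep embedding, so every step gets a distinct conditioning vector while the weights are shared (and amortized) across steps. A sketch of the embedding (dimension and constants illustrative, following the usual transformer-style recipe):

```python
import numpy as np

def timestep_embedding(t, dim):
    """Sinusoidal embedding of an integer timestep, transformer-style."""
    half = dim // 2
    freqs = np.exp(-np.log(10000.0) * np.arange(half) / half)
    angles = t * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])

# The same network receives x_t plus this embedding, so step t=3 and
# step t=997 are conditioned differently despite shared weights.
e3, e997 = timestep_embedding(3, 64), timestep_embedding(997, 64)
print(e3.shape)   # (64,)
```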
hellrail t1_j5x5u8o wrote
Reply to comment by its_ya_boi_Santa in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
And what exactly is wrong about the statement?
kdqg t1_j5xzfx4 wrote
Reply to [D] Self-Supervised Contrastive Approaches that don’t use large batch size. by shingekichan1996
VICReg