Recent comments in /f/MachineLearning
toftinosantolama t1_j5rvxa3 wrote
Reply to comment by entarko in [D] CVPR Reviews are out by banmeyoucoward
Well, the ratings could be hidden... Not that this is the problem; the problem is that the reviews are really entitled and not willing to stand corrected. I've seen this so many times. And I'd bet these kinds of reviewers are PhD students, and not very clever ones.
entarko t1_j5rvhgc wrote
Reply to comment by toftinosantolama in [D] CVPR Reviews are out by banmeyoucoward
Since there is a discussion period, not having access to the initial reviews would only be a waste of time. As a reviewer, you would end up re-writing arguments that could simply have been read in the initial review.
toftinosantolama t1_j5ruw81 wrote
Reply to comment by entarko in [D] CVPR Reviews are out by banmeyoucoward
I might be wrong, of course; that's just my impression from a discussion earlier today. BTW, I think it would be fairer not to know.
entarko t1_j5rujuz wrote
Reply to comment by toftinosantolama in [D] CVPR Reviews are out by banmeyoucoward
There is an official discussion period between reviewers and the AC starting on the 31st of January. It would be weird not to know the other reviewers' ratings; it would be unprecedented, since reviewers have seen each other's ratings for at least the last three years.
toftinosantolama t1_j5rtquc wrote
Reply to comment by entarko in [D] CVPR Reviews are out by banmeyoucoward
They won't know about the accept afaik.
toftinosantolama t1_j5rtjj8 wrote
Reply to comment by KrakenInAJar in [D] CVPR Reviews are out by banmeyoucoward
In my experience this is absolutely rare. They don't flip.
Equivalent_Future207 t1_j5rth3t wrote
Reply to [D] CVPR Reviews are out by banmeyoucoward
3 reviewers: 2 weak rejects, 1 borderline. I will do my best... but I'm not sure whether the reviews can be flipped.
toftinosantolama t1_j5rtegr wrote
Reply to comment by sskdkn_pl in [D] CVPR Reviews are out by banmeyoucoward
I don't have an answer to your question, but given that the reviewers don't know each other's scores during the rebuttal, a good rebuttal could potentially raise all of them. Even with just the 2 borderlines raised to weak accepts, acceptance should be possible. Good luck, folks.
toftinosantolama t1_j5rsup2 wrote
Reply to comment by Rolling_Pig in [D] CVPR Reviews are out by banmeyoucoward
Should be fightable; you should write a good rebuttal. Just hope the reviewers are not the classic CVPR reviewers (ignorant juniors)...
banmeyoucoward OP t1_j5ros4d wrote
Reply to comment by bombay_doors in [D] CVPR Reviews are out by banmeyoucoward
The people want to know
limpbizkit4prez t1_j5rmvr6 wrote
I know you said you're interested in MATLAB or Julia, but I'm curious why not a Python library? A simple Google search turns up lots of PyTorch HFO solutions.
[deleted] t1_j5rf6rx wrote
Reply to comment by Rolling_Pig in [D] CVPR Reviews are out by banmeyoucoward
[deleted]
tornado28 t1_j5rdebv wrote
Sorry to be skeptical, but I don't think this is really why your one run was better than the other. I think you also changed something else inadvertently.
Rainandblame t1_j5rc318 wrote
Reply to [D] CVPR Reviews are out by banmeyoucoward
2 borderline (3), 1 weak accept (4), and 1 weak reject (2). Any chance with this?
PredictorX1 t1_j5rb8gp wrote
>I was in the understanding that two contiguous linear layers in a NN would be no better than only one linear layer.
This is correct: In terms of the functions they can represent, two consecutive linear layers are algebraically equivalent to one linear layer.
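For instance, a minimal PyTorch sketch of that equivalence (the layer sizes are arbitrary, and biases are dropped to keep the algebra obvious):

```python
import torch

# Two consecutive linear layers with no activation in between.
lin1 = torch.nn.Linear(8, 16, bias=False)
lin2 = torch.nn.Linear(16, 4, bias=False)

# Their composition is the single linear map W = W2 @ W1.
collapsed = torch.nn.Linear(8, 4, bias=False)
with torch.no_grad():
    collapsed.weight.copy_(lin2.weight @ lin1.weight)

x = torch.randn(5, 8)
# Same function, one layer instead of two.
assert torch.allclose(lin2(lin1(x)), collapsed(x), atol=1e-6)
```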
LetterRip t1_j5ratja wrote
They learn faster/more easily. You can collapse them down to a single layer after training.
arg_max t1_j5r8qe6 wrote
Reply to comment by gunshoes in [D] are two linear layers better than one? by alex_lite_21
What do you mean by "function represented by a neural network"? If you are hinting in the direction of universal approximation, then yes, you can approximate any continuous function arbitrarily closely with a single layer, sigmoid activation, and infinite width. But similarly, there exist results showing a similar statement for width-limited, "infinite depth" networks (the required depth is not infinite but depends on the function you want to approximate, and is afaik unbounded over the space of continuous functions). In practice, we are far from either infinite width or depth, so specific configurations can matter.
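As a toy illustration of the width half of that statement, here's a sketch of a single-hidden-layer sigmoid net fitting sin(x); the width, target function, and training setup are arbitrary choices, not anything claimed above:

```python
import torch

torch.manual_seed(0)
x = torch.linspace(-3.14, 3.14, 256).unsqueeze(1)
y = torch.sin(x)

# One hidden layer with sigmoid activation; a large width stands in
# for the theorem's "infinite width".
net = torch.nn.Sequential(
    torch.nn.Linear(1, 512),
    torch.nn.Sigmoid(),
    torch.nn.Linear(512, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()
print(loss.item())  # small: enough width approximates sin well on this interval
```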
sskdkn_pl t1_j5r7u9b wrote
Reply to comment by goldemerald in [D] CVPR Reviews are out by banmeyoucoward
I also got this… Has anyone with this score been accepted in the past?
didroth t1_j5r6r2j wrote
Reply to [D] CVPR Reviews are out by banmeyoucoward
1 weak reject (2), 1 borderline (3), 1 weak accept (4), 1 accept (5).
A split decision!
HateRedditCantQuitit t1_j5r5f69 wrote
Reply to comment by [deleted] in [D] are two linear layers better than one? by alex_lite_21
You can represent any `m x n` matrix with the product of some `m x k` matrix with a `k x n` matrix, so long as k >= min(m, n). If k is less than that, you're basically adding regularization.
Imagine you have some optimal M in Y = M X. Then if A and B are the right shape (big enough in the k dimension), they can represent that M. If they aren't big enough, then they can't learn that M. If the optimal M doesn't actually need a zillion degrees of freedom, then having a small k bakes that restriction into the model, which would be regularization.
Look up linear bottlenecks.
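A quick numerical sketch of that rank argument (the dimensions here are made up):

```python
import torch

m, n, k = 6, 5, 2
A = torch.randn(m, k)
B = torch.randn(k, n)
M = A @ B

# The bottleneck caps the rank: A @ B can never exceed rank k.
print(torch.linalg.matrix_rank(M))  # 2

# With k >= min(m, n), nothing is lost: any target M is reachable,
# e.g. via its SVD, M = (U * S) @ Vh with inner dimension min(m, n).
target = torch.randn(m, n)
U, S, Vh = torch.linalg.svd(target, full_matrices=False)
assert torch.allclose((U * S) @ Vh, target, atol=1e-5)
```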
suflaj t1_j5r4u61 wrote
Dropout is not strictly a linear function (any single mask is linear, but the mask is random), and chances are it will add non-linearity for p > 0, so yeah, that probably made the difference.
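A quick sketch of how the random mask breaks the single-linear-map collapse at train time (the sizes and p are arbitrary):

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(8, 16, bias=False),
    torch.nn.Dropout(p=0.5),
    torch.nn.Linear(16, 4, bias=False),
)
x = torch.randn(5, 8)

net.train()
# A fresh random mask per forward pass: the stack is not one fixed linear map.
print(torch.allclose(net(x), net(x)))  # False (with overwhelming probability)

net.eval()
# Dropout becomes the identity, and the two layers collapse again.
print(torch.allclose(net(x), net(x)))  # True
```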
arsenyinfo t1_j5r4b5l wrote
Reply to comment by SimonJDPrince in [P] New textbook: Understanding Deep Learning by SimonJDPrince
Random ideas off the top of my head:
- an intro to why transfer learning works;
- old but good: https://cs231n.github.io/transfer-learning/;
- the concept of catastrophic forgetting;
- some intuition for answering empirical questions, like which layers should be frozen, how to adapt the LR, etc.
[deleted] t1_j5r3opg wrote
Reply to comment by HateRedditCantQuitit in [D] are two linear layers better than one? by alex_lite_21
[deleted]
[deleted] t1_j5r394s wrote
[deleted]
Gemabo t1_j5rwh5f wrote
Reply to [R] Easiest way to train RNN's in MATLAB or Julia? by NadaBrothers
MATLAB has a Deep Learning Toolbox that makes it easy and efficient to train any type of model, including RNNs. That said, there is a good argument (and a famous paper) that anything you can do with an RNN you can do better with a CNN. Julia has deep learning libraries, but don't expect nearly the level of support and ease of use that MATLAB offers. MATLAB's DL toolbox is underrated.