Recent comments in /f/MachineLearning

Perfect_Finance7314 t1_j5t45nz wrote

Hello, I have been generating images with StyleGAN2-ada-pytorch in Google Colab, and my generated images are in Google Drive. I am struggling to figure out which seed produced a given image. Can someone please help me figure out how to find the seed number for a specific image?

Thanks a lot!
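Not OP's setup exactly, but if the images were produced with the stock `generate.py` from the stylegan2-ada-pytorch repo, the seed is normally embedded in the output filename (e.g. `seed0042.png`). A minimal sketch for recovering it, assuming that naming convention (the helper name is made up):

```python
import re

def seed_from_filename(name):
    """Extract the seed from an output name like 'seed0042.png'.

    Assumes the stock stylegan2-ada-pytorch generate.py naming
    convention; returns None if the name doesn't match.
    """
    m = re.search(r"seed(\d+)", name)
    return int(m.group(1)) if m else None

print(seed_from_filename("seed0042.png"))  # → 42
```

If the files were renamed, the fallback is brute force: regenerate images for candidate seeds with the same network pickle and truncation settings and compare pixels against the saved file.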

1

synonymous1964 t1_j5sz1gg wrote

At ICCV 2021, we got a paper accepted with initial reviews of borderline, borderline, weak reject.

If the reviewers' comments are addressable and you address them in a good rebuttal, there is a chance of acceptance with a bit of luck.

6

Jack7heRapper t1_j5syos0 wrote

Reply to comment by juanigp in [D] CVPR Reviews are out by banmeyoucoward

It's my first submission too, and I'm an undergrad lol.
I've heard from my seniors and professors that changing a reject (1) to a borderline (3) or weak accept (4) is difficult, and that you need at least all borderlines to have a shot at acceptance. They still told me to write the rebuttal anyway for the experience.
Moreover, that reject came with a confidence level of 4. The problem is that the reviewer asked for experimental results on an additional dataset, which I didn't work on, so I'm not really sure how I can improve their score.
The other reviewers weren't too harsh, and I could probably have convinced them, but I don't think I can convince reviewer #3 without quantitative results to back up my claims.

1

juanigp t1_j5swklo wrote

This was my first submission. I had a worse score than you but will write a rebuttal either way (although I doubt I can convince everyone). Why wouldn't you? I'm not judging, just asking out of curiosity since I don't know the common practice.

3

serge_cell t1_j5sj24c wrote

Hessian-free second-order optimization is unlikely to work; there are reasons everyone uses gradient descent. The only working second-order method seems to be K-FAC (disclaimer: I have no first-hand experience), but since you'll be using Julia you'll have to implement it from scratch, and it's highly non-trivial (as you'd expect from a method that works where others have failed).
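As a toy illustration of why second-order information is attractive (this is plain Newton on a scalar, nothing like K-FAC, which approximates curvature for full networks): on a quadratic, one Newton step lands exactly at the minimum, while gradient descent only approaches it geometrically. The function and step size below are made up for illustration:

```python
# Minimize f(x) = (x - 3)^2, so f'(x) = 2(x - 3) and f''(x) = 2.
def grad(x):
    return 2.0 * (x - 3.0)

def hess(x):
    return 2.0

# Gradient descent: 10 steps with learning rate 0.1, starting at 0.
x_gd = 0.0
for _ in range(10):
    x_gd -= 0.1 * grad(x_gd)

# Newton: a single step x - f'(x)/f''(x), exact for quadratics.
x_newton = 0.0 - grad(0.0) / hess(0.0)

print(x_gd, x_newton)  # GD is still approaching 3; Newton is at 3 exactly
```

The catch, of course, is that for a network with millions of parameters the Hessian is intractable to form or invert, which is why structured approximations like K-FAC are the only second-order methods that get traction.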

3

FinancialElephant t1_j5s5y72 wrote

Flux.jl is the most popular deep learning library in Julia. I've played around with it a little; it's quite nice and easy to use. It's amazing how much more elegant the implementations become in Julia compared to Python.

There is also the lesser-known Lux.jl package, which is essentially an explicitly parameterized Flux (though less mature than Flux).

6

toftinosantolama t1_j5rxq32 wrote

Reply to comment by entarko in [D] CVPR Reviews are out by banmeyoucoward

I don't doubt your experience; mine is totally different. I've never flipped a reviewer, even though I've always been polite and the reviewer was clearly and objectively wrong. They just don't care. They have too much power. The AC just goes with the flow. This is not peer review; it's a joke.

In my experience this is not the case at ML conferences; it's a CV thing...

1

entarko t1_j5rx5mc wrote

The "entitlement" of reviewers is quite often (I would say 50/50) a result of the authors' response. I have reviewed several papers where the authors responded to fair comments by dismissing the reviewers and trying to make them feel dumb. That invariably ends in reviewers not changing, or even lowering, their ratings.

Also, PhD students can be good reviewers.

2