Recent comments in /f/MachineLearning
ukamal6 t1_j57v7mu wrote
Reply to comment by Mephisto6 in [D] ICLR 2023 results. by East-Beginning9987
Ahh yes, you're right!
velcher t1_j57upp2 wrote
Reply to comment by ukamal6 in [D] ICLR 2023 results. by East-Beginning9987
congrats! spotlight here as well. I guess we'll see each other in Kigali :)
suflaj t1_j57te64 wrote
Reply to comment by Professional_Ball_58 in [D] Not sure if time series or multiple classifications? by spiritualquestions
It's not necessarily better, but it will help you if your data is not really abundant...
For example, if you look at it as regression, the model uses your features and tries to figure out how correlated they are with the grade. Your grade is continuous and monotonic, meaning that if the features contribute in "sane" ways to the grade, it will map easily.
If you consider it a classification problem, then each class basically gets its own degree of freedom. This can make your model overconfident, whereas with the regression solution your model will at least try to fit a continuous monotonic function.
With the regression task, you are explicitly telling your model that grade 2 is better than 1 and worse than 3. With a classification model, because the classes are independent, your model can only learn this ordering implicitly through data. That means if your data is insufficient for the model to learn it, it won't work, whereas with a regression task, even with insufficient data, the model might still interpolate correctly.
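A minimal sketch of that asymmetry (PyTorch; the 4-grade setup and the numbers are made up for illustration): squared error grows with ordinal distance from the true grade, while cross-entropy penalizes every wrong class the same.

```python
import torch
import torch.nn.functional as F

# Regression: the penalty grows with how far off the predicted grade is.
true_grade = torch.tensor([1.0])
for pred in (2.0, 4.0):
    loss = F.mse_loss(torch.tensor([pred]), true_grade)
    print(f"regression predicts {pred}: loss {loss.item():.2f}")
# predicting 4 costs 9x more than predicting 2

# Classification: every wrong class is penalized the same; ordering is lost.
target = torch.tensor([0])            # class index for grade 1
for wrong in (1, 3):                  # class indices for grades 2 and 4
    logits = torch.full((1, 4), -5.0)
    logits[0, wrong] = 5.0            # confidently predict the wrong grade
    loss = F.cross_entropy(logits, target)
    print(f"classification predicts grade {wrong + 1}: loss {loss.item():.2f}")
# both wrong grades get essentially the same loss
```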
instantlybanned t1_j57tcwr wrote
Reply to comment by CupcakeCleric in [D] ICLR 2023 results. by East-Beginning9987
Hope you have more luck next time!
Mephisto6 t1_j57swaz wrote
Reply to comment by ukamal6 in [D] ICLR 2023 results. by East-Beginning9987
I think everyone in either the top 5% or top 25% gets an oral. There are no separate spotlights this year.
Professional_Ball_58 t1_j57s95g wrote
Reply to comment by suflaj in [D] Not sure if time series or multiple classifications? by spiritualquestions
That's interesting. So if we are able to convert the labels into numeric values where higher numbers are better (and vice versa), then regression is better? Can you expand on what bias would help in this case?
suflaj t1_j57rz2o wrote
Reply to comment by Professional_Ball_58 in [D] Not sure if time series or multiple classifications? by spiritualquestions
It would, but with classification you are letting the model overfit the data more easily. You can represent it as a classification problem (classification problems are just regression with a cutoff), but it naturally looks like more of a regression problem.
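As a toy illustration of the "regression with a cutoff" point (NumPy, invented data, not tied to OP's actual features):

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(5, 3))    # 5 toy samples, 3 features
weights = rng.normal(size=3)

scores = features @ weights           # plain regression outputs
labels = (scores > 0.0).astype(int)   # add a cutoff and it becomes a classifier
print(scores.round(2), labels)
```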
aidv t1_j57rrqa wrote
Reply to [D] Did YouTube just add upscaling? by Avelina9X
I believe that this might be fully true. I’ll tell you why:
I don’t know how many times I’ve felt like the voices of speakers in videos sounded AI-generated.
Like, voices of people that I subscribe to.
I was convinced that they were doing some AI fuckery, and this post pretty much confirms it.
It’s probably to save bandwidth and storage on their end, so it makes sense.
JClub OP t1_j57rrn6 wrote
Reply to comment by Ouitos in [R] A simple explanation of Reinforcement Learning from Human Feedback (RLHF) by JClub
Ah yes, you're right. I actually don't know why, but you can check the implementation and ask about it on GitHub.
Professional_Ball_58 t1_j57rolp wrote
Reply to comment by suflaj in [D] Not sure if time series or multiple classifications? by spiritualquestions
But wouldn't this also work for multi-class classification? If the values weren't represented as numbers, and the y values were, say, good, bad, etc., wouldn't this be a classification problem?
aidv t1_j57r35e wrote
Reply to comment by Avelina9X in [D] Did YouTube just add upscaling? by Avelina9X
Then check the JS source code
East-Beginning9987 OP t1_j57r32x wrote
Reply to comment by tfburns in [D] ICLR 2023 results. by East-Beginning9987
Right, not public yet
tfburns t1_j57r0r1 wrote
Reply to comment by East-Beginning9987 in [D] ICLR 2023 results. by East-Beginning9987
I found it by logging in. Maybe they aren't public yet?
tfburns t1_j57qxsc wrote
Reply to comment by spionski in [D] ICLR 2023 results. by East-Beginning9987
Ahh, I had to log in to see the meta review.
East-Beginning9987 OP t1_j57qimn wrote
Reply to comment by tfburns in [D] ICLR 2023 results. by East-Beginning9987
I can see the decision and meta-review comments on OpenReview.
Glum-Bookkeeper1836 t1_j57qhk9 wrote
Reply to comment by Avelina9X in [D] Did YouTube just add upscaling? by Avelina9X
It might be anything in your particular tech stack, not just YouTube. Very interesting though, about time too.
spionski t1_j57qanx wrote
Reply to comment by tfburns in [D] ICLR 2023 results. by East-Beginning9987
Yes! (I meant to comment that decisions are out, not reviews - sorry.)
tfburns t1_j57q3cu wrote
Reply to comment by spionski in [D] ICLR 2023 results. by East-Beginning9987
Can you see your meta review?
tfburns t1_j57q0pu wrote
Reply to [D] ICLR 2023 results. by East-Beginning9987
I'm not seeing area chair meta reviews or decisions on OpenReview yet. Anyone see them on their papers? I got an email only.
CupcakeCleric t1_j57ofaz wrote
Reply to [D] ICLR 2023 results. by East-Beginning9987
Welp, rejected. To be fair, I was hoping for an AC stroke of luck, which by definition is rare. Guess I'll drink something and start working on the ICML submission.
ukamal6 t1_j57miqb wrote
Reply to [D] ICLR 2023 results. by East-Beginning9987
First first-authored paper here, got accepted with a spotlight!
axm92 t1_j57jfrk wrote
Reply to comment by Low-Mood3229 in [R] Is there a way to combine a knowledge graph and other types of data for ML purposes? by Low-Mood3229
>My use case is more classification of datapoints (containing many seemingly unimportant features that may or may not have some relationship to each other. Relationships that are captured in the knowledge graph)
Sounds eerily close to one of our papers: https://aclanthology.org/2021.emnlp-main.508.pdf
To solve commonsense reasoning questions, we first generate a graph that captures relationships between entities in the question (if you're thinking "chain-of-thought" prompting: yes, the idea is similar). Then, we jointly train a mixture-of-experts model with a classifier (RoBERTa) to do three things: i) learn to discard useless nodes, ii) pool node representations from useful nodes into a single graph embedding, and iii) classify using question + graph embeddings.
This video may give a good TLDR too.
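Not the actual code from the paper, just a rough sketch of the discard/pool/classify step; the mixture-of-experts part is simplified to a single learned gate here, and all names and dimensions are invented:

```python
import torch
import torch.nn as nn

class GatedGraphClassifier(nn.Module):
    """Toy version: score each node, softly discard useless ones, pool the
    rest into one graph embedding, classify from question + graph embeddings."""
    def __init__(self, dim=768, num_classes=5):
        super().__init__()
        self.node_gate = nn.Linear(dim, 1)                # learns which nodes to keep
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, question_emb, node_embs):
        # question_emb: (dim,), e.g. a RoBERTa [CLS] vector; node_embs: (num_nodes, dim)
        gates = torch.sigmoid(self.node_gate(node_embs))      # (num_nodes, 1)
        graph_emb = (gates * node_embs).sum(0) / gates.sum()  # gated weighted pooling
        return self.classifier(torch.cat([question_emb, graph_emb]))

model = GatedGraphClassifier()
logits = model(torch.randn(768), torch.randn(12, 768))  # 12 dummy graph nodes
```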
spionski t1_j57hivt wrote
Reply to [D] ICLR 2023 results. by East-Beginning9987
Decisions are out!
suflaj t1_j57gnky wrote
Reply to comment by spiritualquestions in [D] Not sure if time series or multiple classifications? by spiritualquestions
Well, this is a regression task, not classification. You could classify 1, 2, 3, and 4 for each output, but it seems like they are continuous. You can always just round and clamp your result, e.g. with y = max(1, min(4, floor(x + 0.5))). With classification you could argmax over classes, but then you'll overfit more easily. You would probably benefit from the bias coming from the regression task itself, which tells the algorithm that 2 is close to 3 and 1, but far away from 4.
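A quick Python version of that rounding-and-clamping step, for concreteness (the example inputs are made up):

```python
import math

def to_grade(x: float) -> int:
    """Round a raw regression output to the nearest grade in {1, 2, 3, 4}."""
    return max(1, min(4, math.floor(x + 0.5)))

print([to_grade(v) for v in (0.2, 1.6, 2.5, 9.0)])  # [1, 2, 3, 4]
```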
ukamal6 t1_j57v9um wrote
Reply to comment by velcher in [D] ICLR 2023 results. by East-Beginning9987
Congrats to you as well!! Hope to see you there! :)