Recent comments in /f/MachineLearning

kweu t1_ja8ur2q wrote

I work with data of a similar size, and I use random crops during training and a sliding window for prediction. For example, you could train to segment 128x128 crops of the input images, then stitch the crop predictions together to segment the image at full resolution, probably keeping your 200 classes. But tbh 200 sounds a bit excessive anyway

5
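The crop-and-stitch approach above can be sketched roughly as follows. This is a minimal NumPy illustration, not the commenter's actual pipeline: `predict_fn`, the 128 crop size, and the 64-pixel stride are all assumptions for the example, and a real version would also handle image borders that the stride doesn't cover exactly.

```python
import numpy as np

def random_crop(image, mask, size=128):
    """Sample one random size x size training crop from an (H, W, ...) image and its mask."""
    h, w = image.shape[:2]
    y = np.random.randint(0, h - size + 1)
    x = np.random.randint(0, w - size + 1)
    return image[y:y + size, x:x + size], mask[y:y + size, x:x + size]

def sliding_window_predict(image, predict_fn, size=128, stride=64, n_classes=200):
    """Run predict_fn on overlapping tiles and average the per-pixel class probabilities.

    predict_fn is assumed to map a (size, size, C) crop to (size, size, n_classes) probabilities.
    """
    h, w = image.shape[:2]
    probs = np.zeros((h, w, n_classes))
    counts = np.zeros((h, w, 1))
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            probs[y:y + size, x:x + size] += predict_fn(image[y:y + size, x:x + size])
            counts[y:y + size, x:x + size] += 1
    # Average overlapping predictions; max(counts, 1) guards pixels no tile covered.
    return probs / np.maximum(counts, 1)
```

Overlapping the tiles (stride < size) and averaging smooths out the seam artifacts you'd get from disjoint tiles.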

synonymous1964 t1_ja8opaq wrote

Now I just need to hope that my Canadian visa doesn't actually take 209 days to process...

Honestly, needing a visa to attend conferences is a big disadvantage: networking at conferences leads to future employment and research opportunities, and can have a huge impact on the career of early-stage researchers.

5

coconautico OP t1_ja8dbew wrote

No, I don't, because even if ChatGPT could answer my question correctly, that doesn't mean that another assistant could.

Therefore, when I come up with a question that, from my point of view, could be challenging for a virtual assistant to answer, I end up typing it into OpenAssistant (again, just my question), regardless of whether I've already searched Google/Reddit/StackOverflow/ChatGPT/... for the answer.

2

cthorrez t1_ja8d6oc wrote

Not exactly. In batch RL, the data they train on are real (state, action, next state, reward) tuples collected by real agents interacting with real environments, and the policy is improved offline from that logged data.

In RLHF there is actually no environment at all, and the policy is just standard LLM decoding.

1
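The distinction in the comment above can be made concrete by looking at the shape of the training data in each setting. This is just an illustrative sketch with hypothetical field names, not anyone's actual data format: batch RL consumes logged environment transitions, while RLHF (in the reward-modeling step) consumes human preference comparisons with no environment transition in sight.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Transition:
    """Batch RL: one logged tuple from a real agent acting in a real environment."""
    state: List[float]
    action: int
    reward: float
    next_state: List[float]

@dataclass
class PreferencePair:
    """RLHF reward modeling: no environment; a human ranks two sampled completions."""
    prompt: str
    chosen: str
    rejected: str

# Example instances of each data shape (contents are made up for illustration).
logged = Transition(state=[0.1, 0.2], action=3, reward=1.0, next_state=[0.0, 0.5])
ranked = PreferencePair(prompt="Explain batch RL.",
                        chosen="Offline learning from logged transitions...",
                        rejected="It's the same as online RL.")
```

The point is that a `Transition` carries environment dynamics (state to next state), while a `PreferencePair` carries only model outputs plus a human judgment.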