Recent comments in /f/MachineLearning

IDefendWaffles t1_j96kvsk wrote

So I am trying to learn more about electricity, and the more I read, the less impressed I am by this lightbulb. To put it as briefly as possible, it is a glass thing that shines.

So I'm trying to sort of prove this to myself.

Furthermore, the only thing that is going to make the lightbulb the thing that everybody says it will be (replace everybody's jobs) is MORE wires, BUT to have more wires you need more power, and the lightbulb already uses quantum computing as far as I know, and QC progress is pretty much stalling.

20

Flag_Red t1_j96jzng wrote

> I bet stuff like this is gonna be the biggest real life use case for neural networks.

Huh? What about image/face/character/anything recognition, speech-to-text, text-to-speech, translation, natural language understanding, code autocomplete, etc?

52

Battleagainstentropy t1_j96iltm wrote

Yes, this reads like an undergrad or someone entirely outside the field who thinks that no one has thought of these concepts before. They are trying a little Cunningham's Law: stating that nothing is being done in these areas and hoping that someone provides the correct information, rather than simply asking what is being done to address these issues.

12

blackhole077 t1_j96g3wh wrote

> I do wish to ask tho, do you think I should instead focus on fine tuning my model and getting more dataset to improve the model? Maybe I'm getting too optimistic about instance segmentation.

I'm glad I've been of assistance. As for your follow-up question, it generally never hurts to have more data to work with and, of course, fine-tuning your existing models (if you have any at this time) can help as well.

I would say, though, that you should first determine what metrics you want to see from your model. As you mentioned earlier, you want to ensure that false negatives are as low as possible.

Naturally, this translates to maximizing recall, which generally comes at the expense of precision. Thus, the question could be reframed as: "At X% recall, how precise will the model be?" and "Which model parameters can I tune to influence the precision at that recall?"

However, how false positives (FP) and false negatives (FN) and, by proxy, Precision and Recall, are defined is not as straightforward in object detection as it is in image classification.
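Setting that matching caveat aside for a moment, here is a minimal sketch of the threshold sweep I'm describing. It assumes the detections have already been matched against ground truth, so each one is just a confidence score plus a binary matched/unmatched label (the part that object detection makes messy); the function name and toy numbers are mine, not from any particular library:

```python
import numpy as np

def precision_at_recall(scores, labels, target_recall):
    """Sweep confidence thresholds (descending) and report the precision
    achieved at the first threshold that meets the target recall."""
    order = np.argsort(-scores)     # rank detections by confidence, highest first
    labels = labels[order]
    tp = np.cumsum(labels)          # true positives kept at each threshold
    fp = np.cumsum(1 - labels)      # false positives kept at each threshold
    recall = tp / labels.sum()
    precision = tp / (tp + fp)
    ok = recall >= target_recall    # thresholds that satisfy the recall target
    return precision[ok][0] if ok.any() else 0.0

# toy detections: confidence scores and whether each matched a ground-truth box
scores = np.array([0.95, 0.9, 0.8, 0.7, 0.6, 0.5])
labels = np.array([1,    1,   0,   1,   0,   1])
print(precision_at_recall(scores, labels, target_recall=0.75))  # 0.75
```

Lowering the confidence threshold is the usual knob here: it pulls recall up and precision down, and a sweep like this tells you what that trade costs on your dataset.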

Since I'm currently dealing with this problem, albeit in a different area altogether, here's a paper that I found useful for getting interpretable metrics:

https://arxiv.org/abs/2008.08115

This paper and its GitHub repository basically work on breaking down what exactly your model struggles with, as well as showing the FP/FN rates given your dataset. It might be a little unwieldy, since it's a tool that has been somewhat neglected by its creator, but it's certainly worth looking into.

Hope this helps.

2

thecodethinker t1_j96dsn8 wrote

I bet stuff like this is gonna be the biggest real life use case for neural networks.

Faster, more portable physics simulations.

We can get infinite training data using naive physics algorithms, then train a model to approximate them much faster.
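As a hypothetical sketch of that idea: treat a cheap "exact" simulator as an unlimited data generator, then fit a fast surrogate to it. Everything here (the projectile-range simulator, the polynomial standing in for a neural net) is illustrative, not from any real project:

```python
import numpy as np

def simulate_range(v, angle_deg, g=9.81):
    """Naive physics 'simulator': range of a projectile launched from the ground."""
    theta = np.radians(angle_deg)
    return v**2 * np.sin(2 * theta) / g

# 1) generate as much training data as we like from the simulator
rng = np.random.default_rng(0)
angles = rng.uniform(5, 85, size=10_000)
targets = simulate_range(v=20.0, angle_deg=angles)

# 2) fit a small surrogate (a polynomial here; a neural net in practice)
coeffs = np.polyfit(angles, targets, deg=6)
surrogate = np.poly1d(coeffs)

# 3) the surrogate now approximates the simulator at a fraction of the cost
print(abs(surrogate(45.0) - simulate_range(20.0, 45.0)))  # small residual error
```

The same loop scales up: replace the one-line formula with an expensive solver and the polynomial with a network, and the trained model becomes the fast, portable stand-in.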

55

photosandphotons t1_j96doro wrote

“Critical applications” will be paying for the type of support you expect, so you don't need to worry about them. Taking feedback, FAQs, and documenting code updates are completely un-novel ideas that exist in industry SaaS products today, and they will exist for ML models. They just require you to, you know, actually pay for the developers' time.

5

Tawa-online t1_j96bpcn wrote

I agree with you somewhat, but this isn't Windows 11. We're working on some of the more experimental tech rather than stable tech that has been out for years. All of the things you are asking for take time to implement, and without predefined systems for how to create these interactions, it's pretty difficult to do.

6

zy415 t1_j966hqu wrote

Comparing ICLR to AISTATS/UAI is like comparing apples to oranges.

ICLR focuses on deep learning, with more architecture work, while AISTATS/UAI focus more on statistical machine learning (e.g. kernel methods, Bayesian statistics, causal inference, optimization), with more theoretical results. I would argue that NeurIPS/ICML have a combination of both. NeurIPS seems to have more application papers in deep learning and architectures nowadays.

Thanks to the recent popularity of deep learning, ICLR has quickly risen to join the "Big 3" machine learning conferences. This is simply because deep learning has become a major part of machine learning nowadays.

6

__lawless t1_j9617qd wrote

You are not making any sense. "Language transformer" is not a thing. Google is a search engine; ChatGPT is an LLM. There is no quantum computing involved in ChatGPT. It has nothing to do with it at all. I'm gonna leave you with a quote from Billy Madison.

> What you've just said is one of the most insanely idiotic things I have ever heard. At no point in your rambling, incoherent response were you even close to anything that could be considered a rational thought. Everyone in this room is now dumber for having listened to it. I award you no points, and may God have mercy on your soul.

43

derek_ml t1_j960t8q wrote

> You seem to argue that you should let AI do it's thing, what it's good at, without interfering

Not necessarily. It's just that we have seen good results by letting compute dominate over human interference. If other approaches worked better, we would be doing that.

Maybe in a parallel universe, creative approaches were valued over quick results, human interference counted for more, and they eventually got far better results. But not in this one.

1