Recent comments in /f/MachineLearning
IDefendWaffles t1_j96kvsk wrote
Reply to [D] Is Google a language transformer like ChatGPT except without the G (Generative) part? by Lets_Gooo_123
So I am trying to learn more about electricity and the more I read the less impressed I am about this lightbulb. To put it as shortly as possible it is a glass thing that shines.
So I'm trying to sort of prove this to myself.
Furthermore, the only thing that is going to make lightbulb the thing everybody says it will be (replace everybody's jobs) is MORE wires, BUT to have more wires you need more power, and lightbulb already uses quantum computing as far as I know, and QC progress is pretty much stalling.
Flag_Red t1_j96jzng wrote
Reply to comment by thecodethinker in [R] neural cloth simulation by LegendOfHiddnTempl
> I bet stuff like this is gonna be the biggest real life use case for neural networks.
Huh? What about image/face/character/anything recognition, speech-to-text, text-to-speech, translation, natural language understanding, code autocomplete, etc?
Battleagainstentropy t1_j96iltm wrote
Reply to comment by TransitoryPhilosophy in [D] Lack of influence in modern AI by I_like_sources
Yes this reads like an undergrad or someone entirely outside the field who thinks that no one has thought of these concepts before. They are trying a little Cunningham’s Law by stating that nothing is being done in these areas and hoping that someone provides the correct information rather than simply ask the question of what is being done to address these issues.
violet_zamboni t1_j96hcou wrote
Reply to comment by TransitoryPhilosophy in [D] Lack of influence in modern AI by I_like_sources
If you don’t like the trained matrix, OP, you can go train your own
blackhole077 t1_j96g3wh wrote
Reply to comment by Old_Scallion2173 in [D] bounding box or instance segmentation by Old_Scallion2173
> I do wish to ask tho, do you think I should instead focus on fine tuning my model and getting more dataset to improve the model? Maybe I'm getting too optimistic about instance segmentation.
I'm glad I've been of assistance. As for your follow-up question, it generally never hurts to have more data to work with and, of course, fine-tuning your existing models (if you have any at this time) can help as well.
I would say, though, that you should first determine which metrics you want to see from your model. As you mentioned earlier, you want to ensure that false negatives are as low as possible.
Naturally this translates to maximizing recall, which generally comes at the expense of precision. Thus, the question could be reframed as: "At X% recall how precise will the model be?" and "What parameters to the model can I tune to influence the precision at that recall?"
However, how false positives (FP) and false negatives (FN) and, by proxy, Precision and Recall, are defined is not as straightforward in object detection as it is in image classification.
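To make that concrete, here's a toy sketch (my own, not from the linked paper) of the usual detection convention: a detection counts as a true positive only if it matches an as-yet-unclaimed ground-truth box with IoU above a threshold, and "precision at X% recall" falls out of sweeping the confidence threshold. Single image, single class, for illustration only.

```python
# Toy sketch: TP/FP for object detection via IoU matching, then
# "what precision do I get at a target recall?" All boxes are (x1, y1, x2, y2).

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def precision_at_recall(dets, gts, target_recall, iou_thresh=0.5):
    """Best precision among operating points with recall >= target.

    dets: list of (score, box); gts: list of boxes (one image, one class).
    A detection is a TP if it matches an unclaimed GT box with IoU >= thresh;
    otherwise it's an FP. Unmatched GT boxes are FNs.
    """
    dets = sorted(dets, key=lambda d: -d[0])  # highest confidence first
    matched = [False] * len(gts)
    labels = []                               # 1 = TP, 0 = FP, in score order
    for score, box in dets:
        best, best_i = 0.0, -1
        for i, g in enumerate(gts):
            if not matched[i] and iou(box, g) > best:
                best, best_i = iou(box, g), i
        if best >= iou_thresh:
            matched[best_i] = True
            labels.append(1)
        else:
            labels.append(0)
    best_prec, tp, fp = 0.0, 0, 0
    for lab in labels:  # threshold after each detection in score order
        tp += lab
        fp += 1 - lab
        if tp / len(gts) >= target_recall:
            best_prec = max(best_prec, tp / (tp + fp))
    return best_prec

# Example: two GT boxes, three detections (one a clear false positive).
gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
dets = [(0.9, (0, 0, 10, 10)), (0.8, (50, 50, 60, 60)), (0.7, (21, 21, 30, 30))]
print(precision_at_recall(dets, gts, target_recall=1.0))  # 2/3: all GTs found, one FP
```

Real evaluation (e.g. COCO-style) averages over IoU thresholds, classes, and images, which is exactly where the tooling below helps.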
Since I'm currently dealing with this problem, albeit in a different area altogether, here's a paper that I found useful for getting interpretable metrics:
https://arxiv.org/abs/2008.08115
This paper and its Github repository basically work on breaking down what exactly your model struggles with, as well as showing the FP/FN rates given your dataset. It might be a little unwieldy since it's a tool that has been somewhat neglected by its creator, but it's certainly worth looking into.
Hope this helps.
PHEEEEELLLLLEEEEP t1_j96ezic wrote
You could try existing cell segmentation algorithms like StarDist or Cellpose.
thecodethinker t1_j96dsn8 wrote
Reply to [R] neural cloth simulation by LegendOfHiddnTempl
I bet stuff like this is gonna be the biggest real life use case for neural networks.
Faster, more portable physics simulations.
We can generate effectively infinite training data using naive physics algorithms, then train a model to approximate them much faster.
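A minimal sketch of that idea (my own toy example, nothing to do with the paper's method): a naive simulator acts as an unlimited labeled-data generator, and a cheap surrogate model is fit to imitate it.

```python
# Toy example: use a "naive" physics simulator (Euler-integrated projectile)
# as an infinite data generator, then fit a fast surrogate by least squares.
import random

G = 9.81  # gravitational acceleration, m/s^2

def simulate_range(v, dt=1e-3):
    """Naive Euler integration of a 45-degree launch; returns distance flown."""
    vx = vy = v / 2 ** 0.5
    x = y = 0.0
    while y >= 0.0:
        x += vx * dt
        y += vy * dt
        vy -= G * dt
    return x

def fit_surrogate(n=200):
    """Fit range ~ w * v^2 on simulated data (analytic answer: w = 1/G)."""
    random.seed(0)
    xs, ys = [], []
    for _ in range(n):
        v = random.uniform(5.0, 50.0)
        xs.append(v * v)               # physics-inspired feature
        ys.append(simulate_range(v))   # label from the slow simulator
    return sum(a * b for a, b in zip(xs, ys)) / sum(a * a for a in xs)

w = fit_surrogate()
print(w * 30 * 30)  # surrogate's predicted range at v = 30: no integration loop
```

The surrogate here is a one-parameter regression; the paper's point is the same pipeline with a neural network standing in for a far more expensive cloth solver.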
photosandphotons t1_j96doro wrote
Reply to comment by I_like_sources in [D] Lack of influence in modern AI by I_like_sources
“Critical applications” will be paying for the type of support you expect; you don’t need to worry about them. Taking feedback, maintaining FAQs, and documenting code updates are completely un-novel ideas that exist in industry SaaS products today, and they will exist for ML models. They just require you to, you know, actually pay for the developers’ time.
photosandphotons t1_j96ckml wrote
Reply to comment by limpbizkit4prez in [D] Lack of influence in modern AI by I_like_sources
They just want to complain about free products not providing the level of support they expect while probably not contributing much, if anything, themselves.
Tawa-online t1_j96bpcn wrote
Reply to comment by I_like_sources in [D] Lack of influence in modern AI by I_like_sources
I agree with you somewhat, but this isn't Windows 11. We're working on some of the more experimental tech rather than stable tech that has been out for years. All of the things you are asking for take time to implement, and without predefined systems for creating these interactions it's pretty difficult to do.
TransitoryPhilosophy t1_j969v46 wrote
Reply to [D] Lack of influence in modern AI by I_like_sources
These examples are just wrong, OP. For SD (example 1) there are multiple avenues for fine-tuning and updating the model. I guess I think your base premise is incorrect.
damc4 t1_j96888m wrote
By the way, I created a tool "CodeAssist" ( https://codeassist.tech ) that is based on a similar idea. It's a chatbot that can execute actions in the IDE (most importantly - write/read the code in your editor).
zy415 t1_j966hqu wrote
Reply to comment by MustachedLobster in [R] difference between UAI and AISTATS ? by ArmandDerech
Comparing ICLR to AISTATS/UAI is like comparing apples to oranges.
ICLR focuses on deep learning with more architecture work, while AISTATS/UAI focus more on statistical machine learning (e.g. kernel methods, Bayesian statistics, causal inference, optimization) with more theoretical results. I would argue that NeurIPS/ICML have a combination of both; NeurIPS seems to have more application and architecture papers in deep learning nowadays.
Thanks to the recent popularity of deep learning, ICLR quickly rose to join the "Big 3" machine learning conferences. This is simply because deep learning has become a major part of machine learning nowadays.
Sir_Rade t1_j964nxx wrote
Reply to [R] neural cloth simulation by LegendOfHiddnTempl
Cool paper, thanks for sharing!
limpbizkit4prez t1_j9644lp wrote
Reply to comment by [deleted] in [D] Lack of influence in modern AI by I_like_sources
What is your deal? Why are you being such a dick to everyone? It seems like you just want to yell at people, not have a discussion.
__lawless t1_j9617qd wrote
Reply to [D] Is Google a language transformer like ChatGPT except without the G (Generative) part? by Lets_Gooo_123
You are not making any sense. "Language transformer" is not a thing. Google is a search engine; ChatGPT is an LLM. There is no quantum computing involved in ChatGPT; it has nothing to do with it at all. I'm gonna leave you with a quote from Billy Madison.
what you've just said is one of the most insanely idiotic things I have ever heard. At no point in your rambling, incoherent response were you even close to anything that could be considered a rational thought. Everyone in this room is now dumber for having listened to it. I award you no points, and may God have mercy on your soul.
derek_ml t1_j960t8q wrote
Reply to comment by I_like_sources in [D] Lack of influence in modern AI by I_like_sources
> You seem to argue that you should let AI do it's thing, what it's good at, without interfering
Not necessarily, it's just that we have seen good results by letting compute dominate over interference. If other approaches worked better then we would be doing that.
Maybe in a parallel universe they valued creative approaches over quick results, human interference was valued more, and they eventually got far better results. But not in this one.
[deleted] t1_j960ayb wrote
Reply to comment by [deleted] in [D] Is Google a language transformer like ChatGPT except without the G (Generative) part? by Lets_Gooo_123
[deleted]
Main_Mathematician77 t1_j95zx77 wrote
Reply to [D] Is Google a language transformer like ChatGPT except without the G (Generative) part? by Lets_Gooo_123
Not really, it's more of a colony of LLMs and indexes.
[deleted] t1_j95zhr1 wrote
[deleted] t1_j95z7vq wrote
Reply to comment by milleniumsentry in [D] Lack of influence in modern AI by I_like_sources
[removed]
[deleted] t1_j95xti7 wrote
Reply to comment by berryaroberry in [D] Quality of posts in this sub going down by MurlocXYZ
[deleted]
PhoibusApollo t1_j95wxu7 wrote
Check this paper out: Factorized VAEs for Modeling Audience Reactions
dancingnightly t1_j95wa9s wrote
Reply to comment by RideOrDieRemember in [D] Things you wish you knew before you started training on the cloud? by I_will_delete_myself
Try multiple regions and zones. There are peaks and troughs in availability; most notably, the weekend is a good time to get spot instances. There are some sites that help you do this, and scripts online that use the AWS CLI to check for you.
currentscurrents t1_j96n0v8 wrote
Reply to comment by stringerbell50 in [D] what are some open problems in computer vision currently? by Fabulous-Let-822
Isn't that doing pretty well these days? CNNs can not only segment, but even semantically label every pixel in an image.
On a practical level, I have used Photoshop's new object select and love it. It does a better job at masking than I do.