Recent comments in /f/MachineLearning
CeFurkan OP t1_j8mqni7 wrote
Reply to [D] Best AI tool, model or service to improve audio speech quality - not noise removal by CeFurkan
5 min speech example : http://sndup.net/gxrq
noise removed with NVIDIA broadcast
hogsta1 t1_j8mp2am wrote
Reply to comment by specializedboy in [D] Simple Questions Thread by AutoModerator
Hi, I found this reading group on GitHub (https://github.com/fulifeng/Causal_Reading_Group) that has loads of interesting papers in the area. I would also recommend the book 'The Book of Why' by Judea Pearl :)
Smooth-Stick-5751 OP t1_j8mj4y6 wrote
Reply to comment by canbooo in Reinforcement Learning based algorithms specifically for NLP[D][P] by Smooth-Stick-5751
Thank you very much for your valuable opinion, I will look into your suggestions.
canbooo t1_j8mfn6z wrote
Since you have been waiting for 6 hours without any response, let me share my 5c. You were probably inspired by ChatGPT and the success of HRL, so why not start there: https://openreview.net/forum?id=20-xDadEYeU
But the idea itself is not novel, only its application to NLP; it has already been applied to other domains like games and autonomous driving. They use PPO, which is to me the most robust on-policy algorithm, but any other on-policy algorithm could have been used instead, and something like SAC could improve sample efficiency at the risk of convergence problems. You could also try to be more general and use off-policy algorithms independent of a specific language model. This would allow reusing the same experience/value model to fine-tune other LMs, but it might require much more data to reach similar performance. In any case, the application of RL to NLP (except for language-based games) is quite new, and many questions remain open.
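For concreteness, the clipped surrogate objective that makes PPO robust can be sketched in a few lines. This is a dependency-free, plain-Python sketch of the standard formulation, not code from any particular RL library:

```python
import math

def ppo_clipped_loss(new_logprobs, old_logprobs, advantages, clip_eps=0.2):
    """Clipped surrogate objective from the PPO paper (Schulman et al., 2017).

    Each argument is a list of per-action values; in practice these would be
    tensors produced by the policy network.
    """
    losses = []
    for lp_new, lp_old, adv in zip(new_logprobs, old_logprobs, advantages):
        ratio = math.exp(lp_new - lp_old)  # pi_new(a|s) / pi_old(a|s)
        unclipped = ratio * adv
        # Clip the ratio so a single update can't move the policy too far.
        clipped = max(min(ratio, 1 + clip_eps), 1 - clip_eps) * adv
        # We maximize the surrogate, i.e. minimize its negative.
        losses.append(-min(unclipped, clipped))
    return sum(losses) / len(losses)
```

The clipping is what gives PPO its robustness: when the new policy's probability ratio drifts outside `[1 - clip_eps, 1 + clip_eps]`, the objective stops rewarding further movement in that direction.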
soviet69er t1_j8mbzpm wrote
Reply to [D] Simple Questions Thread by AutoModerator
Hello! I'm currently a 2nd-year data science student aiming for machine learning engineering as a career, and I'm wondering what skills I should learn on my own besides Python ML frameworks and data engineering frameworks such as PySpark. I was considering learning Java, but I'm not sure whether I'd be better off investing my time in something else.
Oripy t1_j8m8ejv wrote
Reply to [D] Simple Questions Thread by AutoModerator
I have a question related to the Actor Critic method described in the keras example here: https://keras.io/examples/rl/actor_critic_cartpole/
I looked at the code for the training part, and I think I understand what all the lines are supposed to do and why they are there. However, I don't think I understand what role the critic plays in improving the agent. To me the critic is just a value that predicts the future reward, but I don't see it being fed back into the system so the agent can select better actions and improve its reward.
Do I have a good understanding? Is the critic just a "bonus" output? Are the two unrelated, and could the exact same performance be achieved by removing the critic output altogether? Or is the critic output used to improve learning in some way I fail to see?
Thank you.
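For anyone reading along: in the standard actor-critic pattern that the Keras example follows, the critic's prediction is fed back, as the baseline that turns returns into advantages for the actor's loss. A minimal sketch of that pattern (plain Python for illustration, not the Keras code itself):

```python
def actor_critic_losses(log_probs, values, returns):
    """Sketch of the per-episode loss computation in an actor-critic setup.

    log_probs: log pi(a|s) for each action actually taken
    values:    the critic's V(s) prediction at each step
    returns:   discounted returns computed from the episode's rewards
    """
    actor_loss, critic_loss = 0.0, 0.0
    for log_p, v, ret in zip(log_probs, values, returns):
        advantage = ret - v                # the critic's output is used here
        actor_loss += -log_p * advantage   # reinforce actions better than expected
        critic_loss += (ret - v) ** 2      # regress the critic toward actual returns
    return actor_loss, critic_loss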
bushrod t1_j8m68xt wrote
This technique is similar to data augmentation, but with a specific focus on important samples. There may not be a specific name for this technique, but it could be considered a form of "strategic oversampling" or "strategic repetition" of important samples. By repeating these important samples in every batch, you are increasing their impact on the training process and potentially helping the neural network to converge to a better solution that takes these samples into account.
It's worth noting that this technique may not always be appropriate or necessary, and it could potentially lead to overfitting if not used carefully. However, in cases where there are a small number of important samples that have a disproportionate impact on the end application, repeating them in every batch can be a useful approach to ensure that the neural network learns to incorporate their information effectively.
:-P
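A minimal sketch of what such "strategic repetition" could look like as a batch generator. Function and argument names here are made up for illustration; `dataset` and `important` are just lists of samples:

```python
import random

def batches_with_anchors(dataset, important, batch_size, n_regular):
    """Yield batches that always contain every 'important' sample,
    padded out with randomly drawn regular samples."""
    assert len(important) + n_regular == batch_size
    while True:
        regular = random.sample(dataset, n_regular)
        batch = important + regular
        # Shuffle so the repeated samples don't sit in a fixed position.
        random.shuffle(batch)
        yield batch
```

As the comment above notes, the repeated samples get a much larger effective weight in the gradient, so this is worth monitoring for overfitting on exactly those samples.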
cantfindaname2take t1_j8m5sry wrote
Mostly implementing change point detection algorithms that in some way utilize ordinal pattern analysis.
[deleted] t1_j8m5i9o wrote
Reply to comment by KarmaQueenOfficial in [D] Simple Questions Thread by AutoModerator
[removed]
VP4770 t1_j8m2vwj wrote
Reply to comment by pommedeterresautee in [P] Get 2x Faster Transcriptions with OpenAI Whisper Large on Kernl by pommedeterresautee
Interesting
pommedeterresautee OP t1_j8m2odq wrote
Reply to comment by VP4770 in [P] Get 2x Faster Transcriptions with OpenAI Whisper Large on Kernl by pommedeterresautee
Thanks, after some searching I found that it's a French practice to use mn (instead of min), and it tends to be replaced by min, even in France.
For instance: https://www.larousse.fr/dictionnaires/francais/minute/51680
lwl t1_j8m2h7b wrote
specializedboy t1_j8lwdgu wrote
Reply to [D] Simple Questions Thread by AutoModerator
Does anyone know of any study groups or resources targeted toward learning causal inference in machine learning? I have recently started learning causal inference. Please ping me if anyone is interested in forming a study group or something similar.
SnooStories4137 t1_j8lrsug wrote
Reply to comment by belacscole in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
Some reinforcement-learning-like algorithm seems like a really interesting next step here. Observation = the task (like QA or mask filling); actions = API calls whose output updates the observation via concatenation, as in the paper; the environment is the APIs, a database, a Python installation, etc.; the state is the network weights; and the reward is the change in the loss function before and after the update to the observation.
I feel like even if the only API is the model generating text itself to update the observation ('to help itself think'), intuitively it could help for some things. Rather than trying to fill in the mask right away, it might recognize that it's better to first 'think a little' and update its working memory (which is of course the observation here).
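The reward described above could be sketched roughly like this. `lm_loss` is a hypothetical helper standing in for the model's loss computation on a target given a context; everything here is illustrative, not from the paper:

```python
def api_call_reward(lm_loss, prompt, target, api_result):
    """Reward an API call by how much concatenating its result onto the
    observation reduces the LM's loss on the target.

    lm_loss(context, target) -> float: the model's loss for predicting
    `target` given `context` (hypothetical interface).
    """
    loss_before = lm_loss(prompt, target)
    loss_after = lm_loss(prompt + " " + api_result, target)
    return loss_before - loss_after  # positive if the call helped
```

This is essentially the self-supervised filtering criterion the Toolformer paper uses to decide which API calls to keep, recast as an RL reward.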
kandalete OP t1_j8lq86w wrote
Reply to comment by Litecoin_Messiah in [R] [P] LUCAS: LUng CAncer Screening dataset by kandalete
Thank you, I will try it
Litecoin_Messiah t1_j8lq20t wrote
maybe you can find it on https://www.kaggle.com/datasets/kmader/finding-lungs-in-ct-data
Ok-Variety-8135 t1_j8l9g5j wrote
Reply to [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
If we treat the output of the transformer as an inner monologue and only produce real output when it calls <action> say: something </action>,
it can speak proactively and hide its inner thoughts, just like humans do.
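A sketch of how the real output could be separated from the inner monologue. The tag format is taken from the comment above; the parsing itself is just an illustration:

```python
import re

# Non-greedy match so multiple actions in one monologue are split correctly.
ACTION_RE = re.compile(r"<action>\s*say:\s*(.*?)\s*</action>", re.DOTALL)

def spoken_output(monologue):
    """Return only the text the model chose to 'say'; everything outside
    the <action> say: ... </action> tags stays hidden as inner thought."""
    return ACTION_RE.findall(monologue)
```

Everything the tags don't capture (the "thinking") would simply never be shown to the user.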
VacuousWaffle t1_j8l2qh3 wrote
Reply to comment by EnjoyableGamer in [D] Quality of posts in this sub going down by MurlocXYZ
I wonder at what compute cost per model evaluation the narrative about pushing for larger models will end.
VacuousWaffle t1_j8l262a wrote
Reply to comment by AdamAlexanderRies in [D] Quality of posts in this sub going down by MurlocXYZ
I just find that Discord is bad at being archived and isn't indexed by search engines. It's kind of a mess of a walled garden, and even searching within it is mediocre.
Maleficent_Stay_7737 OP t1_j8kuma8 wrote
Reply to comment by DLamikins in [R] Hitchhiker’s Guide to Super-Resolution: Introduction and Recent Advances by Maleficent_Stay_7737
You can also find the article on arxiv: https://arxiv.org/abs/2209.13131
Maleficent_Stay_7737 OP t1_j8ktr7x wrote
Reply to comment by super544 in [R] Hitchhiker’s Guide to Super-Resolution: Introduction and Recent Advances by Maleficent_Stay_7737
Not exactly. Both are formulated as inverse problems in image processing. Super-resolution investigates the case where information is lost due to downscaling, whereas deblurring focuses on blurry input (e.g., from low-pass filters). However, they have similar properties, and deep learning-based methods can be applied to both. In this survey, we didn't go deeper into the deblurring topic.
muntoo t1_j8kte9l wrote
Reply to comment by RoboticJan in [R] Hitchhiker’s Guide to Super-Resolution: Introduction and Recent Advances by Maleficent_Stay_7737
arXiv papers smell better.
super544 t1_j8kokxc wrote
Reply to [R] Hitchhiker’s Guide to Super-Resolution: Introduction and Recent Advances by Maleficent_Stay_7737
Does SR include deblurring?
Maleficent_Stay_7737 OP t1_j8kezw2 wrote
Reply to comment by tdgros in [R] Hitchhiker’s Guide to Super-Resolution: Introduction and Recent Advances by Maleficent_Stay_7737
Thank you very much for your comment. It is a valuable note for the community, as this is a very important aspect of image SR. We touch on this topic in the Unsupervised SR section (8) but did not have the space to go into more detail, which doesn't mean it doesn't deserve attention. To fill this gap we referenced the 2022 survey by Liu et al. ("Blind image super-resolution: A survey and beyond", https://arxiv.org/abs/2107.03055), which also covers KernelGAN and related methods and which we find an informative source on blind SR in general.
U03B1Q t1_j8mu4q6 wrote
Reply to [D] Best AI tool, model or service to improve audio speech quality - not noise removal by CeFurkan
I'm aware of a NeurIPS paper that talks about audio super-resolution - Paper and Codebase.
You can see some indicative samples on the author's website https://kuleshov.github.io/audio-super-res/