Recent comments in /f/MachineLearning
These-Assignment-936 OP t1_j7t1t2v wrote
Reply to comment by nielsrolf in [D] Are there emergent abilities of image models? by These-Assignment-936
Wow that’s very cool!
logsinh t1_j7t1cna wrote
Reply to [D] Are there any AI model that I can use to improve very bad quality sound recording? Removing noise and improving overall quality by CeFurkan
If the recordings are not confidential, I can process them for you (we are not ready to publish the model yet). If you prefer a public model, this one is pretty good: https://huggingface.co/spaces/hshr/DeepFilterNet2
VectorSpaceModel t1_j7t0rvd wrote
Two Minute Papers on YT is the best resource I've ever come across
Dry-Feature113 t1_j7t04s6 wrote
Reply to [D] Are there any AI model that I can use to improve very bad quality sound recording? Removing noise and improving overall quality by CeFurkan
Can you upload a sample? Is it a bandwidth issue, a booming-mic issue, a cracks-and-pops issue, etc.?
gunshoes t1_j7swxxu wrote
Reply to comment by zds-nlp in What are the best resources to stay up to date with latest news ? [D] by [deleted]
Want to highlight this. It sounds a bit trollish, but "academic Twitter" is a great repository for current research. It's also a cesspool where academics try to engage in social media discourse...
nielsrolf t1_j7stpek wrote
Parti (https://parti.research.google/) showed that being able to spell is an emergent ability. That is the only one I know of. Another I could imagine is compositional understanding (a blue box between a yellow sphere and a green box), though it's more likely that failures there are a data issue. Working out of distribution (a green dog) is also a potential candidate. Interesting question
Imaginary-General687 OP t1_j7stap4 wrote
Reply to comment by Phoneaccount25732 in [D] What do you think about this 16 week curriculum for existing software engineers who want to pursue AI and ML? by Imaginary-General687
Oh, my bad. I will post an updated roadmap
No_Dust_9578 t1_j7st93o wrote
Try to publish an idea; suddenly everyone beats you to it, and you catch up that way.
Seankala t1_j7sssh9 wrote
Feedly. A great tool that curates news and other blogs/articles/social media via RSS.
theanswerisnt42 OP t1_j7ssq9y wrote
Reply to comment by katadh in [Discussion] Cognitive science inspired AI research by theanswerisnt42
Thanks for the suggestion! Out of curiosity, has there been any theoretical work comparing SNNs and ANNs to explore if there are any advantages of using them?
adijsad t1_j7ss72p wrote
Data science weekly newsletter is enough
currentscurrents t1_j7sri62 wrote
Reply to comment by katadh in [Discussion] Cognitive science inspired AI research by theanswerisnt42
SNN-ANN conversion is a kludge: not only do you have to train an ANN first, it also means your SNN is incapable of learning anything new.
Surrogate gradients are better! But they're still non-local and require backwards passes, which means you're missing out on the massive parallelization you could achieve with local learning rules on the right hardware.
Local learning is the dream, and would have benefits for ANNs too: you could train a single giant model distributed across an entire datacenter or even multiple datacenters over the internet. Quadrillion-parameter models would be technically feasible - I don't know what happens at that scale, but I'd sure love to find out.
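For readers unfamiliar with surrogate gradients: the spike function is a hard threshold whose true derivative is zero almost everywhere, so training substitutes a smooth surrogate derivative on the backward pass. A minimal numpy sketch (function names and the fast-sigmoid constants are my own, purely illustrative):

```python
import numpy as np

def spike_forward(v, threshold=1.0):
    """Forward pass: non-differentiable Heaviside spike."""
    return (v >= threshold).astype(float)

def spike_surrogate_grad(v, threshold=1.0, slope=5.0):
    """Backward pass stand-in: fast-sigmoid surrogate,
    d(spike)/dv ~ 1 / (1 + slope*|v - threshold|)^2."""
    return 1.0 / (1.0 + slope * np.abs(v - threshold)) ** 2

v = np.array([0.2, 0.9, 1.1, 2.0])       # membrane potentials
spikes = spike_forward(v)                 # [0., 0., 1., 1.]
grads = spike_surrogate_grad(v)           # smooth, largest near threshold
```

Note the grads are still propagated by ordinary backprop here, which is exactly the non-locality the comment is pointing at; a local rule would update each synapse from information available at that synapse alone.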
keninsyd t1_j7sqycb wrote
Reply to comment by [deleted] in What are the best resources to stay up to date with latest news ? [D] by [deleted]
I believe that is a joke...
EyeSprout t1_j7sqjzc wrote
CNNs, and some very early optimizations for them (like Gabor functions) that used to be kind of useful but are no longer really needed now that our computers are faster, are sort of inspired by neuroscience research. Attention mechanisms were also floating around in neuroscience for quite a while, in models of memory and retrieval, before they were streamlined and simplified into the form we see today.
In general, when things go from neuroscience to machine learning, it takes a lot of stripping things down to the actually relevant and useful components before they become workable. Neuroscientists have a lot of ideas for mechanisms, but not all of them are useful...
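For context on the Gabor functions mentioned above: a Gabor filter is a Gaussian envelope multiplied by a sinusoidal carrier, modeled after V1 simple-cell receptive fields, and was once used to hand-initialize early conv layers. A rough sketch (parameter names and defaults are my own):

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0, psi=0.0):
    """2D Gabor filter: Gaussian envelope * cosine carrier.
    theta = orientation, lam = wavelength, psi = phase offset."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates by theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam + psi)
    return envelope * carrier

k = gabor_kernel()  # 9x9 oriented edge detector
```

Modern CNNs simply learn similar oriented, band-pass kernels in the first layer on their own, which is why the hand-crafted version fell out of use.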
zds-nlp t1_j7sonqm wrote
Follow people relevant to your field. Some of them will surely be proactive
gamerx88 t1_j7smwbb wrote
Reply to [Discussion] Is ChatGPT and/or OpenAI really the leader in the space? by wonderingandthinking
Leader in what space, and in what sense? Fundamental research? Innovation? Market share for LLMs? Hype?
Electrical_Study_617 t1_j7sl8bi wrote
ChatGPT and Bard
sonofmath t1_j7se4mx wrote
Reply to comment by mr_house7 in [D] List of RL Papers by C_l3b
Can't really speak for Hugging Face. It seems to touch on relatively advanced topics and challenging tasks. It certainly looks nice from a practitioner's side, and it is very useful to learn the various tricks that make RL work.
Regarding Silver's course, it is a bit outdated indeed, but the focus is more on the basics of RL, whereas Levine focuses on deep RL and assumes a good understanding of the basics.
Now, there are some topics in Silver's course that are a bit dated (e.g. TD(lambda) with eligibility traces, or linear function approximation) and would be better replaced by topics from more modern courses, typically DQN or AlphaGo (UCL also has a more recent series, which touches on deep RL). But Silver's explanations are very instructive, and it is one of the best-taught university courses I have seen (in general). I would for sure at least watch the first few lectures.
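For anyone who hasn't seen the TD(lambda) material mentioned above, the tabular version with accumulating eligibility traces is tiny. A sketch under my own assumptions about the episode format (each step is a `(state, reward, next_state)` tuple, `None` marking the terminal transition):

```python
import numpy as np

def td_lambda(episodes, n_states, alpha=0.1, gamma=0.9, lam=0.8):
    """Tabular TD(lambda) policy evaluation with accumulating traces."""
    V = np.zeros(n_states)
    for episode in episodes:
        e = np.zeros(n_states)                 # eligibility trace per state
        for s, r, s_next in episode:
            v_next = V[s_next] if s_next is not None else 0.0
            delta = r + gamma * v_next - V[s]  # TD error
            e *= gamma * lam                   # decay all traces
            e[s] += 1.0                        # bump the visited state
            V += alpha * delta * e             # credit recent states too
    return V

# Toy two-state chain: state 0 -> state 1 -> terminal, reward 1 at the end.
episodes = [[(0, 0.0, 1), (1, 1.0, None)]] * 100
V = td_lambda(episodes, n_states=2)
```

`V[1]` converges toward 1 and `V[0]` toward roughly `gamma * V[1]`; lambda only changes how quickly credit flows back along the trace.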
[deleted] t1_j7sbhu7 wrote
Reply to comment by VelveteenAmbush in [N] Microsoft announces new "next-generation" LLM, will be integrated with Bing and Edge by currentscurrents
What we DON'T need is a censored ChatGPT. Maybe if it had sliders or parental controls like a normal search engine. But there shouldn't be universal censorship like what they're trying to do right now.
edjez t1_j7t9rp3 wrote
Reply to [D] Are there emergent abilities of image models? by These-Assignment-936
Another emergent capability (it depends on the model architecture; I don't think Stable Diffusion could have it, but DALL-E does) is generating written letters / "captions" that look like gibberish to us but actually correspond to internal language embeddings for real-world clusters of concepts.