Recent comments in /f/MachineLearning
LetterRip t1_j8dpgxc wrote
Reply to comment by diviludicrum in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
There are plenty of examples of tool use in nature that don't require intelligence. For instance, ants:
https://link.springer.com/article/10.1007/s00040-022-00855-7
The tool use demonstrated by Toolformer can be purely statistical in nature; no intelligence is needed.
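For what it's worth, the paper's own training criterion is statistical: a candidate API call is kept only if conditioning on its result lowers the model's loss on the following tokens. A toy sketch of that filtering idea (made-up loss numbers, not the paper's code):

```python
def keep_api_call(loss_with_result, loss_plain, threshold=0.5):
    # Keep the inserted API call only if conditioning on its result
    # reduces the LM's loss on the continuation by more than `threshold`.
    return (loss_plain - loss_with_result) > threshold

# Hypothetical loss values for illustration: the call helps here...
print(keep_api_call(loss_with_result=2.1, loss_plain=3.4))  # → True
# ...but not here, so this call would be filtered out of the training data.
print(keep_api_call(loss_with_result=3.0, loss_plain=3.2))  # → False
```

No notion of "understanding" enters anywhere; it's just a loss comparison.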
chhaya_35 OP t1_j8do922 wrote
Reply to comment by dafroon in [D] What are resources to start with GNN and GraphML? by chhaya_35
Thanks for the concern. I'm not entering the field because of ChatGPT; I was in the field before all the hype. I had simply moved to the MLOps and edge AI side of things to explore new areas.
Rieux_n_Tarrou t1_j8do5l9 wrote
Reply to comment by beautifoolstupid in [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
Oh, OK, I wasn't aware of this.
Thank you for the context.
MrAcurite t1_j8dnscj wrote
Reply to comment by daking999 in [D] Quality of posts in this sub going down by MurlocXYZ
I get that. I've come to actively hate a lot of the big, visual, attention-grabbing work that comes out of labs like OpenAI, FAIR, and to some extent Stanford and Berkeley. I work more in the trenches, on stuff like efficiency, but Two Minute Papers is never going to feature a paper just because it has an interesting graph or two. Such is life.
daking999 t1_j8dn7ar wrote
Reply to comment by MrAcurite in [D] Quality of posts in this sub going down by MurlocXYZ
It's also frustrating trying to find researchers I want to follow. I work on ML/comp bio, so the people I want to follow are spread across multiple Mastodon servers, which makes them hard to search for.
aadityaura t1_j8dn5zl wrote
Check out Promptify for LLMs: https://github.com/promptslab/Promptify
daking999 t1_j8dmw8r wrote
Reply to comment by AdamAlexanderRies in [D] Quality of posts in this sub going down by MurlocXYZ
I haven't used Discord, but I've heard good things about it; some labs even use it instead of Slack.
dafroon t1_j8dm77f wrote
Machine learning isn't for everyone. I know ChatGPT makes it seem simple, but it's not. Don't enter a field you don't have a passion for just because of it.
MurlocXYZ OP t1_j8dknrw wrote
Reply to comment by throwaway2676 in [D] Quality of posts in this sub going down by MurlocXYZ
I have been filtering by Hot, so my experience has been quite different. I guess I should filter by Top more.
jishhd t1_j8djlmd wrote
Reply to comment by yashdes in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
That's basically what they talk about in this video you may find interesting: https://youtu.be/wYGbY811oMo
TL;DW: it discusses a ChatGPT + WolframAlpha integration where the language model knows when to call out to external APIs to answer questions, such as those requiring precise mathematics.
You can try it out here by pasting your own API key: https://huggingface.co/spaces/JavaFXpert/Chat-GPT-LangChain
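The routing pattern in that demo is simple at its core: the model emits a directive, and glue code executes the external tool and splices the result back in. A minimal sketch (stub functions and the `[CALC(...)]` marker are hypothetical, not the actual LangChain or WolframAlpha API):

```python
import re

def answer(question, llm_stub, calculator):
    # If the model's draft contains a [CALC(...)] directive, run the
    # external tool on its argument and splice the result back in.
    draft = llm_stub(question)
    match = re.search(r"\[CALC\((.*?)\)\]", draft)
    if match:
        result = calculator(match.group(1))
        return draft.replace(match.group(0), str(result))
    return draft

# Stubs standing in for a real LLM and a real math backend.
fake_llm = lambda q: "The answer is [CALC(12*7)]." if "12" in q else "No math needed."
toy_calc = lambda expr: eval(expr, {"__builtins__": {}})  # toy only, never eval untrusted input

print(answer("What is 12*7?", fake_llm, toy_calc))  # → The answer is 84.
```

The real integrations are fancier (prompting the model about which tools exist, retrying, etc.), but the delegate-then-splice loop is the same idea.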
BashsIash t1_j8djkk4 wrote
Reply to comment by EducationalCicada in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
Can it really be impossible? I'd assume not, since otherwise we couldn't be intelligent in the first place.
GFrings t1_j8dixv3 wrote
Reply to [D] Is a non-SOTA paper still good to publish if it has an interesting method that does have strong improvements over baselines (read text for more context)? Are there good examples of this kind of work being published? by orangelord234
In general, absolutely yes. In practice, the review process for most tier 1 and 2 conferences right now is a complete roll of the dice. For example, WACV and some other conferences explicitly state in their reviewer guidelines that you should consider the novelty of the approach over the performance. But I still see many reviews that ding the work for lack of SOTAness. The best thing you can do is make your work as academically rigorous as possible (good baseline experiments, ablation studies, analysis, and so on) and submit until you get in. Don't worry about what you can't control, which is randomly being assigned a dud reviewer.
throwaway2676 t1_j8digqj wrote
Reply to [D] Quality of posts in this sub going down by MurlocXYZ
Here are the top 10 posts on my front page right now:
>[R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research
>[D] Quality of posts in this sub going down
>[D] Is a non-SOTA paper still good to publish if it has an interesting method that does have strong improvements over baselines (read text for more context)? Are there good examples of this kind of work being published?
>[R] [N] pix2pix-zero - Zero-shot Image-to-Image Translation
>[P] Extracting Causal Chains from Text Using Language Models
>[R] [P] Adding Conditional Control to Text-to-Image Diffusion Models. "This paper presents ControlNet, an end-to-end neural network architecture that controls large image diffusion models (like Stable Diffusion) to learn task-specific input conditions." Example uses the Scribble ControlNet model.
>[R] [P] OpenAssistant is a fully open-source chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
>[D] What ML dev tools do you wish you'd discovered earlier?
>[R] CIFAR10 in <8 seconds on an A100 (new architecture!)
>[D] Engineering interviews at Anthropic AI?
From this list, the only non-academic/"low-quality" posts are the last one and this one. This is consistent with my normal experience, so I'm not really sure what you're talking about.
ilovethrills t1_j8dgpo1 wrote
Reply to comment by Remarkable_Ad9528 in [D] What ML or ML-powered projects are you currently building? by TikkunCreation
So do you work on projects too, or do you just keep up with industry news?
pyepyepie t1_j8dgah3 wrote
Reply to comment by belacscole in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
Did it learn to master tools, though? I see it more as a neuro-symbolic system (is that the correct term?). This happens a lot in production.
Pawngrubber t1_j8dg1pm wrote
Reply to comment by dustintran in [D] Quality of posts in this sub going down by MurlocXYZ
Where on Twitter? How should I get started?
uristmcderp t1_j8dg14x wrote
Reply to comment by daking999 in [D] Quality of posts in this sub going down by MurlocXYZ
If there are people willing to moderate with an iron fist, an academically focused subreddit can work well. An open forum always gets derailed, real names or no.
[deleted] t1_j8df8cp wrote
[deleted] t1_j8de3rp wrote
Reply to comment by Varpie in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
[deleted]
codename_failure t1_j8dc6h2 wrote
Reply to [D] Quality of posts in this sub going down by MurlocXYZ
The only solution would be to create /r/AcademicMachineLearning to discuss papers there, and to leave this subreddit for the general public.
Cherubin0 t1_j8dbz88 wrote
Reply to comment by Rhannmah in [D] Can Google sue OpenAI for using the Transformer in their products? by t0t0t4t4
Yes, the same as owning trade routes. If you don't want others to use it, then don't publish it, or don't invest in the first place. Leave the market to good people who don't feel the need to restrict other humans' freedoms.
Cherubin0 t1_j8dbny1 wrote
Reply to comment by cantfindaname2take in [D] Can Google sue OpenAI for using the Transformer in their products? by t0t0t4t4
Or they should not do R&D if they cannot accept other people's right to use their brains however they like. This is like saying a thief put in so much effort that he should be allowed to keep the stolen goods.
uristmcderp t1_j8db0gw wrote
Reply to comment by diviludicrum in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
The whole "assessing its own success" part is the bottleneck for most interesting problems. You can't have a feedback loop unless the system can accurately evaluate whether it's doing better or worse. This isn't a trivial problem either, since humans aren't all that great at using absolute metrics to describe quality once past a minimum threshold.
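To make the point concrete: any self-improvement loop is only as good as its scorer. A toy hill-climbing sketch (hypothetical `propose`/`evaluate` callables; if `evaluate` misorders candidates, the loop stalls or wanders):

```python
def improve(candidate, propose, evaluate, steps=10):
    # Greedy feedback loop: accept a proposed change only when the
    # evaluator scores it strictly higher. The evaluator IS the loop;
    # a noisy or miscalibrated scorer makes the whole thing useless.
    best, best_score = candidate, evaluate(candidate)
    for _ in range(steps):
        new = propose(best)
        score = evaluate(new)
        if score > best_score:
            best, best_score = new, score
    return best

# Toy example: walk toward 5, scored by negative distance from 5.
result = improve(0, propose=lambda x: x + 1, evaluate=lambda x: -abs(x - 5))
print(result)  # → 5
```

With subjective targets like "is this output better?", building that `evaluate` reliably is exactly the hard part.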
Despacereal t1_j8d971u wrote
Reply to comment by belacscole in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
In a way yes. I think general intelligence (consciousness in most animals) developed evolutionarily to manage a wide variety of sensory inputs and tasks, and to bridge the gaps between them.
As we develop more individual areas of AI, we will naturally start to combine them to create more powerful programs, such as Toolformer combining the strengths of LLMs and other models. Once we have these connections between capabilities, it should be easier to develop new models that learn these connections more deeply and can do more things.
Some of the things that set us apart from other animals are our incredible language and reasoning capabilities which allow us to understand and interact with an increasingly complex world and augment our capabilities with tools. The perceived understanding that LLMs display using only patterns in text is insane. Combine that with the pace of developments in Chain of Thought reasoning, use of Tools, other areas handling visuals, sound, and motion, and multimodal AI, and the path to AGI is becoming clearer than the vision of a MrBeast™ cataracts patient.
thecodethinker t1_j8dpuru wrote
Reply to comment by LetterRip in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
It is purely statistical, isn’t it?
LLMs are statistical models after all.
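Right; at the output end an LM is just a probability distribution over the next token. A self-contained sketch of temperature-scaled softmax sampling (toy logits, not any particular model's code):

```python
import math
import random

def sample_next(logits, temperature=1.0):
    # Scale logits by temperature, softmax into probabilities,
    # then draw one index from the resulting distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1  # guard against float rounding

# With one overwhelmingly likely token, sampling is effectively argmax.
print(sample_next([100.0, 0.0]))  # → 0
```

Everything a decoder-only LLM emits comes out of this one operation, repeated token by token.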