Recent comments in /f/MachineLearning

currentscurrents t1_j8c51f0 wrote

...and getting radically improved performance across several important tasks from calling those APIs.

Plus, calling APIs is very important for integration into real systems because they can trigger real-world actions. Imagine a Siri that calls a bunch of different APIs based on complex instructions you give it.
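The dispatch layer behind that kind of assistant can be sketched very simply: the model emits a structured tool call, and a small registry maps tool names to handlers that trigger the real-world action. This is a minimal hypothetical sketch, not any real assistant API — `ToolCall`, the registry, and the handler names are all made up for illustration.

```typescript
// Hypothetical sketch: routing a model-emitted tool call to an API handler.
type ToolCall = { tool: string; args: Record<string, string> };

// Each entry stands in for a real API call (weather service, timer, etc.).
const registry: Record<string, (args: Record<string, string>) => string> = {
  weather: (a) => `Forecast for ${a.city}: sunny`,
  timer: (a) => `Timer set for ${a.minutes} minutes`,
};

function dispatch(call: ToolCall): string {
  const handler = registry[call.tool];
  if (!handler) throw new Error(`Unknown tool: ${call.tool}`);
  return handler(call.args);
}

// e.g. the model emits {"tool": "weather", "args": {"city": "Boston"}}
console.log(dispatch({ tool: "weather", args: { city: "Boston" } }));
```

A real system would validate the model's JSON against a schema before dispatching, since the model can emit malformed or unknown tool names.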

30

Remarkable_Ad9528 t1_j8bxx1t wrote

I've used React-Speech before in a project to test mental-math arithmetic. For example, my project would show a card with an addition/subtraction or multiplication/division problem, and the user's job was to speak the answer out loud. Using this library I was able to capture the user's answer as text and check whether they got it correct. Would something like this work for whatever you're trying to do?
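The grading side of that setup can be a pure function, independent of the speech library. A minimal sketch, assuming the library hands back the recognized answer as a plain string (e.g. a `transcript` value from a speech-recognition hook) — the number-word table and problem shape here are hypothetical:

```typescript
// Recognizers sometimes emit digits ("7") and sometimes words ("seven"),
// so accept both. Table covers only small answers for illustration.
const numberWords: Record<string, number> = {
  zero: 0, one: 1, two: 2, three: 3, four: 4, five: 5,
  six: 6, seven: 7, eight: 8, nine: 9, ten: 10,
};

function parseSpokenNumber(transcript: string): number | null {
  const t = transcript.trim().toLowerCase();
  if (/^-?\d+$/.test(t)) return parseInt(t, 10); // digit form
  return t in numberWords ? numberWords[t] : null; // word form
}

function isCorrect(transcript: string, expected: number): boolean {
  return parseSpokenNumber(transcript) === expected;
}

console.log(isCorrect("7", 3 + 4));     // true
console.log(isCorrect("seven", 3 + 4)); // true
```

In the browser, `transcript` would come from the speech-to-text hook; everything else runs anywhere, which makes it easy to unit-test the grading logic separately.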

2

diviludicrum t1_j8bxeji wrote

I still think u/belacscole is right - this is analogous to the rudimentary use of tools, which can be done by some higher primates and a small handful of other animals. Tool use requires a sufficient degree of critical thinking to recognise that a problem exists and to select the appropriate tool for solving it. Done with recursive feedback, this would lead to increasingly skilful tool selection and use over time, and thus to better detection and solving of problems. Of course, if a problem cannot possibly be solved with the tools available, no matter how refined their usage, it will never be overcome this way - humans have hit these sorts of technocultural chokepoints repeatedly throughout our history. Such problems require the development of new tools.

So the next step in furthering the process is abstraction, which takes intelligence from critical thinking to creative thinking. Suppose a tool-capable AI can be trained on a dataset that links diverse problems to the models that solve them and to the processes that developed those models, such that it can attempt to create and then implement new tools for novel problems, and then assess its own success (likely via supervised learning, at least at first). Then we may be able to equip it with the "tool for making tools", letting it solve the set of all AI-solvable problems (given enough time and resources).

41

Remarkable_Ad9528 t1_j8bxagx wrote

I publish a newsletter weekdays at 6:30 AM EST called GPTRoad.

It's not ML-powered yet, but it's geared toward SWEs interested in ML. Every issue covers newly published research, tooling, and different libraries (langchain, gpt-index, pinecone, promptify, etc.), plus general news updates. It's short (~3 minutes to read) and bullet-point formatted.

I'm a SWE (former Amazonian) interested in building projects that use AI, so I figured I should version-control all my research for other SWEs as they onboard into the new era. I have about 100 subs right now.

1

sunbunnyprime t1_j8bpqov wrote

Most ML scientists aren’t actually fluent in the application of the algorithms they use. They have a superficial understanding, they’re slow and buggy programmers who write slow code, spend months on models that should take a few days to put together, overindex on hyperparameter selection and tuning, play with new algorithms, and don’t know how to validate their models - so they end up deploying garbage that is often literally no better than a coin flip. But they’re great at convincing people that they’re right on the cusp of solving a really big problem and adding a ton of value, which buys them enough time to fart around for a few years, get another job with a 30% raise, and do it all over again.

−2

daking999 t1_j8bnr9q wrote

Completely agree. I use reddit casually and twitter as more of a work/research tool, but I much prefer reddit to twitter as a platform (especially post-Musk). I tried getting into mastodon, but it just feels like a more awkward-to-use twitter. An academic-focused ML subreddit might be good. Maybe even enforce "real" names for users to post?

14