Recent comments in /f/MachineLearning
AdamAlexanderRies t1_j8cahbg wrote
Reply to comment by daking999 in [D] Quality of posts in this sub going down by MurlocXYZ
What about a public discord server that only allows actual researchers to post, but allows everyone to view? Easy with roles.
MrAcurite t1_j8c9u48 wrote
Reply to comment by daking999 in [D] Quality of posts in this sub going down by MurlocXYZ
I joined the Sigmoid Mastodon. It's a wasteland of people posting AI "art," pseudo-intellectual gibberish about AI, and nonsense that belongs on the worst parts of LinkedIn.
currentscurrents t1_j8c51f0 wrote
Reply to comment by TheRealMichaelScoot in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
...and getting radically improved performance across several important tasks because of calling those APIs.
Plus, calling APIs is very important for integration into real systems because they can trigger real-world actions. Imagine a Siri that calls a bunch of different APIs based on complex instructions you give it.
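The paper has the model emit inline calls like `[Calculator(400/1400)]` directly in its output, which are then executed and replaced with their results. A minimal sketch of that execution step (the bracket syntax follows the paper; the parsing code itself is illustrative, not from the paper):

```python
import re

def execute_tool_calls(text: str) -> str:
    """Replace inline Toolformer-style Calculator calls with their results."""
    def run(match: re.Match) -> str:
        expr = match.group(1)
        # Only allow simple arithmetic, so eval is safe here.
        if not re.fullmatch(r"[\d\s+\-*/().]+", expr):
            return match.group(0)  # leave unrecognized calls untouched
        return str(eval(expr))

    # Call syntax as in the paper: [Calculator(<expression>)]
    return re.sub(r"\[Calculator\((.*?)\)\]", run, text)

print(execute_tool_calls("Out of 1400 participants, [Calculator(400/1400)] passed."))
```

The interesting part is that the model learns *where* to insert these calls on its own; the executor is the trivial half.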
techie0007 t1_j8c3mun wrote
Reply to comment by Nhabls in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
Has it not shipped yet buddy? https://www.reddit.com/r/ChatGPT/comments/110r2j4/ive_been_accepted_to_full_version_of_bingai_any/
piman01 t1_j8c35wp wrote
Reply to [D] Quality of posts in this sub going down by MurlocXYZ
It's because the name of this sub is a buzzword. There would be far fewer of these posts if it were called something like "statistical learning".
gruevy OP t1_j8c2o6f wrote
Reply to comment by Remarkable_Ad9528 in [D] Locally-runnable text to speech AI? by gruevy
Probably not, I want it to read long form text such as fiction. Tortoise TTS worked out pretty well but holy crap is it slow
JackBlemming t1_j8c2llw wrote
Reply to [R] DIGIFACE-1M — synthetic dataset with one million images for face recognition by t0ns0fph0t0ns
Everyone involved in this project is being paid too much.
TheRealMichaelScoot t1_j8c17ym wrote
Reply to [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
This is a BS paper. It's simply calling APIs.
BenjaminJamesBush t1_j8c12it wrote
Reply to comment by bballerkt7 in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
Technically this has always been true.
aDutchofMuch t1_j8c0gcn wrote
Reply to [D] Quality of posts in this sub going down by MurlocXYZ
My post earlier today on DigiFace discussing its uses was just removed by the mods for literally no reason. Maybe the discussion is going downhill because of too much oversight.
Other-Economist8538 t1_j8byi4v wrote
Reply to comment by _poisonedrationality in [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
It uses ChatGPT, not GPT. It makes the same API call that you make on the https://chat.openai.com/chat site. This project is forked from this repo, and you can check the code.
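For a sense of what such a request looks like, here is a sketch of a chat-style summarization payload. This is illustrative only: the model name, prompt wording, and helper function are assumptions, not taken from the extension's code.

```python
import json

def build_summary_request(abstract: str, model: str = "gpt-3.5-turbo") -> dict:
    """Build a chat-completions-style request body for summarizing a paper.

    The prompt wording and default model name are illustrative assumptions.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "Summarize research papers concisely."},
            {"role": "user", "content": f"Summarize this abstract:\n\n{abstract}"},
        ],
    }

payload = build_summary_request("We propose a new method for ...")
print(json.dumps(payload, indent=2))
```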
Embarrassed_Ride_896 t1_j8byciz wrote
Reply to [D] Quality of posts in this sub going down by MurlocXYZ
The bubble has started. Everyone who got laid off will want to be an AI expert.
Other-Economist8538 t1_j8by6ld wrote
Reply to comment by lanky_cowriter in [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
No, if you visit a paper's detail page (for example, https://arxiv.org/abs/2302.04818), it embeds a section and ChatGPT will start writing. Check the screenshot on the web store page again.
Remarkable_Ad9528 t1_j8bxx1t wrote
Reply to [D] Locally-runnable text to speech AI? by gruevy
I've used React-Speech before in a project to test mental-math arithmetic. For example, my project would show a card with an addition/subtraction or multiplication/division problem, and the user's job was to speak the answer out loud. Using this library, I was able to capture the user's answer as text and check whether or not they got it correct. Would something like this work for whatever you're trying to do?
diviludicrum t1_j8bxeji wrote
Reply to comment by big_gondola in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
I still think u/belacscole is right - this is analogous to the rudimentary use of tools, which can be done by some higher primates and a small handful of other animals. Tool use requires a sufficient degree of critical thinking to recognise that a problem exists and to select the appropriate tool for solving it. If done with recursive feedback, this would lead to increasingly skilful tool selection and use over time, resulting in better detection and solution of problems. Of course, if a problem cannot possibly be solved with the tools available, no matter how refined their usage, it will never be overcome this way - humans have faced these sorts of technocultural chokepoints repeatedly throughout our history. Such problems require the development of new tools.
So the next step in furthering the process is abstraction, which takes intelligence from critical thinking to creative thinking. If a tool-capable AI can be trained on a dataset that links diverse problems with the models that solve those problems and the process that developed those models, such that it can attempt to create and then implement new tools to solve novel problems, then assess its own success (likely via supervised learning, at least at first), we may be able to equip it with the “tool for making tools”, such that it can solve the set of all AI-solvable problems (given enough time and resources).
Remarkable_Ad9528 t1_j8bxagx wrote
I publish a newsletter weekdays at 6:30 AM EST called GPTRoad.
It's not ML-powered yet, but it's geared toward SWEs interested in ML. Every issue covers newly published research, tooling, and different libraries (langchain, gpt-index, pinecone, promptify, etc.), plus general news updates. It's short (should take ~3 min to read; it's bullet-point formatted).
I'm a SWE (former Amazonian) interested in building projects that use AI, so I figured I should version control all my research for other SWEs as they onboard into the new era. I have about 100 subs right now.
sunbunnyprime t1_j8bpqov wrote
Reply to comment by themusicdude1997 in [D] Critique of statistics research from machine learning perspectives (and vice versa)? by fromnighttilldawn
Most ML scientists aren't actually fluent in the algorithms they apply. They have a superficial understanding, they're slow and buggy programmers, they spend months on models that should take a few days to put together, they overindex on hyperparameter selection and tuning and playing with new algorithms, and they don't know how to validate their models, so they end up deploying garbage that's often literally no better than a coin flip. But they're great at convincing people they're right on the cusp of solving a really big problem and adding a ton of value, which buys them enough time to fart around for a few years, get another job with a 30% raise, and do it all over again.
Soundwave_47 t1_j8bpaqd wrote
Reply to comment by sam__izdat in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
Yes, please keep this sort of stuff in /r/futurology or something. We're here trying to formalize the n steps needed to even get to something that vaguely resembles AGI.
sunbunnyprime t1_j8bp6uz wrote
Reply to comment by JurgenSchmidthuber in [D] Critique of statistics research from machine learning perspectives (and vice versa)? by fromnighttilldawn
I’m a principal machine learning scientist at a very well known company and I’m also a kaggle master. You’re reading a lot into a few words I crapped out in a reddit comment.
Remarkable_Ad9528 t1_j8boasb wrote
Reply to [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
Isn't this what Bing is doing out of the box? Same with the browser Opera (they're releasing a new feature called the "Shorten" button, which internally calls OpenAI). I'd expect Google to release this as part of Chrome as well.
daking999 t1_j8bnr9q wrote
Reply to comment by MurlocXYZ in [D] Quality of posts in this sub going down by MurlocXYZ
Completely agree. I use reddit casually and twitter more as a work/research tool, but I much prefer reddit to twitter as a platform (especially post-Musk). I tried getting into Mastodon, but it just feels like a more awkward-to-use twitter. An academic-focused ML subreddit might be good. Maybe even enforce "real" names for users to post?
sam__izdat t1_j8bn58f wrote
Reply to comment by mycall in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
I don't want to be that guy, but can y'all leave the doe-eyed ML mysticism to the more Ray Kurzweil themed subreddits?
Myxomatosiss t1_j8bllfu wrote
Reply to [D] Quality of posts in this sub going down by MurlocXYZ
"How many years before ChatGPT takes control of the global nuclear arsenal and demands the destruction of all humans?"
RunCodeCook t1_j8bjty1 wrote
Experiment tracking (weights and biases, mlflow, neptune, etc…)
Organizing research papers (zotero, paperpile, etc…)
vzq t1_j8cblxe wrote
Reply to comment by ferndoll6677 in [R] DIGIFACE-1M — synthetic dataset with one million images for face recognition by t0ns0fph0t0ns
Not a lot if you play optimally.
log₂(10⁶) is only about 20.
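A quick check of that figure (each optimal yes/no question halves the candidate set, so identifying one face out of a million takes about log₂(10⁶) questions):

```python
import math

# Binary-search-style halving over 1,000,000 candidates.
guesses = math.log2(1_000_000)
print(guesses)  # ~19.93, i.e. about 20 questions
```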