Recent comments in /f/MachineLearning
jloverich t1_j86oj73 wrote
Reply to [D] Have their been any attempts to create a programming language specifically for machine learning? by throwaway957280
Fortran with better syntax would do it, I think. They'd probably have to go the way of Carbon and support legacy Fortran, but change many other things quite a bit. Still, it has matrix operations similar to NumPy's, whereas Carbon still treats matrices as second-class citizens... Agreed that there should be a better language for this than Python.
wittfm t1_j86o0ks wrote
Reply to [D] Have their been any attempts to create a programming language specifically for machine learning? by throwaway957280
I haven't heard of any attempts, but I remember Jeremy Howard discussing his ideas for designing one in his classes, leaning toward a more descriptive paradigm.
BrotherAmazing t1_j86l5g3 wrote
Reply to comment by themusicdude1997 in [D] Critique of statistics research from machine learning perspectives (and vice versa)? by fromnighttilldawn
Right. I mean, most people suck at their jobs, period though so… 🤷🏼
BrotherAmazing t1_j86kxmq wrote
Reply to comment by Ulfgardleo in [D] Critique of statistics research from machine learning perspectives (and vice versa)? by fromnighttilldawn
You should say “…between pure mathematics and applied math” IMO. Nit-picky, yes, but more accurate.
EuphoricPenguin22 t1_j86kg2j wrote
Reply to comment by Rieux_n_Tarrou in [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
There's an npm package that provides an unofficial API for ChatGPT, but you have to jump through all the hoops to get signed in before it can snag the necessary credentials.
eigenham t1_j86k7ly wrote
Reply to comment by impossiblefork in [D] Can Google sue OpenAI for using the Transformer in their products? by t0t0t4t4
There's also a question of whether a patent of this type will hold up in court. Anything reminiscent of a software patent is on shaky footing.
Insecure--Login t1_j86i4gp wrote
Reply to comment by londons_explorer in [D] Are there emergent abilities of image models? by These-Assignment-936
You would have to search millions to billions of images manually, which sounds very expensive. And searching with a detection model isn't accurate enough.
currentscurrents t1_j86gori wrote
Reply to The Inference Cost Of Search Disruption – Large Language Model Cost Analysis [D] by norcalnatv
In the long run, I think this is something that will be solved with more specialized architectures for running neural networks. TPUs and Tensor Cores are great first steps, but the Von Neumann architecture is holding us back.
Tensor Cores are very fast. But since the Von Neumann architecture has separate compute and memory connected by a bus, the entire network has to travel through the memory bus for every step of training or inference. The overwhelming majority of time is spent waiting on this:
>200 cycles (global memory) + 34 cycles (shared memory) + 1 cycle (Tensor Core) = 235 cycles.
A specialized architecture that physically implements neurons on silicon would no longer have this bottleneck. Since each neuron would be directly connected to the memory it needs (its weights, the outputs of the previous layer), the entire network could run in parallel regardless of size. You could do inference as fast as you could shovel data through the network.
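The cycle arithmetic above can be turned into a quick back-of-the-envelope estimate of how memory-bound such an access is. A minimal sketch, using the illustrative cycle counts quoted in the comment (not measurements of any particular GPU):

```python
# Rough latency model for one Tensor Core step, using the cycle
# counts quoted above (illustrative figures, not measured values).
GLOBAL_MEM_CYCLES = 200   # fetch from global (DRAM) memory
SHARED_MEM_CYCLES = 34    # fetch from on-chip shared memory
TENSOR_CORE_CYCLES = 1    # the actual matrix-multiply step

total_cycles = GLOBAL_MEM_CYCLES + SHARED_MEM_CYCLES + TENSOR_CORE_CYCLES
memory_fraction = (GLOBAL_MEM_CYCLES + SHARED_MEM_CYCLES) / total_cycles

print(total_cycles)              # 235 cycles end to end
print(f"{memory_fraction:.1%}")  # 99.6% of cycles spent waiting on memory
```

Under these numbers, compute accounts for well under 1% of the cycles, which is the point of the comment: making the multiply faster barely moves the total until the memory traffic itself is eliminated.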
SweatyBicycle9758 t1_j86dh2e wrote
SweatyBicycle9758 t1_j86dchx wrote
Reply to comment by Mobile-Bird-6908 in [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
Waiting for that
impossiblefork t1_j86be2h wrote
The GPT family of models is a decoder-only architecture, which is not covered by the patent.
jms4607 t1_j8693ex wrote
Reply to comment by Mobile-Bird-6908 in [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
YOLOv3 would shine
_sphinxfire t1_j868s22 wrote
Reply to [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
Reminder: ChatGPT will routinely leave out information it deems problematic in some way, even when the task is just to rephrase what you said in a different style - and it will do this without telling you.
This effect will also be present, and probably even more pronounced, in summaries.
Mobile-Bird-6908 t1_j86708s wrote
Reply to comment by maxip89 in [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
That is literally how Microsoft is planning to incorporate ChatGPT into Edge. You'll have a side bar where you can talk to ChatGPT about whatever content is displayed on your page.
Mobile-Bird-6908 t1_j866b6d wrote
Reply to comment by endless_sea_of_stars in [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
Let's start an academic journal named "Trashademia", where we only accept articles with clickbait titles. If your research is otherwise not worthy of publication, we will accept it anyway as long as the content is presented with plenty of humour and trash talk.
currentscurrents t1_j865is4 wrote
There's some mutually assured destruction going on here. Microsoft/OpenAI also own patents that cover Google's products. If Google sued them over Transformers, they would sue right back for something else.
Trakeen t1_j863a5t wrote
Reply to comment by A_Light_Spark in [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
I think in this specific example it's because they didn't do any experiments. The conclusion in the abstract is rather superfluous (do more research, ya think?)
lanky_cowriter t1_j85xpgw wrote
Reply to [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
I tried this extension (https://chrome.google.com/webstore/detail/arxivgpt/fbbfpcjhnnklhmncjickdipdlhoddjoh)
It didn't really work for me. It just opens a ChatGPT page in a small window.
Meddhouib10 t1_j85xi28 wrote
Reply to comment by nerfcarolina in [R] I made a mistake in a recent submission, what to do ? by [deleted]
Ok thanks !
nerfcarolina t1_j85wwg9 wrote
Reply to comment by Meddhouib10 in [R] I made a mistake in a recent submission, what to do ? by [deleted]
Reviewers/editors would be unlikely to reject a paper for something like that. If they catch it, they'll mention it in their review. Very few papers are accepted outright. If they're interested in your paper, they will ask you to respond to the reviews and submit a revised paper. At that point in time you can fix it.
is_it_fun t1_j85wuxs wrote
Reply to comment by Trakeen in [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
Right? I can write my own version that just gives you the abstract.
Meddhouib10 t1_j85werz wrote
Reply to comment by nerfcarolina in [R] I made a mistake in a recent submission, what to do ? by [deleted]
Yes, the error is just in a figure that is meant to illustrate the pre-processing; it changes nothing about the arguments or the conclusion. By revision, do you mean after the paper is accepted? (It's my first paper, so bear with me.) I just fear it will be refused because of a silly mistake.
nerfcarolina t1_j85vuuz wrote
It's an error in an illustration that doesn't undermine the main findings? I'd probably wait for reviewer comments and fix it when you revise
__lawless t1_j86qlmc wrote
Reply to [D] Have their been any attempts to create a programming language specifically for machine learning? by throwaway957280
Swift for TensorFlow. Didn't work out, though.