Recent comments in /f/MachineLearning
aicharades OP t1_j84a2g3 wrote
Reply to comment by ShermanTSE in [P] ChatGPT without size limits: upload any pdf and apply any prompt to it by aicharades
That must be a big document! I may need to adjust the code for more caching
Reddit1990 t1_j848225 wrote
Reply to [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
... isn't that the point of the summary at the start of a paper?
CeFurkan OP t1_j8466db wrote
Reply to comment by sid_reacher in [D] Best available text to speech free AI model out there for english by CeFurkan
I have been spending time with Tortoise TTS since yesterday.
I couldn't reproduce my voice yet, but I'm starting to understand it :/
It is also super slow - damn slow on an RTX 3060, even with CUDA running.
ShermanTSE t1_j844jex wrote
Reply to comment by aicharades in [P] ChatGPT without size limits: upload any pdf and apply any prompt to it by aicharades
I also got a RateLimitError recently when I tried.
ShermanTSE t1_j843jvs wrote
Reply to comment by aicharades in [P] ChatGPT without size limits: upload any pdf and apply any prompt to it by aicharades
I couldn't find the map page in wrotescan. Can you show me how to get it to work?
niclas_wue t1_j842pyo wrote
Reply to [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
Hey, great idea, looks very interesting. Do you use the abstract as an input or do you actually parse the paper? I built something quite similar: http://www.arxiv-summary.com which summarizes trending AI papers as bullet points. However, I think a chrome extension allows for a much more flexible paper choice, which is really great.
Trakeen t1_j841ust wrote
Reply to [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
I really like ChatGPT, but I typically find the abstract good enough to summarize the paper.
cajmorgans t1_j8416i1 wrote
Even if it becomes illegal, the democratization of machine learning depends on it being legal. If Getty wins this, it would mean that a few very large companies would be the only ones that can build large models, because they "own" most of the data. Facebook, for example, does a lot to prevent people from scraping public data from its apps.
svd- t1_j840c7q wrote
Reply to comment by _sshin_ in [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
Genuinely interested in learning how to build such things. Can you explain how you built this extension and linked it to ChatGPT, from a system design perspective?
wideEyedPupil t1_j83ywpd wrote
Reply to [Project] I used a new ML algo called "AnimeSR" to restore the Cowboy Bebop movie and up rez it to full 4K. Here's a link to the end result - honestly think it looks amazing! (Video and Model link in post) by VR_Angel
Would like to see a sample of the source movie file, to judge its resolution and compression artifacts. It looks impressive anyhow.
jimmymvp t1_j83v503 wrote
Reply to comment by AdFew4357 in [D] Critique of statistics research from machine learning perspectives (and vice versa)? by fromnighttilldawn
I'm not sure if you have a good overview of ML research if this is your claim. Sounds like you've read too many blog posts on transformers. I'd suggest going through some conference proceedings to get a good overview, there's some pretty rigorous (not just stats) stuff out there. I agree though that there is a substantial subset of research in ML that works towards tweaking and pushing the boundaries of what can be achieved with existing methods, which is for me personally exciting to see! A lot of cool stuff came out of scaling up and tweaking the architectures.
_sshin_ OP t1_j83t0er wrote
Reply to [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
https://chrome.google.com/webstore/detail/arxivgpt/fbbfpcjhnnklhmncjickdipdlhoddjoh
To use this extension, simply install it and visit a link to an arXived paper. It will generate a summary of the paper, including a one-sentence summary, 3-5 questions for the authors, and 3-5 suggestions for related topics. The query prompt can be customized to fit your specific needs and preferences.
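For anyone curious how a customizable query prompt like that might be assembled before being sent to the ChatGPT API, here is a minimal sketch. The template text, field names, and defaults are illustrative assumptions, not the extension's actual code.

```python
# Hypothetical sketch of building a user-customizable summary prompt
# from paper metadata. Field names and defaults are made up for
# illustration; they are not taken from the arxivGPT extension.

DEFAULT_TEMPLATE = (
    "Summarize the following paper in one sentence, then list "
    "{n_questions} questions for the authors and {n_topics} related topics.\n\n"
    "Title: {title}\n\nAbstract: {abstract}"
)

def build_prompt(title, abstract, template=DEFAULT_TEMPLATE,
                 n_questions=3, n_topics=3):
    """Fill the customizable template with the paper's metadata."""
    return template.format(title=title, abstract=abstract,
                           n_questions=n_questions, n_topics=n_topics)

prompt = build_prompt("Attention Is All You Need",
                      "We propose the Transformer...")
```

The resulting string would then be sent as the user message of a chat completion request; letting the user edit the template is what makes the prompt customizable.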
mr_birrd t1_j83q197 wrote
Reply to comment by ZestyData in [D] What ML or ML-powered projects are you currently building? by TikkunCreation
Also because it's often a damn rabbit hole, and just as you finish, Nvidia comes up with the same thing, just 10 times faster.
tetelestia_ t1_j83o6nh wrote
Google's API is pretty cheap. Might be free depending on how much you need.
thiru_2718 t1_j83kkbh wrote
Reply to [D] Transformers for poker bot by lmtog
Poker depends on looking far enough ahead to be able to play game theory optimal (GTO) moves that maximize the expected value over a long run of hands. You can train a transformer on a ton of data, and get it to predict context-specific plays, but if the number of possible decision-branches is growing exponentially, is this enough?
But honestly, I don't know much about these types of RL-type problems. How is AlphaGo structured?
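To make the exponential-branching worry concrete, here is a toy sketch that just counts the leaves of a uniform decision tree. The branching factor and depth are illustrative numbers, not a model of real poker, which also branches at chance (card) nodes.

```python
# Toy illustration of why naive game-tree search blows up: the number
# of leaves in a uniform tree with branching factor b and depth d is
# b**d, so each extra decision point multiplies the work.

def count_leaves(branching, depth):
    if depth == 0:
        return 1
    return branching * count_leaves(branching, depth - 1)

# Even a modest 10 actions over 8 decision points is 10^8 leaves.
print(count_leaves(10, 8))  # 100000000
```

This is why methods like counterfactual regret minimization or the tree search in AlphaGo avoid exhaustive enumeration and instead sample or prune the tree.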
bacocololo t1_j83izz9 wrote
TaxSuspicious8708 t1_j83hqiw wrote
I'm working in game dev, and on a side project I'm currently building a little (newbie) ML framework in C# to explore FFNNs, CNNs, and probably RNNs. I'm currently struggling with backpropagation in the convolutional layer, but it's a matter of time before it works (I hope) 😂
I'm very curious to see the possible applications in game AI. I already did some test projects before: simple ML agents with small fully connected networks. But I want to go further and probably try mixing the utility-based AI pattern with reinforcement learning methods or a genetic algorithm.
I also think a convolutional network could maybe be used to feed some 'spatialized' data to an AI agent and let it make decisions about movement and such.
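On the conv-layer backprop struggle: the key identity is that the weight gradient is itself a correlation between the input and the upstream gradient. A minimal 1D sketch (pure Python, single channel, valid padding; a real framework vectorizes and generalizes this to 2D):

```python
# Minimal 1D convolution (cross-correlation) forward and backward pass.
# Forward: out[i] = sum_j x[i+j] * w[j]
# Weight gradient: dL/dw[j] = sum_i x[i+j] * dout[i]
# Input gradient: scatter w[j] * dout[i] back to position i + j.

def conv1d_forward(x, w):
    n = len(x) - len(w) + 1  # valid-padding output length
    return [sum(x[i + j] * w[j] for j in range(len(w))) for i in range(n)]

def conv1d_backward(x, w, dout):
    # dL/dw[j]: correlate the input with the upstream gradient
    dw = [sum(x[i + j] * dout[i] for i in range(len(dout)))
          for j in range(len(w))]
    # dL/dx[k]: accumulate contributions from every output that used x[k]
    dx = [0.0] * len(x)
    for i in range(len(dout)):
        for j in range(len(w)):
            dx[i + j] += w[j] * dout[i]
    return dw, dx

x = [1.0, 2.0, 3.0, 4.0]
w = [1.0, -1.0]
out = conv1d_forward(x, w)                       # [-1.0, -1.0, -1.0]
dw, dx = conv1d_backward(x, w, [1.0, 1.0, 1.0])  # dw = [6.0, 9.0]
```

Translating the same two loops to C# 2D arrays gives the convolutional layer's backward pass; the structure of the loops is identical, just with one more index.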
sid_reacher t1_j83gv0g wrote
I am curious about this too!
goj-145 t1_j83801h wrote
Reply to comment by 2blazen in [D] Is it legal to use images or videos with copyright to train a model? by Tlaloc-Es
It would have been MUCH harder to prove if they spent a day preprocessing the images first!
2blazen t1_j8378vr wrote
Reply to comment by goj-145 in [D] Is it legal to use images or videos with copyright to train a model? by Tlaloc-Es
So you're saying Stability wouldn't have issues if they hired an intern to git clone a watermark remover and put the images through it first?
londons_explorer t1_j835xx0 wrote
Reply to comment by Insecure--Login in [D] Are there emergent abilities of image models? by These-Assignment-936
You search the training image database for pictures of koalas with wine glasses... And there won't be many examples in there, and you check each one.
goj-145 t1_j831dqg wrote
Reply to comment by Miguel33Angel in [D] Is it legal to use images or videos with copyright to train a model? by Tlaloc-Es
The question is can you use copyrighted info to train a model. The answer is we don't know yet.
The current lawsuit that will set precedent on this concerns an image generation model trained on copyrighted Getty images. It's proven that Getty images were used because the watermark shows up in the model's output many times, which answers the question of "how can they prove it".
Once that is decided, we will know whether it is legal in those jurisdictions. And then we get to: "do we do it anyway even though it's illegal?"
Iunaml t1_j84cwot wrote
Reply to comment by Trakeen in [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
Good enough to know whether I have to read it or not. I still end up disappointed half of the time, because an abstract is often a bit clickbaity.