Recent comments in /f/MachineLearning
shaner92 t1_j7kfyfj wrote
You should be thinking about what's most widely used. What will your coworkers be able to collaborate in? Where will you be able to get the most support (forums, tutorials, even libraries)? This should be the only thing that matters for your first language, and in this case it's clearly Python.
I think people spend way too much time worrying about the 'best', which makes sense because it's a lot of work to learn your first language. It gets easier to switch later, though, so it's better to just jump into the easiest and most supported one.
aicharades OP t1_j7kfx3o wrote
visarga t1_j7kfvtf wrote
Try putting the data into GPT-3 and hoping it knows the artists. I've enjoyed its music recommendations a few times.
Cherubin0 t1_j7kfvps wrote
Reply to [N] Getty Images sues AI art generator Stable Diffusion in the US for copyright infringement by Wiskkey
There are enough open-culture images out there; images from greedy corporations aren't needed anymore.
helliun t1_j7kfuh1 wrote
What library do you use for summarization?
Nhabls t1_j7kf73c wrote
Reply to comment by techie0007 in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
???
aicharades OP t1_j7kf4or wrote
Here's how it summarizes big documents:
Map Reduce
This method involves an initial prompt on each chunk of data (for summarization tasks, this could be a summary of that chunk; for question-answering tasks, it could be an answer based solely on that chunk). Then a different prompt is run to combine all the initial outputs. This is implemented in LangChain as the MapReduceDocumentsChain.
Pros: Can scale to larger documents (and more documents) than StuffDocumentsChain. The calls to the LLM on individual documents are independent and can therefore be parallelized.
Cons: Requires many more calls to the LLM than StuffDocumentsChain. Loses some information during the final combining call.
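The map/reduce pattern described above can be sketched in plain Python. This is not LangChain's actual MapReduceDocumentsChain; `fake_llm` is a hypothetical stub standing in for a real LLM call (e.g. an OpenAI Completions request):

```python
# Minimal sketch of map-reduce summarization, assuming a hypothetical
# `fake_llm` stand-in for a real LLM call. Swap in your own client.

def fake_llm(prompt: str) -> str:
    # Stub "summarizer": keeps the first 5 words of the content after the colon.
    content = prompt.split(":", 1)[1].strip()
    return " ".join(content.split()[:5])

def chunk(text: str, size: int) -> list[str]:
    # Split the document into word-count chunks.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def map_reduce_summarize(document: str, chunk_size: int = 50) -> str:
    # Map step: summarize each chunk independently (these calls are
    # independent, so a real implementation could parallelize them).
    partial = [fake_llm(f"Summarize this: {c}") for c in chunk(document, chunk_size)]
    # Reduce step: one final call combines the partial summaries.
    return fake_llm("Combine these summaries: " + " ".join(partial))
```

The trade-off in the pros/cons above shows up directly here: you pay one LLM call per chunk plus one combining call, and the final call only sees the partial summaries, not the original text.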
Here's how a separate Wikipedia agent chain works:
Agents use an LLM to determine which actions to take and in what order. An action can either be using a tool and observing its output, or returning to the user.
It uses agent-based reasoning: the model asks itself a series of questions until it gets to the right answer. It sorta looks like Wikipedia racing.
Here's a sample LangChain agent using ReAct-style reasoning over Wikipedia/TMDB:
Log
Entering new AgentExecutor chain...
Thought 1: I need to search for a romantic comedy film about a Senatorial candidate falling for a hotel maid, thinking she is a socialite, when he sees her trying on a wealthy woman's dress.
Action 1: Search[romantic comedy film Senatorial candidate hotel maid socialite wealthy woman's dress]
Observation 1: Maid in Manhattan is a 2002 American romantic comedy-drama film directed by Wayne Wang and based on a story by John Hughes, who is credited using a pseudonym. It stars Jennifer Lopez, Ralph Fiennes, and Natasha Richardson. In the film, a hotel maid and a high-profile politician fall in love. The film was released on December 13, 2002, by Columbia Pictures and was a box office success, grossing $154 million against its $55 million budget, while receiving mixed reviews.
Thought 2: Maid in Manhattan is a romantic comedy film about a Senatorial candidate falling for a hotel maid, thinking she is a socialite, when he sees her trying on a wealthy woman's dress.
Action 2: Finish[Maid in Manhattan]
From: https://langchain.readthedocs.io/en/latest/modules/agents/getting_started.html
Here's the sample notebook for the wikipedia agent: https://langchain.readthedocs.io/en/latest/modules/agents/implementations/react.html
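The Thought → Action → Observation trace above boils down to a simple loop. Here's a minimal sketch of that loop; `llm_decide` and the `search` tool are hypothetical stubs, whereas in LangChain the real AgentExecutor handles the prompting and output parsing:

```python
# Sketch of a ReAct-style agent loop, assuming stub implementations of
# the LLM policy (`llm_decide`) and the search tool. Not LangChain's API.

def search(query: str) -> str:
    # Stub tool: a real agent would query Wikipedia here.
    kb = {
        "romantic comedy senatorial candidate hotel maid":
            "Maid in Manhattan is a 2002 American romantic comedy-drama film."
    }
    return kb.get(query, "No results.")

def llm_decide(question: str, observations: list[str]) -> tuple[str, str]:
    # Stub policy: search first, then finish once we have an observation.
    if not observations:
        return ("Search", "romantic comedy senatorial candidate hotel maid")
    return ("Finish", observations[-1].split(" is ")[0])

def run_agent(question: str, max_steps: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_steps):
        action, arg = llm_decide(question, observations)
        if action == "Finish":
            return arg                     # return the answer to the user
        observations.append(search(arg))   # use a tool, observe its output
    return "Gave up."
```

Each iteration mirrors one numbered Thought/Action/Observation block in the log, with the LLM choosing the next action based on everything observed so far.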
f10101 t1_j7kea8m wrote
I always find it curious that a lot of these "godfathers of AI" seem to be a bit like this. It gets draining to listen to them, as they have a tendency to reframe any debate or definition just so they can be right.
aicharades OP t1_j7ke5cl wrote
Reply to comment by Old-Radish1611 in [P] ChatGPT without size limits: upload any pdf and apply any prompt to it by aicharades
Try mine out at www.wrotescan.com! You can use the site for free if you pay for the API call by providing a temporary OpenAI key. I wanted to share the tech with a demo. Remember to delete the key after its temporary use.
When you sign up for OpenAI, you get $18 of free credits.
You can also build it locally using LangChain.
Old-Radish1611 t1_j7kdwo5 wrote
This is the second tool I've seen for this today, the first being https://www.chatbase.co/. The creator there hasn't revealed the inner workings like you have, though. I wonder how they compare?
perspectiveiskey t1_j7kd6ee wrote
Reply to [Project] I used a new ML algo called "AnimeSR" to restore the Cowboy Bebop movie and up rez it to full 4K. Here's a link to the end result - honestly think it looks amazing! (Video and Model link in post) by VR_Angel
This is why AI was created. I think we can call it now.
Jokes aside, thank you for doing this. It looks fantastic.
Appropriate_Fish_451 t1_j7kcwef wrote
Reply to [D] Should I focus on python or C++? by NoSleep19
TI Basic is the language of the future.
Followed closely by Pascal.
And Sanskrit, Latin, Ancient Mayan.
aicharades OP t1_j7kcj3i wrote
Reply to comment by mamaBiskothu in [P] ChatGPT without size limits: upload any pdf and apply any prompt to it by aicharades
This is a demo of an open source library that allows you to build your own ChatGPT with the Completions API. Map reduce allows for memory and agent-based simulation over much larger context windows.
https://langchain.readthedocs.io/en/latest/modules/memory/examples/chatgpt_clone.html
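The core idea in that notebook, building a chat loop with memory on top of a Completions-style API, can be sketched in a few lines. This is not the LangChain implementation; `complete` is a hypothetical stub for a real Completions call:

```python
# Sketch of a ChatGPT-style clone over a Completions-style API,
# assuming a stub `complete` function in place of a real API client.

def complete(prompt: str) -> str:
    # Stub "LLM": echoes the last human line back, uppercased.
    last = prompt.rstrip().splitlines()[-1]
    return last.replace("Human: ", "").upper()

class ChatClone:
    def __init__(self) -> None:
        self.history: list[str] = []  # the conversation "memory"

    def say(self, message: str) -> str:
        # Every turn, the full history is stuffed back into the prompt.
        # This is why long conversations eventually blow the context
        # window, and where map-reduce-style summarization comes in.
        self.history.append(f"Human: {message}")
        reply = complete("\n".join(self.history))
        self.history.append(f"AI: {reply}")
        return reply
```

The "memory" here is just the accumulated transcript; LangChain wraps the same idea in its memory classes.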
2blazen t1_j7kc4t6 wrote
Definitely Python; that's what all the major companies support too. However, it's not the bytecode cache that makes a difference; it's that the machine learning libraries are written in C++, so you're not sacrificing performance by scripting in Python.
These kinds of questions are more suitable for r/learndatascience, though.
techie0007 t1_j7kc2si wrote
mamaBiskothu t1_j7kbrwm wrote
Wait, why do you keep calling it ChatGPT if it's not using ChatGPT? In an ML sub, to boot?
Randomramman t1_j7kb7qb wrote
Reply to comment by currentscurrents in Wouldn’t it be a good idea to bring a more energy efficient language into the ML world to reduce the insane costs a bit?[D] by thedarklord176
LOL spit my coffee out
Ill-Poet-3298 t1_j7kap8n wrote
Reply to comment by user4517proton in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
Google is afraid to kill their ad business, so they're letting others pass them by. Classic business mistake. There are apparently a lot of Google stans going around telling everyone how Google invented AI, etc., but it really looks like they got caught flat-footed on this one.
[deleted] t1_j7kan77 wrote
Nhabls t1_j7kaa5a wrote
Reply to comment by techie0007 in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
ChatGPT hasn't really "shipped" either. It's out for free because they feel hemorrhaging millions per month is an okay cost for the research and PR they're getting out of it. It's not viable in the slightest.
Nhabls t1_j7ka1k3 wrote
Reply to comment by bballerkt7 in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
OpenAI will not offer it for free either.
[deleted] t1_j7k8m9i wrote
Reply to comment by ReasonablyBadass in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
[removed]
Mescallan t1_j7k8i30 wrote
Reply to comment by thiseye in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
ChatGPT isn't actually free right now; everyone just gets $18 of credits, which is far more than anyone would actually use in ChatGPT, but if you're fine-tuning or analyzing bigger datasets you can burn through it pretty quickly.
visarga t1_j7kg6qu wrote
Reply to comment by junetwentyfirst2020 in Does the high dimensionality of AI systems that model the real world tell us something about the abstract space of ideas? [D] by Frumpagumpus
Architecture and model are much more intertwined in brains.