Recent comments in /f/MachineLearning
[deleted] t1_j8zkvsu wrote
Reply to comment by casino_alcohol in [D] Simple Questions Thread by AutoModerator
Great idea! It should be easy with some PyTorch
currentscurrents t1_j8zi84t wrote
Reply to comment by tornado28 in [D] What are the worst ethical considerations of large language models? by BronzeArcher
>Disruptive applications will take jobs. Customer service, content creation, journalism, and software engineering are all fields that may lose jobs as a result of large language models.
I don't wanna work though. I'm all for having robots do it.
currentscurrents t1_j8zh4aa wrote
Reply to comment by prehensile_dick in [D] What are the worst ethical considerations of large language models? by BronzeArcher
>scraping all kinds of copyrighted materials and then profiting off the models while the people doing all the labor are getting either nothing (for content generation)
Yeah, but these people won't be doing that labor anymore. Now that text-to-image models have learned how to draw, they don't need a constant stream of artists feeding them new art.
Artists can now work at a higher level, creating ideas that they render into images using the AI as a tool. They'll be able to create much larger and more complex projects, like a solo indie artist creating an entire anime.
>LLMs... barely have any legitimate use-cases
Well, one big use case: they make image generators possible. Those rely on embeddings from language models, which are a sort of neural representation of the ideas behind the text. That grants the other network the ability to work with plain English.
Right now embeddings are mostly used to guide generation (across many fields, not just images) and for semantic search. But they are useful for communicating with a neural network performing any task, and my guess is that the long-term impact of LLMs will be that computers simply understand plain English.
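A minimal sketch of the semantic-search idea described above. Real systems would use a language model's encoder output as the embedding; here a hypothetical bag-of-words vector stands in for it, and all function names are illustrative, not from any particular library:

```python
import numpy as np

def build_vocab(texts):
    """Map every word across the corpus to a fixed index."""
    words = sorted({w for t in texts for w in t.lower().split()})
    return {w: i for i, w in enumerate(words)}

def toy_embed(text, vocab):
    """Toy bag-of-words 'embedding', normalized to unit length.
    A real system would use a language model's encoder output instead."""
    vec = np.zeros(len(vocab))
    for word in text.lower().split():
        if word in vocab:
            vec[vocab[word]] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def semantic_search(query, docs):
    """Rank documents by cosine similarity to the query embedding."""
    vocab = build_vocab(docs + [query])
    q = toy_embed(query, vocab)
    scored = [(float(q @ toy_embed(d, vocab)), d) for d in docs]
    return sorted(scored, reverse=True)

docs = ["a photo of a cat", "stock market report", "a photo of a dog"]
print(semantic_search("cat photo", docs)[0][1])  # best match: "a photo of a cat"
```

With a real language model in place of `toy_embed`, the same ranking loop matches by meaning rather than by exact word overlap.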
zbyte64 t1_j8zfbi0 wrote
Write a bot to handle all HR complaints and train it on the latest managerial materials. Then, as a bonus, the bot will look at all the conversations and propose metrics for increased efficiency and harmony in the workplace.
athos45678 t1_j8zewjb wrote
Reply to comment by kau_mad in [N] Google is increasing the price of every Colab Pro tier by 10X! Pro is 95 Euro and Pro+ is 433 Euro per month! Without notifying users! by FreePenalties
It’s 29 cents a gig per month over the storage limit, and I rarely go over the limit if I am carefully managing files. Definitely the biggest drawback, though. You can always just use wkentaro’s gdown package to pull from Google Drive as well
Diligent_Ad_9060 t1_j8zdoh6 wrote
Reply to comment by prehensile_dick in [D] What are the worst ethical considerations of large language models? by BronzeArcher
Thank you for sharing. I'll have a look
prehensile_dick t1_j8zdhy9 wrote
Reply to comment by Diligent_Ad_9060 in [D] What are the worst ethical considerations of large language models? by BronzeArcher
Not specifically about that suit, but the Legal Eagle episode about copyright and AI was really interesting. The relevant part starts at 5:03
baffo32 t1_j8zd3ge wrote
Reply to comment by drinkingsomuchcoffee in [D] HuggingFace considered harmful to the community. /rant by drinkingsomuchcoffee
You’re not the bad guy, I’m guessing maybe it’s a community of data workers who’ve never had a reason to value DRY.
tornado28 t1_j8zcrc2 wrote
People will use them to make money in unethical and disruptive ways. An example of an unethical way to use them is phishing scams. Instead of sending out the same phishing email to thousands of people, scammers may get some data about people and then use the language model to write personalized phishing emails that have a much higher success rate.
Disruptive applications will take jobs. Customer service, content creation, journalism, and software engineering are all fields that may lose jobs as a result of large language models.
The other disruptive possibility is that LLMs will be able to rapidly build more powerful LLMs themselves. I use GitHub Copilot every day and it's already very good at writing code. It takes at least 25% off the time it takes me to complete a software implementation task. So it's very possible an LLM could, in the near future, make improvements to its own training script and use it to train an even more powerful LLM. This could lead to a singularity with extremely rapid technological development. It's not clear to me what the fate of humankind would be in that case.
Diligent_Ad_9060 t1_j8zc02u wrote
Reply to comment by prehensile_dick in [D] What are the worst ethical considerations of large language models? by BronzeArcher
I'd be very interested in hearing from someone with more insight into the Free Software Foundation and their case against Copilot
baffo32 t1_j8zbuup wrote
Reply to comment by baffo32 in [D] HuggingFace considered harmful to the community. /rant by drinkingsomuchcoffee
I think by "centralized" they mean what they imagine DRY looking like: putting code in one place rather than spreading it out. That's not how the term is usually used, but it's a reasonable expression; people usually centralize components so there is one organized place to go to in order to access them.
baffo32 t1_j8zbmua wrote
Reply to comment by hpstring in [D] HuggingFace considered harmful to the community. /rant by drinkingsomuchcoffee
DRY is a very basic software engineering principle: include only one copy of every sequence of code. It looks like machine learning people did not learn this, as they weren't trained as software engineers. DRY stands for "don't repeat yourself", and when it is not respected, software becomes harder and slower to maintain, improve, or bugfix as it grows larger and older.
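A small hypothetical illustration of the principle: the same logic written twice, then factored into one shared function (the names here are made up for the example):

```python
# Before: the same scaling logic is copy-pasted into two functions,
# so a bug fix in one copy is easy to miss in the other.
def normalize_train(xs):
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def normalize_test(xs):
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

# After: one copy of the logic, reused everywhere (DRY).
def normalize(xs):
    """Scale values to [0, 1]; the single shared implementation."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

print(normalize([2, 4, 6]))  # -> [0.0, 0.5, 1.0]
```

Any change to the scaling behavior now happens in exactly one place.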
danielfm123 t1_j8zb3m1 wrote
Reply to [N] Google is increasing the price of every Colab Pro tier by 10X! Pro is 95 Euro and Pro+ is 433 Euro per month! Without notifying users! by FreePenalties
A preconfigured environment has a cost; learn Linux and do it yourself.
prehensile_dick t1_j8zan8s wrote
Reply to comment by BronzeArcher in [D] What are the worst ethical considerations of large language models? by BronzeArcher
I feel like the ethical issues pertaining to bias and toxic content can be (and are being) worked on. The collection of training data and the attribution problem seem more intractable, and companies are already being sued over them.
buzzbuzzimafuzz t1_j8zafoo wrote
The mess that has been Bing Chat/Sydney, but instead of just verbally threatening users, it's connected to APIs that let it take arbitrary actions on the internet.
I really don't want to see what happens if you connect a deranged language model like Sydney to a competent version of Adept AI's action transformer and let it use a web browser.
drinkingsomuchcoffee OP t1_j8zael9 wrote
Reply to comment by baffo32 in [D] HuggingFace considered harmful to the community. /rant by drinkingsomuchcoffee
I am the "bad guy" of the thread, so anything I say will be seen negatively, even if it's correct. This is typical human behavior, unfortunately.
I have a feeling most people here do not understand DRY done well, and are used to confusing inheritance hierarchies and incredibly deep function chains. Essentially they have conflated DRY with bad code, simple as that.
BronzeArcher OP t1_j8z83z8 wrote
Reply to comment by prehensile_dick in [D] What are the worst ethical considerations of large language models? by BronzeArcher
These feel like the most standard topics. Valuable, nonetheless.
BronzeArcher OP t1_j8z7yuo wrote
Reply to comment by mocny-chlapik in [D] What are the worst ethical considerations of large language models? by BronzeArcher
As in they wouldn’t interpret it responsibly? What exactly is the concern related to them not understanding?
[deleted] t1_j8zlk6k wrote
Reply to comment by Tyson1405 in [D] Simple Questions Thread by AutoModerator
Google Colab gives you fast hardware for free. I trained YOLO in a few minutes