Recent comments in /f/MachineLearning
terath t1_j77unze wrote
Reply to comment by EmbarrassedHelp in [N] GitHub CEO on why open source developers should be exempt from the EU’s AI Act by EmbarrassedHelp
Why can't they just block the EU ip address blocks and put a disclaimer that this is not authorized for download in the EU?
spiritus_dei OP t1_j77u2ic wrote
Reply to comment by sarabjeet_singh in [D] Are large language models dangerous? by spiritus_dei
That might be why RLHF (reinforcement learning from human feedback) is ultimately doomed to fail.
sarabjeet_singh t1_j77t9zi wrote
Reply to [D] Are large language models dangerous? by spiritus_dei
In the end, this technology is going to be a reflection of human history. That’s not a pretty thought. They’re literally modelled on us.
Ulfgardleo t1_j77rx53 wrote
Reply to comment by spiritus_dei in [D] Are large language models dangerous? by spiritus_dei
How should it plan? It does not have persistent memory, so it has no form of time consistency. Its memory starts at the beginning of a session and ends at the end of that session; the next session knows nothing about the previous one.

It lacks everything necessary to have something like a plan.
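The statelessness described above can be sketched with a toy chat loop (the `Session` class and its reply string are hypothetical, not any real API): context lives only inside a session, and a new session starts from nothing.

```python
class Session:
    """Toy stand-in for a chat session: context exists only inside the session."""

    def __init__(self):
        self.context = []  # every new session starts with an empty context

    def send(self, message):
        self.context.append(message)
        # a real model would condition its reply on self.context here
        return f"reply conditioned on {len(self.context)} message(s)"


s1 = Session()
s1.send("my name is Alice")
print(s1.send("what is my name?"))   # reply conditioned on 2 message(s)

s2 = Session()                       # fresh session: nothing carries over
print(s2.send("what is my name?"))   # reply conditioned on 1 message(s)
```

Nothing in `s2` can see what was said in `s1`, which is the sense in which the model cannot hold a plan across sessions.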
mr_birrd t1_j77rkjd wrote
Reply to [D] Are large language models dangerous? by spiritus_dei
If an LLM tells you it would rob a bank, that doesn't mean the model would actually do it if it could walk around. It's just what a statement with high likelihood in the modelled language, given the specific data, looks like. And if it's ChatGPT, the response is also tailored to suit human preferences.
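The "high likelihood" point can be illustrated with a toy bigram model (the probabilities below are invented, purely for illustration): the model only scores token sequences, so "i would rob a bank" getting a decent score implies nothing about intent.

```python
import math

# toy bigram probabilities P(next | current); all values are made up
bigram = {
    ("i", "would"): 0.4,
    ("would", "rob"): 0.05,
    ("rob", "a"): 0.9,
    ("a", "bank"): 0.3,
}

def log_likelihood(tokens):
    """Sum log P(next | current) over adjacent token pairs; unseen pairs get a tiny floor."""
    return sum(math.log(bigram.get(pair, 1e-9))
               for pair in zip(tokens, tokens[1:]))

score = log_likelihood(["i", "would", "rob", "a", "bank"])
print(score)  # just a likelihood score over strings, not a statement of intent
```

A real LLM does the same thing with a neural network instead of a lookup table, but the output is still a score over strings, not a decision to act.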
Ulfgardleo t1_j77ribp wrote
Reply to comment by spiritus_dei in [D] Are large language models dangerous? by spiritus_dei
A virus acts on its own. It has mechanisms to interact with the real world.
visheratin t1_j77qzs9 wrote
Reply to 15 years old and bad at math [D] by Daniel_C_____
ninjawick t1_j77qshr wrote
Reply to [D] Are large language models dangerous? by spiritus_dei
It doesn't have control, that's the answer you are looking for.
[deleted] t1_j77q426 wrote
Reply to 15 years old and bad at math [D] by Daniel_C_____
[removed]
Feeling_Card_4162 OP t1_j77oir0 wrote
Reply to comment by dancingnightly in [R] Topologically evolving new self-modifying multi-task learning algorithms by Feeling_Card_4162
Is that the mixture of experts sparsity method? I’ve looked into that a little bit before. It was an interesting and useful design for improving representational capacity but still imposes very specific constraints on the type of sparsity mechanisms available and thus limits the potential improvements to the design. I haven’t heard about the GeNN library. It sounds useful though, especially for theoretical understanding. I’ll check it out. Thanks for the suggestion 😊
[deleted] t1_j77obu6 wrote
Reply to [N] Microsoft integrates GPT 3.5 into Teams by bikeskata
[removed]
Blakut t1_j77o7gg wrote
Reply to comment by spiritus_dei in [D] Are large language models dangerous? by spiritus_dei
I don't think a simple piece of code can be dangerous, and probably not a lot of systems will be integrated with AI anytime soon. The problem is that the piece of code in the hands of humans can become dangerous.
spiritus_dei OP t1_j77my5l wrote
Reply to comment by Blakut in [D] Are large language models dangerous? by spiritus_dei
Agreed. Even short of being sentient, if it has a plan and can implement it, we should take it seriously.
Biologists love to debate whether a virus is alive -- but alive or not we've experienced firsthand that a virus can cause major problems for humanity.
The dystopian storyline would go, "Well, all of the systems are down, and the nuclear weapons have all been fired, but thank God the AIs weren't sentient. Things would have been much, much worse. Now let's all sit around the campfire and enjoy our first nuclear winter."
=-)
[deleted] t1_j77m4g6 wrote
Blakut t1_j77l70x wrote
Reply to [D] Are large language models dangerous? by spiritus_dei
It is hard to say if a device is sentient when we can't really define sentience without pointing at another human and going "like that". And if that is our standard, then any device we can't distinguish from a sentient being can be considered sentient. I know people were quick to dismiss the Turing test when chatbots became more capable, but maybe there's still something to it?
ipoppo t1_j77l1hr wrote
Reply to comment by ThirdMover in [R] Multimodal Chain-of-Thought Reasoning in Language Models - Amazon Web Services Zhuosheng Zhang et al - Outperforms GPT-3.5 by 16% (75%->91%) and surpasses human performance on ScienceQA while having less than 1B params! by Singularian2501
Taking from Judea Pearl's book, the capability of coming up with useful counterfactuals and causal relationships will likely be built upon a foundation of good assumptions about "world model(s)".
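Pearl's distinction between observing and intervening can be sketched with a tiny structural causal model (the variables and mechanisms below are invented for illustration): a counterfactual query overrides one mechanism with a fixed value, which is what the do() operator does.

```python
def scm(u_rain, do_rain=None):
    """Tiny structural causal model: rain -> sprinkler -> wet grass.
    u_rain is the exogenous input; do_rain, if given, is the intervention do(rain=...)."""
    rain = u_rain if do_rain is None else do_rain
    sprinkler = 0 if rain else 1            # sprinkler runs only when it doesn't rain
    wet = 1 if (rain or sprinkler) else 0   # grass ends up wet either way here
    return rain, sprinkler, wet

# observation: it rained, so the sprinkler stayed off, grass is wet
print(scm(u_rain=1))             # (1, 0, 1)
# counterfactual: same world, but force the rain off with do(rain=0)
print(scm(u_rain=1, do_rain=0))  # (0, 1, 1)
```

Note the grass is wet in both worlds, for different causal reasons; answering that kind of "what if" is exactly the capability that depends on having a world model rather than just surface correlations.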
spiritus_dei OP t1_j77kkcu wrote
Reply to comment by Myxomatosiss in [D] Are large language models dangerous? by spiritus_dei
Sounds a lot like COVID-19. Was that dangerous?
po-handz t1_j77hp58 wrote
Reply to comment by EmbarrassedHelp in [N] GitHub CEO on why open source developers should be exempt from the EU’s AI Act by EmbarrassedHelp
Yeah that's why Europe sucks. Hasn't been a competitive place for innovation since the 1800s
MonsieurBlunt t1_j77hgn2 wrote
Reply to [D] Are large language models dangerous? by spiritus_dei
They don't have desires, plans, or an understanding of the world, which is what is actually meant when people say they are not sentient or conscious, because we also don't really know what consciousness is, you see.
For example, if you ask Alan Turing, machines are conscious under your conception.
Myxomatosiss t1_j77hgb3 wrote
Reply to [D] Are large language models dangerous? by spiritus_dei
This is a language model you're discussing. It's a mathematical model that calculates correlations between words.
It doesn't think. It doesn't plan. It doesn't consider.
We'll have that someday, but it is in the distant future.
[deleted] t1_j77f0f2 wrote
Reply to comment by MjrK in [N] Microsoft integrates GPT 3.5 into Teams by bikeskata
[removed]
[deleted] t1_j77e3ku wrote
CatalyzeX_code_bot t1_j77cq5i wrote
Reply to [D] Are large language models dangerous? by spiritus_dei
Found relevant code at https://github.com/anthropics/evals + all code implementations here
--
To opt out from receiving code links, DM me
based_goats t1_j77cejz wrote
Reply to comment by jimmymvp in [D] Normalizing Flows in 2023? by wellfriedbeans
Here's one using GANs, so not using an explicit likelihood: https://arxiv.org/abs/2203.06481
Here's a workshop paper applying score-based models: https://arxiv.org/abs/2209.14249
LetterRip t1_j77v9m7 wrote
Reply to [D] Are large language models dangerous? by spiritus_dei
There is no motivation or desire in chat models. They have no goals, wants, or needs. They are simply outputting the most probable string of tokens consistent with their training and objective function. That string can contain phrases that look like they express needs, wants, or desires of the AI, but that is an illusion.
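The "most probable string of tokens" idea corresponds roughly to greedy decoding, sketched here over a toy vocabulary (the tokens and probabilities are invented): each step just takes the argmax next token, with no goal anywhere in the loop.

```python
# toy next-token distributions keyed by the previous token (all numbers made up)
next_probs = {
    "<s>": {"i": 0.5, "the": 0.3, "we": 0.2},
    "i": {"want": 0.4, "am": 0.35, "think": 0.25},
    "want": {"help": 0.6, "food": 0.4},
}

def greedy_decode(start="<s>", max_len=5):
    """Emit the highest-probability next token at each step until no continuation exists."""
    out, token = [], start
    for _ in range(max_len):
        dist = next_probs.get(token)
        if dist is None:
            break
        token = max(dist, key=dist.get)
        out.append(token)
    return out

print(greedy_decode())  # ['i', 'want', 'help']
```

The output contains "want", yet nothing here wants anything; the word appears only because it maximizes the next-token probability, which is the illusion described above.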