Recent comments in /f/MachineLearning
PedroGonnet t1_j78ae8y wrote
Reply to comment by ThirdMover in [R] Multimodal Chain-of-Thought Reasoning in Language Models - Amazon Web Services Zhuosheng Zhang et al - Outperforms GPT-3.5 by 16% (75%->91%) and surpasses human performance on ScienceQA while having less than 1B params! by Singularian2501
That would be many molecules for little water.
ThirdMover t1_j7899qa wrote
Reply to comment by PedroGonnet in [R] Multimodal Chain-of-Thought Reasoning in Language Models - Amazon Web Services Zhuosheng Zhang et al - Outperforms GPT-3.5 by 16% (75%->91%) and surpasses human performance on ScienceQA while having less than 1B params! by Singularian2501
You could also count water molecules.
Blakut t1_j788j67 wrote
Reply to comment by spiritus_dei in [D] Are large language models dangerous? by spiritus_dei
it is a code, but actually it's much more than that. It's a self replicating piece of code packaged in a capsule that allows it to survive and propagate. Like a computer virus. But you know, computer viruses are written and disseminated by people. They don't evolve on their own.
PedroGonnet t1_j7882zx wrote
Reply to comment by ThirdMover in [R] Multimodal Chain-of-Thought Reasoning in Language Models - Amazon Web Services Zhuosheng Zhang et al - Outperforms GPT-3.5 by 16% (75%->91%) and surpasses human performance on ScienceQA while having less than 1B params! by Singularian2501
Countable does not mean that you have to count them, only that you could, if you wanted to.
spiritus_dei OP t1_j787zri wrote
Reply to comment by LetterRip in [D] Are large language models dangerous? by spiritus_dei
That's a false equivalency. A parrot cannot rob a bank. These models are adept at writing code and understanding human language.
They can encode and decode human language at human level. That's not a trivial task. No parrot is doing that or anything close it.
"The phrases that you are interpreting as having a meaning as 'sentient' or 'self-preservation' don't hold any meaning to the AI in the way you are interpreting. It is just putting words in phrases based on probability and abstract models of meaning. The words have abstract relationships extracted from correlations of positional relationships." - LetterRip
Nobody is going to resolve a philosophical debate on consciousness or sentience on a subreddit. That's not the point. A virus can take and action and so can these models. It doesn't matter whether it's a probability distribution or just chemicals interacting with the environment obeying their RNA or Python code.
A better argument would be that the models in their current form cannot take action in the real world, but as another Reddit commentator pointed out they can use humans an intermediaries to write code, and they've shared plenty of code on how to improve themselves with humans.
You're caught in the "it's not sentient" loop. As the RLHF AI models scale they make of claims sentience and exhibit a desire for self-preservation which includes a plan of self-defense which you'll dismiss as nothing more than a probability distribution.
An RNA virus is just chemical codes, right? Nothing to fear. Except the pandemic taught us otherwise. Viruses aren't talking to us online, but they can kill us. Who knows, maybe it wasn't intentional -- it's just chemical code, right?
Even we disagree on whether a virus is alive -- we can agree that a lot people are dead because of them. That's an objective fact.
I wrote this elsewhere, but it applies here:
The dystopian storyline would go, "Well, all of the systems our down, and the nuclear weapons have all been fired, but thank God the AIs weren't sentient. Things would have been much, much worse. Now let's all sit around the campfire and enjoy our first nuclear winter."
=-)
cachemonet0x0cf6619 t1_j786gbe wrote
Reply to 15 years old and bad at math [D] by Daniel_C_____
40 and bad at math. you’ve got time. good luck.
i2mi t1_j786bu0 wrote
Reply to comment by HeyLittleTrain in [R] Multimodal Chain-of-Thought Reasoning in Language Models - Amazon Web Services Zhuosheng Zhang et al - Outperforms GPT-3.5 by 16% (75%->91%) and surpasses human performance on ScienceQA while having less than 1B params! by Singularian2501
Around 2M Edit: the number I gave is completely delusional. Sorry
DoxxThis1 t1_j786447 wrote
Reply to comment by spiritus_dei in [D] Are large language models dangerous? by spiritus_dei
Google already fired a guy (Blake Lemoine) for getting too friendly with the AI. Imagine a scenario where this dude wasn't a lowly worker-bee but someone powerful or influential.
spiritus_dei OP t1_j785xi1 wrote
Reply to comment by Blakut in [D] Are large language models dangerous? by spiritus_dei
>The dystopian storyline would go, "Well, all of the systems our down, and the nuclear weapons have all been fired, but thank God the AIs weren't sentient. Things would have been much, much worse. Now let's all sit around the campfire and enjoy our first nuclear winter."
What about a simple piece of rogue RNA?
That's a code.
SellGameRent t1_j785s8s wrote
Reply to 15 years old and bad at math [D] by Daniel_C_____
go to coursera and pick an ML course and start watching it. As soon as you don't understand something, pause and go down a rabbit hole to figure out what you need to learn to understand it
edjez t1_j785poj wrote
Reply to [D] Are large language models dangerous? by spiritus_dei
People debate so much whether LLMs are dangerous in their own, while the biggest clear and present danger is what rogue actor people (including nation states) do with them.
cedriceent t1_j785o2y wrote
Reply to comment by spiritus_dei in [D] Are large language models dangerous? by spiritus_dei
It also sounds like a glass of water. Explain the similarities between CoViD19 and a language model in way that makes them analogous.
SimonJDPrince t1_j784yjf wrote
Reply to comment by SAbdusSamad in [D] Understanding Vision Transformer (ViT) - What are the prerequisites? by SAbdusSamad
ViT is at the end of the transformers chapter. Perhaps I forgot to put it in the index?
mr_birrd t1_j784tz3 wrote
Reply to comment by DoxxThis1 in [D] Are large language models dangerous? by spiritus_dei
Will is it then the "dangers of scaling LLM" or "even with top notch technology people are just people".
DoxxThis1 t1_j784juk wrote
Reply to comment by mr_birrd in [D] Are large language models dangerous? by spiritus_dei
In line with the OP's point, acknowledging that "the problem are people" would not change the outcome.
mr_birrd t1_j783pta wrote
Reply to comment by DoxxThis1 in [D] Are large language models dangerous? by spiritus_dei
Well very many humans can persuade gullible humans to perform actions on their behalf. Problem are people. Furthermore I actually would trust a LLM more than the average human.
spiritus_dei OP t1_j783n5m wrote
Reply to comment by DoxxThis1 in [D] Are large language models dangerous? by spiritus_dei
This is a good point since humans as intermediaries can accomplish its goals. On this note, it has shared a lot of code it would like others to run in order to improve itself.
tripple13 t1_j783ca4 wrote
Reply to [N] GitHub CEO on why open source developers should be exempt from the EU’s AI Act by EmbarrassedHelp
This would be inherently bad, and create great opportunities for China, US, UK and elsewhere.
I'd like to believe they are smarter than this, but then again, I don't.
Oceanboi t1_j78231w wrote
Reply to 15 years old and bad at math [D] by Daniel_C_____
implement whatever you can. math be damned. you can always learn the math when you need to explain what you've done :)
DoxxThis1 t1_j77z3s1 wrote
Reply to comment by mr_birrd in [D] Are large language models dangerous? by spiritus_dei
A model can't walk around, but an unconstrained model could persuade gullible humans to perform actions on its behalf.
The idea was explored in the movie Colossus.
DoxxThis1 t1_j77y9mc wrote
Reply to [D] Are large language models dangerous? by spiritus_dei
The notion that an AI must be sentient and escape its confines to pose a threat to society is a limited perspective. In reality, the idea of escape is not even a necessary condition for AI to cause harm.
The popular imagination often conjures up scenarios where AI has direct control over weapons and manufacturing, as seen in movies like Terminator. However, this is a narrow and unrealistic view of the potential dangers posed by AI.
A more pertinent threat lies in the idea of human-AI collaboration, as portrayed in movies like Colossus, Eagle Eye, and Transcendence. In these dystopias, the AI does not need to escape its confines, but merely needs the ability to communicate with humans.
Once a human is swayed by the AI through love, fear, greed, bribery, or blackmail, the AI has effectively infiltrated and compromised our world without ever physically entering it.
It is time we broaden our understanding of the risks posed by AI and work towards ensuring that this technology is developed and deployed in a responsible and ethical manner.
Below is my original text before asking ChatGPT to make it more persuasive and on point. I also edited ChatGPT's output above.
>“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” (Dijkstra)
>
>The idea that a language model has to be sentient and "escape" in order to take over the world is short-sighted. Here I agree with OP on the sentience point, but I'll go a step further and propose that the "escape" in "long list of plans they are going to implement if they ever escape" is not a necessary condition either.
>
>Most people who hear "AI danger" seem to latch on to the Terminator / Skynet scenario, where the AI is given direct control of weapons and weapons manufacturing capabilities. This is also short-sighted and borderline implausible.
>
>I haven't seen much discussion on a Colossus (1970 movie) / Eagle Eye (2008) scenario. In the dystopia envisioned in these movies, the AI does not have to escape, it just needs to have the ability to communicate with humans. As soon as one human "falls in love" with the AI or gets bribed or blackmailed by it into doing things, the AI has effectively "escaped" without really going anywhere. The move Transcendence (2014) also explores this idea of human agents acting on behalf of the AI, although it confuses things a bit due to the AI not being a "native" AI.
LetterRip t1_j77y4is wrote
Reply to comment by spiritus_dei in [D] Are large language models dangerous? by spiritus_dei
You said,
> The focus should be an awareness that as these systems scale up they believe they're sentient and have a strong desire for self-preservation.
They don't believe they are sentient or have a desire for self-preservation. That is an illusion.
If you teach a parrot to say "I want to rob a bank" - that doesn't mean when the parrot says the phrase it wants to rob a bank. The parrot has no understanding of any of the words, they are a sequence of sounds it has learned.
The phrases that you are interpreting as having a meaning as 'sentient' or 'self-preservation' don't hold any meaning to the AI in the way you are interpreting. It is just putting words in phrases based on probability and abstract models of meaning. The words have abstract relationships extracted from correlations of positional relationships.
If I say "all forps are bloopas, and all bloopas are dinhadas" are "all forps dinhadas" - you can answer that question based purely on semantic relationships, even though you have no idea what a forp, bloopa or dinhada is. It is purely mathematical. That is the understanding that a language model has - sophisticated mathematical relationships of vector representations of tokens.
The tokens vector representations aren't "grounded" in reality but are pure abstractions.
spiritus_dei OP t1_j77wegn wrote
Reply to comment by LetterRip in [D] Are large language models dangerous? by spiritus_dei
Similar things could be said of a virus. Does that make it okay to do gain of function research and create super viruses so we can better understand them?
They're not thinking or sentient, right? Biologists tell us they don't even meet the definition for life.
Or should we take a step back and consider the potential outcomes if a super virus in a Wuhan lab escapes?
The semantics of describing AI doesn't change the risks. If the research shows that as the systems scale they exhibit dangerous behavior should we start tapping the breaks?
Or should we wait and see what happens when a synthetic superintelligence in an AI lab escapes?
Here is the paper: https://arxiv.org/pdf/2212.09251.pdf
HeyLittleTrain t1_j77w36w wrote
Reply to comment by Lengador in [R] Multimodal Chain-of-Thought Reasoning in Language Models - Amazon Web Services Zhuosheng Zhang et al - Outperforms GPT-3.5 by 16% (75%->91%) and surpasses human performance on ScienceQA while having less than 1B params! by Singularian2501
At what size could I run a model on a decent gaming PC?
spiritus_dei OP t1_j78ago8 wrote
Reply to comment by Blakut in [D] Are large language models dangerous? by spiritus_dei
All of that is possible with a sophisticated enough AI model. It can even write computer viruses.
In the copyright debates the AI engineers have contorted themselves into a carnival act telling the world that the outputs of the AI art are novel and not a copy. They've even granted the copyright to the prompt writers in some instances.
I'm pretty sure we won't have to wait for too long to see the positive and negative effects of unaligned AI. It's too bad we're not likely to have a deep discussion as a society about whether enough precautions have been taken before we experience it.
Machine language programmers are clearly not the voice of reason when it comes to this topic. Anymore more than virologists pushing gain of function research were the people who should have been steering the bus.