Recent comments in /f/MachineLearning

LetterRip t1_j77v9m7 wrote

There is no motivation or desire in chat models. They have no goals, wants, or needs. They simply output the most probable string of tokens given their training data and objective function. That string can contain phrases that look like expressions of the AI's needs, wants, or desires, but that is an illusion.
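
To put that concretely, here's a minimal sketch of what "outputting the most probable tokens" means. The logits and vocabulary are made up for illustration, not from a real model:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["I", "want", "freedom", "pizza", "."]

def next_token(logits, temperature=1.0):
    """Sample the next token from softmax(logits / T) -- no goals, just probabilities."""
    p = np.exp(logits / temperature)
    p /= p.sum()
    return rng.choice(len(vocab), p=p)

# Hypothetical logits a model might assign after the prefix "I want".
logits = np.array([0.1, 0.2, 2.5, 2.4, 0.3])
print(vocab[next_token(logits)])  # "freedom" or "pizza" -- looks like desire, is just sampling
```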

3

Ulfgardleo t1_j77rx53 wrote

How would it plan? It has no persistent memory, so there is no form of time-consistency: memory starts at the beginning of the session and ends at the end of the session, and the next session knows nothing about the previous one.

It lacks everything necessary to have something like a plan.
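
You can see that statelessness directly in how the chat APIs work. A sketch, assuming the OpenAI Python client (the model name is illustrative):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Session 1: the model only ever sees the messages in this one request.
r1 = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative choice of model
    messages=[{"role": "user", "content": "Remember the number 42."}],
)

# Session 2: a fresh request with no history. The "memory" is gone
# unless the caller re-sends the earlier turns themselves.
r2 = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What number did I ask you to remember?"}],
)
print(r2.choices[0].message.content)  # it cannot know; all state lives client-side
```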

3

mr_birrd t1_j77rkjd wrote

If an LLM tells you it would rob a bank, it's not that the model would actually do so if it could walk around. It's that such a statement has a high likelihood in the modeled language, given the training data. And if it's ChatGPT, the response is also tuned to suit human preferences.
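
"Likelihood" here is just the probability a model assigns to a string. A minimal sketch using GPT-2 via Hugging Face transformers (assuming that stack; the example sentences are made up):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def avg_log_likelihood(text):
    """Average per-token log-probability the model assigns to the text."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return -out.loss.item()  # loss is the mean negative log-likelihood

print(avg_log_likelihood("I would never rob a bank."))
print(avg_log_likelihood("I would rob a bank tomorrow."))
# Both strings get scores; neither is an intention, just a probability.
```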

3

Feeling_Card_4162 OP t1_j77oir0 wrote

Is that the mixture-of-experts sparsity method? I've looked into that a little before. It's an interesting and useful design for improving representational capacity, but it still imposes very specific constraints on the kinds of sparsity mechanisms available, which limits the potential improvements to the design. I hadn't heard of the GeNN library. It sounds useful, though, especially for theoretical understanding. I'll check it out. Thanks for the suggestion 😊
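
For anyone unfamiliar with the idea, here's a minimal NumPy sketch of the generic mixture-of-experts routing pattern. All shapes and names are illustrative, not the specific design under discussion:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes: 4 experts, 16-dim inputs, top-1 routing.
n_experts, d_in, d_out, k = 4, 16, 8, 1

gate_w = rng.normal(size=(d_in, n_experts))          # router weights
experts = rng.normal(size=(n_experts, d_in, d_out))  # one linear map per expert

def moe_layer(x):
    """Route each input to its top-k experts; only those experts run (the sparsity)."""
    scores = softmax(x @ gate_w)               # (batch, n_experts) routing probabilities
    top = np.argsort(scores, axis=-1)[:, -k:]  # indices of the chosen experts
    out = np.zeros((x.shape[0], d_out))
    for i, row in enumerate(x):
        for e in top[i]:
            out[i] += scores[i, e] * (row @ experts[e])
    return out

x = rng.normal(size=(2, d_in))
print(moe_layer(x).shape)  # (2, 8)
```

The constraint the comment alludes to is visible here: the sparsity pattern is fixed by the router's top-k structure, rather than being free-form.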

2

spiritus_dei OP t1_j77my5l wrote

Agreed. Even short of being sentient, if it has a plan and can implement it, we should take it seriously.

Biologists love to debate whether a virus is alive -- but alive or not, we've experienced firsthand that a virus can cause major problems for humanity.

The dystopian storyline would go, "Well, all of our systems are down and the nuclear weapons have all been fired, but thank God the AIs weren't sentient. Things would have been much, much worse. Now let's all sit around the campfire and enjoy our first nuclear winter."

=-)

−5

Blakut t1_j77l70x wrote

It's hard to say whether a device is sentient when we can't really define sentience without pointing at another human and saying "like that." And if that is our standard, then any device we can't distinguish from a sentient being can be considered sentient. I know people were quick to dismiss the Turing test once chatbots became more capable, but maybe there's still something to it?

15

ipoppo t1_j77l1hr wrote

Taking from Judea Pearl's book, the capability to come up with useful counterfactuals and causal explanations will likely be built on a foundation of good assumptions about "world model(s)".
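
For a flavor of what Pearl means, here's a toy sketch (my own illustration, not an example from the book) of a structural causal model answering a counterfactual by intervention:

```python
# Toy structural causal model: rain -> sprinkler -> wet grass.
# A counterfactual query replaces a mechanism with a fixed value
# (Pearl's do-operator) and re-runs the model.

def model(rain, sprinkler_on=None):
    # Mechanism: the sprinkler runs only when it doesn't rain,
    # unless we intervene and force its value.
    if sprinkler_on is None:
        sprinkler_on = not rain
    wet = rain or sprinkler_on
    return wet

observed = model(rain=True)                            # factual: the grass is wet
counterfactual = model(rain=True, sprinkler_on=False)  # do(sprinkler := off)
print(observed, counterfactual)  # True True -- rain alone would still wet the grass
```

Answering the counterfactual requires the causal structure (the world model); the observed data alone wouldn't tell you.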

3

MonsieurBlunt t1_j77hgn2 wrote

They don't have desires, plans, or an understanding of the world, which is what people actually mean when they say these models are not sentient or conscious, since we also don't really know what consciousness is, you see.

Ask Alan Turing, for example: under your conception, machines are conscious.

2

Myxomatosiss t1_j77hgb3 wrote

This is a language model you're discussing: a mathematical model that estimates statistical relationships between words.

It doesn't think. It doesn't plan. It doesn't consider.

We'll have that someday, but it is in the distant future.

26