Recent comments in /f/Futurology

Alternative_Log3012 t1_j9qoqm7 wrote

Machine learning researchers and engineers understand the structure of their models; what they don't know is the value of each individual weight (there can be millions or more across the simulated neurons), because those values are found by a training process (which is itself fully known to the creator or team of creators) run over known data.
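To make the point concrete, here is a minimal sketch (not from the comment; the model, data, and learning rate are all assumed for illustration). The developer writes the structure, a single simulated neuron `y = w*x + b`, and a known training loop, but the final weight values are produced by the optimization, not typed in by hand:

```python
import random

random.seed(0)

# The developer chooses this structure: one neuron, y = w*x + b.
# The weight values start random and are *found* by training.
w, b = random.random(), random.random()

# Known training data: targets follow y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10)]

lr = 0.01
for _ in range(2000):
    for x, y in data:
        pred = w * x + b
        err = pred - y
        # Gradient-descent updates: the process is fully specified,
        # but nobody hand-picks the resulting values of w and b.
        w -= lr * err * x
        b -= lr * err
```

After training, `w` and `b` sit near 2 and 1: the values were recovered from the data by the process, which is exactly the distinction the comment draws between knowing the structure and knowing each weight.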

The above process can literally be carried out by a sufficiently complex calculator and is in no way deserving of ‘rights’.

1

ActuatorMaterial2846 t1_j9qma3y wrote

I'm more convinced that we may never create an AI with sentience. An AI will likely always mimic it though.

However, I do think an AGI and ASI are inevitable. Sentience isn't required for such things to exist.

Such intelligence just has to be similar to the AlphaGo or AlphaFold models, except capable of performing all human cognitive tasks at that level or higher, and it needs to be able to operate autonomously.

There are organisms in the world that behave like this; they aren't intelligent as we usually define it, or in some cases even alive, but they are still incredibly complex, autonomous and adaptable.

1

CommentToBeDeleted t1_j9qluo8 wrote

>There isn’t any possibility of true consciousness from a computer.

Imagine admitting we don't know what consciousness is and yet still being absolutely certain that you can distinguish when something is or is not conscious. As if applying the qualifier "true" changes anything about that. You want to know what drivel looks like? There you go...


>Actually assigning rights to a computer itself shows a poor understanding of what a computer is…

Really depends on what your definition of a computer is here. If you mean a calculator, phone or desktop, then sure, I would grant you that. But to assume you have any idea how the "black box" inside machine learning models works demonstrates your gross misunderstanding of the topic at hand.

The actual people who build these "machines" do not fully understand the logic behind much of the decision-making. That's the entire reason we use machine learning in the first place.
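Part of why nobody "reads" the learned logic is sheer scale. A rough sketch (the layer sizes below are assumed, roughly an MNIST-sized classifier, not anything from the comment) of how fast the weight count grows in even a toy network:

```python
# Parameter count of a small multilayer perceptron:
# each layer contributes (inputs x outputs) weights plus one bias per output.
layers = [784, 512, 512, 10]  # assumed layer sizes for illustration
params = sum(a * b + b for a, b in zip(layers, layers[1:]))
print(params)
```

This toy network already has over half a million individually trained numbers; production models have billions. Inspecting them one by one tells you essentially nothing about why a given decision was made.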


It's crazy just how little humility people show on this subject. My entire argument is that we don't know enough and need to understand this better, yet people somehow have the hubris to think the problem is already solved.

−2

BIGELLLOW t1_j9qlcx1 wrote

It's not that it's not understood, but that it's not fully understood. For instance, you can know enough about gravity to regularly predict the path of a thrown ball, or to figure out how much thrust is needed for orbit, without fully knowing how gravity is "communicated" over the vastness of space.

Enough is known about quantum entanglement for us to build computers using the phenomenon, even if there are still plenty of things about quantum physics we don't fully comprehend.

1

Alternative_Log3012 t1_j9qkndj wrote

None of this (absolute drivel) is a good argument for giving robots ‘rights’.

There isn’t any possibility of true consciousness from a computer.

At most, if robots are created somewhat anthropomorphically, regulate how humans interact with them publicly so as not to outrage common decency (i.e., not make other humans uncomfortable).

Actually assigning rights to a computer itself shows a poor understanding of what a computer is…

3