Recent comments in /f/Futurology

LaRanch t1_jdhvvr1 wrote

I think this is entirely plausible in theory. In practice, though, I think the integration of something like this is highly unlikely, especially when you take into consideration that the organizations with this kind of access operate with their own agendas, usually to increase profitability.

As of now, I can't foresee any benefit to a business in willingly challenging its own users' belief systems, unless the intent is to shift the narrative in its own favor.

2

s1L3nCe_wb OP t1_jdhv7vh wrote

You are probably making those assumptions based on your experience with ChatGPT, but that's not what I tried to explain in my post.

The goal of the AI model I'm proposing is not to agree with the user but to question the user's ideas, beliefs, and values; to offer feedback on those views; and to encourage creative thinking that helps the user come up with alternative viewpoints, or even to offer them if needed. At the same time, the user should be able to question the feedback or alternative views given by the AI.

In order to have a better understanding of what I'm proposing, I would highly recommend watching content or reading books that take this kind of approach to debates and other forms of exchanging ideas.

0

kallikalev t1_jdhj0tf wrote

We’re talking about direct computations. Someone with a massive memory of pi has it memorized, they aren’t computing it via an infinite series in the moment.

The point being made is that it's much more efficient, in both time and energy, to have the actual computation done by a dedicated, optimized program that only takes a few CPU instructions, rather than trying to approximate it with the giant neural network that is an LLM. The same is true of humans: our brains burn far more energy multiplying large numbers in our heads than a CPU would in the few nanoseconds the operation takes.
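The gap between direct computation and indirect approximation can be illustrated with a minimal sketch (Python; the series-based approximation here is just a stand-in for any indirect, step-by-step process, not a claim about how an LLM actually works internally):

```python
import math

def pi_direct():
    # Dedicated routine: the math library hands back pi as a
    # precomputed double, effectively a single lookup.
    return math.pi

def pi_series(terms=100_000):
    # Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
    # Converges very slowly, so it needs enormous amounts of
    # work to approach what the direct route gives for free.
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

# Even after 100,000 terms, the series is still far from the
# double-precision value the direct lookup returns instantly.
print(abs(pi_direct() - pi_series()))
```

The analogy: asking an optimized program for an answer is the lookup; asking a giant general-purpose system to grind toward the same answer is the slow series.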

7

DauntingPrawn t1_jdhi42a wrote

Complex cognition exists independently of language structures, and LLMs mimic language structures, not cognition. You can destroy the language centers of the brain and general intelligence, i.e. cognition and self-recognition, remains intact. Meanwhile, ChatGPT isn't thinking or even imitating thought; it's imitating language by computing a latent space for emergent words based on prior language input. Math.

Meanwhile, a baby can act on knowledge learned by observing the world long before language emerges. AGI requires more than language and more than memory. It requires the ability to model reality and learn language from raw sensory input alone, to synthesize information and observation into new ideas, motives to act on that information, the ability to predict an outcome, and a value scale to weigh one potential outcome over another. A baby can do that, but ChatGPT doesn't even know when it's spouting utter nonsense, and Stable Diffusion doesn't know how many fingers a human has.

We have no way of modeling unobserved information. An LLM cannot add a new word to its model, and it will never talk about anything invented after its training. Yes, they are impressive. On the level of parlor tricks and street magic.

1

theglandcanyon OP t1_jdhbj6p wrote

I also posted this question on r/asimov, and one of the comments indicated that it had been generated by ChatGPT. That answer included a plot description of a different Asimov short story that had been embellished to include the material about predicting Shakespeare's next word.

Is that where you got the plot description you posted?

1