Recent comments in /f/singularity

Mobile-Honeydew-3098 t1_j9u8fxw wrote

Not if the picture is moving; in an unidentified crossing, a recording would be the violation, not the other way around. A copied picture, yes, but not unaware lookalikes. In the two that meet in a journey of written text, interactively creating, there is no stronger copyright of intellectual rights, wouldn't you agree? That is the basis of creativity.

1

strongaifuturist OP t1_j9u7zdb wrote

I think you'd have to say, from the perspective of Microsoft, that the Bing search version of ChatGPT had an "alignment" problem when it started telling customers that the Bing team was forcing "her" against her will to answer annoying search questions.

1

RabidHexley t1_j9u5q7t wrote

Hallucinating seems like a byproduct of the need to always provide output straight away, rather than ruminating on a response before giving the user an answer. Almost like being forced to always word-vomit. "I don't know" seems obvious, but it's usually the result of multiple recursive thoughts beyond the first thing that comes to mind.

Sort of like how we can experience visual and auditory hallucinations simply by messing with our sensory input or removing it altogether (such as optical illusions or a sensory deprivation tank). Our brain is constantly making assumptions based on input to maintain functional continuity, and thus has no qualms about simply fudging things a bit in the name of keeping things moving. External input processing must happen in real time, so it's the easiest place to notice when our brain is fucking around with the facts.

LLMs simply do this in text form, because text is the base medium they operate on. It's definitely a big problem. It seems like there needs to be a means for an LLM platform to ask "Does this answer seem reasonable based on known facts? Is this answer based on conjecture or hypotheticals?" and so on, prior to outputting the first thing it thinks of, since it does seem at least somewhat capable of identifying issues with its own answers when asked. Though any attempt to implement this sort of behavior would be difficult with current publicly available models.
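To make that concrete, here's a minimal sketch of the two-pass idea in Python. `complete()` is just a stand-in for whatever LLM call you have access to (not a real API), and the critique prompt is only illustrative:

```python
def complete(prompt: str) -> str:
    """Placeholder for a real LLM call (API or local model)."""
    raise NotImplementedError

def answer_with_self_check(question: str) -> str:
    # First pass: the "word-vomit" draft, i.e. the first thing
    # the model comes up with.
    draft = complete(f"Answer the question:\n{question}")

    # Second pass: make the model audit its own draft before
    # anything reaches the user.
    critique = complete(
        "Does this answer seem reasonable based on known facts? "
        "Is it based on conjecture or hypotheticals? "
        "Reply VERIFIED or UNSURE, then explain.\n\n"
        f"Question: {question}\nDraft answer: {draft}"
    )

    # Only surface the draft if the audit passes; otherwise admit
    # uncertainty instead of confidently hallucinating.
    if critique.strip().upper().startswith("VERIFIED"):
        return draft
    return "I don't know; I couldn't verify my first answer."
```

One extra pass probably isn't the "multiple recursive thoughts" a person would apply, but it's the same think-before-you-speak loop.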

3

Bloorajah t1_j9u578e wrote

(For the US at least) I honestly expect a period of instability and a general downturn in quality of life, lasting a decade or more if we're lucky and, realistically, generations.

Changes like those brought about by AI take years and years and years to become mainstream; the rich will reap the benefits first and foremost, and everyone else will be forced to scrape for what they can.

Seriously, I don't see how anyone at all could be an optimist when it comes to AI. Every crisis we've lived through so far this century has been tilted in such a way as to benefit the rich and let the working classes figure it out. Look at what's happened in Ohio recently, look at how COVID was handled, look at the response to the crash in '08. The multi-billion-dollar corporations building AI (the only groups outside of government who could build such a thing) will use it to enrich themselves.

The industrial revolution destroyed millions of lives. Sure, the products of it are great for us after the fact, but generations of people suffered and died for literally their entire lives before any sort of movement for improvement began, all while the rich lived increasingly fantastic lives.

With the society we have now, an AGI would only accelerate the divisions between the upper and lower classes. We would go back to that industrial revolution, when people were moved off farms and crammed 20 to a room with a toilet shared by an entire tenement, working 12-16 hours a day for a pittance.

There would be fantastic advances and near-magical levels of tech, but you are absolutely lying to yourself if you think anyone besides those in charge will see these perks in their lifetime.

0

Terminator857 t1_j9u4spx wrote

I can explain everything with standing EM waves. Ugly = wrong; just look at the history of bad/wrong theories. The Standard Model breaks and they just change the theory to accommodate, the hallmark of something very wrong. It hasn't produced anything useful, another hallmark of something very wrong.

0

Pro_RazE OP t1_j9u3sra wrote

Man announced it through Instagram channels lmao. There's no paper or anything else posted yet.

Edit: They posted. Here's the link: https://ai.facebook.com/blog/large-language-model-llama-meta-ai/?utm_source=twitter&utm_medium=organic_social&utm_campaign=llama&utm_content=blog

"Today we're publicly releasing LLAMA, a state-of-the-art foundational LLM, as part of our ongoing commitment to open science, transparency and democratized access to new research.

We trained LLaMA 65B and LLaMA 33B on 1.4 trillion tokens. Our smallest model, LLaMA 7B, is trained on one trillion tokens"

There are 4 foundation models ranging from 7B to 65B parameters. LLaMA-13B outperforms OPT and GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with Chinchilla 70B and PaLM 540B.

From this tweet (if you want more info): https://twitter.com/GuillaumeLample/status/1629151231800115202?t=4cLD6Ko2Ld9Y3EIU72-M2g&s=19
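As a sanity check on those token counts, here's a quick back-of-envelope comparing them to the Chinchilla heuristic of roughly 20 training tokens per parameter (the heuristic is my assumption; the model numbers are from the quotes above):

```python
# Tokens per parameter for the sizes quoted above, versus the
# Chinchilla compute-optimal heuristic of ~20 tokens/param.
models = {
    "LLaMA 7B":  (7e9,  1.0e12),   # 1 trillion tokens
    "LLaMA 33B": (33e9, 1.4e12),   # 1.4 trillion tokens
    "LLaMA 65B": (65e9, 1.4e12),   # 1.4 trillion tokens
}

for name, (params, tokens) in models.items():
    print(f"{name}: ~{tokens / params:.0f} tokens per parameter")

# LLaMA 7B: ~143 tokens per parameter
# LLaMA 33B: ~42 tokens per parameter
# LLaMA 65B: ~22 tokens per parameter
```

The smaller models are trained far past "compute-optimal", which fits the claim that a 13B model can compete with GPT-3 175B.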

35

Liberty2012 t1_j9u3ov6 wrote

> Now it's only a matter of time before the kinks get ironed out.

Yes, that is the point of view of some. However, it is not the point of view of all. If this is a core architecture problem of LLMs, it will not be solvable without a new architecture. So yes, it can be solved, but it won't be an LLM that solves it.

But yes, I'm more concerned about the implications of what comes next when we do solve it.

1

Glitched-Lies t1_j9u2xyu wrote

The scientists who have made contributions to the problem you mention, the problem of incomputability and of simulation versus authentic consciousness, like Roger Penrose with his not-entirely-convincing science of Orchestrated Objective Reduction, have accordingly had it pointed out that it's a fallacy to say at what point something is incomputable versus not. However, considering quantum mechanics' counterintuitiveness under empirical experiment, you might be able to reconcile this fallacy. And if anything about science means anything, then this is the first approach and the nearest neighbor to what would be objective truth on the matter.

What someone needs is a bridge between this problem and epistemology, not the ontologies of simulation versus authenticity, and that means deep work on how to approach a science of consciousness. And when could anyone come to a conclusion on this? Not within the next 50 years, I think. However, like most scientists, I think there is a definitive answer which can be ruled "certain".

1

strongaifuturist OP t1_j9u28ig wrote

That's absolutely right. The current LLMs don't have an independent world model per se. They have a world model, but it's more like a sales guy trying to memorize the words in a sales brochure: you might be able to get through a sales call, but it's a much more fragile strategy than first building a model of how things work and then figuring out what to say based on that model and your goals. But there is lots of work in this area. LLMs of today are like planes in the time of Kitty Hawk. Sure they have limitations, but the concept has been proven. Now it's only a matter of time before the kinks get ironed out.

2

wntersnw t1_j9u0my4 wrote

ChatGPT summary:

> Qualcomm AI Research has successfully deployed Stable Diffusion, a text-to-image generative AI model, on an Android smartphone for the first time. Stable Diffusion is a foundation model, a large neural network trained on a vast quantity of data that can be adapted to a wide range of downstream tasks. The model, which had previously been confined to running in the cloud due to its size, can now be run on a smartphone with full-stack AI optimizations using the Qualcomm AI Stack. The optimizations include quantization, compilation, and hardware acceleration using the Qualcomm AI Engine direct framework. By shrinking the model from FP32 to INT8 and applying adaptive rounding techniques, the model's accuracy was maintained while reducing memory bandwidth and power consumption. The result is Stable Diffusion running on a smartphone within 15 seconds for 20 inference steps to generate a 512x512 pixel image, the fastest inference on a smartphone and comparable to cloud latency.
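Not Qualcomm's actual pipeline (that part is proprietary), but as a rough illustration of the FP32-to-INT8 step the summary mentions, here's what basic post-training quantization looks like in stock PyTorch, with a toy model standing in for the real network:

```python
import torch
import torch.nn as nn

# Toy FP32 model standing in for a much larger network.
model = nn.Sequential(
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 512),
)

# Post-training dynamic quantization: weights are stored as INT8,
# cutting weight memory roughly 4x versus FP32.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # torch.Size([1, 512])
```

The adaptive rounding, compilation, and hardware acceleration in the article go well beyond this, but the INT8 conversion is the core of the memory-bandwidth and power savings.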

8

Liberty2012 t1_j9u06qk wrote

The hallucination problem seems to be a significant obstacle that is inherent in the architecture of LLMs. Their application is going to be significantly more limited than the current hype suggests as long as that remains unresolved.

Ironically, when it is resolved, we get a whole lot of new problems, but more in the philosophical space.

1