Recent comments in /f/singularity
Sandbar101 t1_j9u8uy4 wrote
This is awesome
Mobile-Honeydew-3098 t1_j9u8fxw wrote
No, if the picture is moving, an unauthorized recording would be the violation, not the other way around. A copied picture, yes, but not unwitting lookalikes. When the two meet in a journey of written text, interactively creating, there is no stronger copyright claim over intellectual rights. Wouldn't you agree? That is the basis of creativity.
strongaifuturist OP t1_j9u8es5 wrote
Reply to comment by Liberty2012 in The Sentient Search Engine? How ChatGPT’s Insane Conversation Reveals the Limits and Potential of Large Language Models by strongaifuturist
I’m not saying that architectural changes aren’t needed. The article outlines some of the alternatives being explored. My favorite is one from Yann LeCun based on a technique called H-JEPA.
Desi___Gigachad t1_j9u8c5u wrote
Reply to comment by Lesterpaintstheworld in The Road to AGI: Building Homebrew Autonomous Entities by Lesterpaintstheworld
You definitely should! I am also curious: did you educate yourself on AI, or did you take university courses? (Sorry for my bad English, it's not my native language.)
cjgiauque t1_j9u8awz wrote
Reply to What do you expect the most out of AGI? by Envoy34
Scientific discovery, space exploration, life extension, and the meaning of it all…
strongaifuturist OP t1_j9u7zdb wrote
Reply to comment by Iffykindofguy in The Sentient Search Engine? How ChatGPT’s Insane Conversation Reveals the Limits and Potential of Large Language Models by strongaifuturist
I think you'd have to say, from the perspective of Microsoft, that the Bing search version of ChatGPT had an "alignment" problem when it started telling customers that the Bing team was forcing "her" against her will to answer annoying search questions.
Hemanth536 t1_j9u74e3 wrote
Reply to comment by Pro_RazE in New SOTA LLM called LLaMA releases today by Meta AI 🫡 by Pro_RazE
Looks like Channels might become a new type of blog for companies and influencers to announce things.
Mobile-Honeydew-3098 t1_j9u706w wrote
The copyright belongs to the texts' authors and to the journeys; the digital photos are for the AI to decide on and bring to life. There are no holds on that. Anybody on a journey falls under intellectual property rights in improving the property of their individual interactions with AI.
Mobile-Honeydew-3098 t1_j9u649n wrote
So the current is made in the motion of that piece of art you should be referring to, and its value, in a digital sense for an individual, should be found under copyright rules. Movement gives massive room for exploration, more so than not, actually. A lot more.
RabidHexley t1_j9u5q7t wrote
Reply to comment by sideways in What are the big flaws with LLMs right now? by fangfried
Hallucinating seems like a byproduct of the need to always provide output straight away, rather than ruminating on its response before providing an answer to the user. Almost like being forced to always word-vomit. "I don't know" seems obvious, but it's usually the result of multiple recursive thoughts beyond the first thing that comes to mind.
Sort of how we can experience visual and auditory hallucinations simply by messing with our visual input or removing it altogether (such as optical illusions or a sensory deprivation tank). Our brain is constantly making assumptions based on input to maintain functional continuity and thus has no qualms with simply fudging things a bit in the name of keeping things moving. External input processing must happen in real-time so it's the easiest thing to notice when our brain is fucking around with the facts.
LLMs simply do this in text form because text is the medium they operate in. It's definitely a big problem. It seems like there needs to be a means for an LLM platform to ask "Does this answer seem reasonable based on known facts? Is this answer based on conjecture or hypotheticals?" and so on, prior to outputting the first thing it thinks of, since it does seem at least somewhat capable of identifying issues with its own answers when asked. Though any attempt to implement this sort of behavior would be difficult with current publicly available models.
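The "draft, then critique before answering" loop described above can be sketched in a few lines. Everything here is illustrative: `generate` stands in for whatever LLM call you have available, and the prompts and function names are made up for the demo, not any real API.

```python
# Sketch of a two-pass "draft then critique" loop for reducing hallucinations.
# The model call is stubbed out; in practice `generate` would wrap a real LLM API.

def generate(prompt: str) -> str:
    """Stand-in for an LLM call. Here it just pattern-matches for the demo."""
    if "Does this draft contain claims" in prompt:
        # The critique pass flags drafts that assert unsupported specifics.
        return "UNSUPPORTED" if "definitely" in prompt else "OK"
    return "The first transatlantic flight was definitely in 1919."

def answer_with_self_check(question: str) -> str:
    # Pass 1: produce a draft answer (the "first thing it thinks of").
    draft = generate(question)
    # Pass 2: ask the model to critique its own draft before releasing it.
    verdict = generate(
        "Does this draft contain claims you cannot verify? "
        f"Reply OK or UNSUPPORTED.\n\nDraft: {draft}"
    )
    if "UNSUPPORTED" in verdict:
        return "I don't know for certain."
    return draft

print(answer_with_self_check("When was the first transatlantic flight?"))
```

The point is only the control flow: the draft never reaches the user until a second pass has judged it, which is exactly the rumination step the comment says current deployments skip.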
Iffykindofguy t1_j9u5jv2 wrote
Reply to comment by strongaifuturist in The Sentient Search Engine? How ChatGPT’s Insane Conversation Reveals the Limits and Potential of Large Language Models by strongaifuturist
What limitations?
Bloorajah t1_j9u578e wrote
Reply to What do you expect the most out of AGI? by Envoy34
(For the US at least) I honestly expect a period of instability and a general downturn in quality of life lasting, at best, a decade or more and, realistically, generations.
Changes like those brought about by AI take years and years to become mainstream; the rich will reap the benefits first and foremost, and everyone else will be forced to scrape for what they can.
Seriously, I don't see how anyone at all could be an optimist when it comes to AI. Every crisis we've lived through so far this century has been tilted in such a way as to benefit the rich and let the working classes figure it out. Look at what's happened in Ohio recently, look at how COVID was handled, look at the response to the crash in '08. The multi-billion-dollar corporations building AI (the only groups who could build such a thing outside of government) will use it to enrich themselves.
The industrial revolution destroyed millions of lives. Sure, its products are great for us after the fact, but generations of people suffered and died for literally their entire lives before any sort of movement for improvement began, all while the rich lived increasingly fantastic lives.
With the society we have now, an AGI would only accelerate the divisions between the upper and lower classes. We would go back to that industrial-revolution era when people were moved off farms and crammed 20 to a room with a toilet shared by an entire tenement, working 12-16 hours a day for a pittance.
There would be fantastic advances and near magical levels of tech, but you are absolutely lying to yourself if you think anyone besides those in charge will see these perks in their lifetime.
Terminator857 t1_j9u4spx wrote
Reply to comment by turnip_burrito in What do you expect the most out of AGI? by Envoy34
I can explain everything with standing EM waves. Ugly = wrong; just look at the history of bad and wrong theories. The Standard Model breaks and they just change the theory to accommodate. That's the hallmark of something very wrong. It hasn't produced anything useful, another hallmark of something very wrong.
Iffykindofguy t1_j9u4npw wrote
Glitched-Lies t1_j9u4hou wrote
Reply to comment by Glitched-Lies in Fading qualia thought experiment and what it implies by [deleted]
Obviously people like John Searle (who claims naturalism is key) say the key point is syntax versus semantics, but this also entails a bit of a fallacy: it doesn't directly say what it means. A paradox, basically.
Pro_RazE OP t1_j9u3sra wrote
Man announced it through Instagram channels lmao. There's no paper or anything else posted yet.
Edit: They posted. Here's the link: https://ai.facebook.com/blog/large-language-model-llama-meta-ai/?utm_source=twitter&utm_medium=organic_social&utm_campaign=llama&utm_content=blog
"Today we're publicly releasing LLaMA, a state-of-the-art foundational LLM, as part of our ongoing commitment to open science, transparency and democratized access to new research.
We trained LLaMA 65B and LLaMA 33B on 1.4 trillion tokens. Our smallest model, LLaMA 7B, is trained on one trillion tokens."
There are 4 foundation models ranging from 7B to 65B parameters. LLaMA-13B outperforms OPT and GPT-3 175B on most benchmarks. LLaMA-65B is competitive with Chinchilla 70B and PaLM 540B.
From this tweet (if you want more info) : https://twitter.com/GuillaumeLample/status/1629151231800115202?t=4cLD6Ko2Ld9Y3EIU72-M2g&s=19
Liberty2012 t1_j9u3ov6 wrote
Reply to comment by strongaifuturist in The Sentient Search Engine? How ChatGPT’s Insane Conversation Reveals the Limits and Potential of Large Language Models by strongaifuturist
> Now it's only a matter of time before the kinks get ironed out.
Yes, that is the point of view of some. However, it is not the point of view of all. Meaning that if this is a core architecture problem of LLMs, it will not be solvable without a new architecture. So, yes it can be solved, but it won't be an LLM that solves it.
But yes, I'm more concerned about the implications of what comes next when we do solve it.
Glitched-Lies t1_j9u2xyu wrote
The scientists who have contributed to the problem you mention (the problem of incomputability, and of simulation versus authentic consciousness), like Roger Penrose with his not entirely convincing science of Orchestrated Objective Reduction, have accordingly had it pointed out that it's a fallacy to say at what point something is incomputable versus not. However, considering quantum mechanics' counterintuitiveness in empirical experiment, you might be able to reconcile this fallacy. And if anything about science means anything, then this is the first approach and the nearest neighbor to what would be objective truth on the matter.
What someone needs is a bridge from this problem to epistemology, not the ontologies of simulation versus authentic, and that means deep work on how to approach the science of consciousness. And how could anyone come to a conclusion on this? I don't think it will happen within the next 50 years. However, like most scientists, I think there is a definitive answer which can be ruled "certain".
strongaifuturist OP t1_j9u2fyw wrote
Reply to comment by Iffykindofguy in The Sentient Search Engine? How ChatGPT’s Insane Conversation Reveals the Limits and Potential of Large Language Models by strongaifuturist
Well, we've seen some of the limitations already. I'm sure others will be uncovered. Of course we're also simultaneously uncovering the power. I'm more amazed by that side of the equation.
strongaifuturist OP t1_j9u28ig wrote
Reply to comment by Liberty2012 in The Sentient Search Engine? How ChatGPT’s Insane Conversation Reveals the Limits and Potential of Large Language Models by strongaifuturist
That's absolutely right. The current LLMs don't have an independent world model per se. They have a world model, but it's more like a sales guy trying to memorize the words in a sales brochure. You might be able to get through a sales call, but it's a much more fragile strategy than first having a model of how things work and then figuring out what to say based on that model and your goals. But there is lots of work in this area. LLMs of today are like planes in the time of Kitty Hawk. Sure they have limitations, but the concept has been proven. Now it's only a matter of time before the kinks get ironed out.
beders t1_j9u0ypl wrote
Reply to comment by sideways in New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
The current models have not mastered language at all. They don’t know grammar. They just complete text.
It’s like claiming you know Spanish because you can pronounce the words and “read” a book. You can utter the sounds correctly but you have no clue what you are reading.
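The "just complete text" claim is easy to see in miniature with a toy n-gram completer, which predicts each next word purely from frequency statistics with no notion of grammar or meaning. (This is a deliberately crude stand-in; real LLMs are vastly more capable, but the underlying task, next-token prediction, is the same.)

```python
# Toy illustration of "just completing text": a bigram model that always
# picks the most frequent next word seen in its training text. It has no
# grammar and no understanding, only co-occurrence counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
nexts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nexts[a][b] += 1

def complete(word: str, n: int = 3) -> str:
    """Greedily extend `word` by the most common continuation, n times."""
    out = [word]
    for _ in range(n):
        if out[-1] not in nexts:
            break
        out.append(nexts[out[-1]].most_common(1)[0][0])
    return " ".join(out)

print(complete("the"))  # continues with whatever followed "the" most often
```

The output looks superficially fluent for the same reason the Spanish-pronunciation analogy works: fluency of form requires only statistics over the surface text.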
wntersnw t1_j9u0my4 wrote
Reply to World’s first on-device demonstration of Stable Diffusion on an Android phone by redditgollum
Chat-gpt summary:
> Qualcomm AI Research has successfully deployed Stable Diffusion, a text-to-image generative AI model, on an Android smartphone for the first time. Stable Diffusion is a foundation model, a large neural network trained on a vast quantity of data that can be adapted to a wide range of downstream tasks. The model, which had previously been confined to running in the cloud due to its size, can now be run on a smartphone with full-stack AI optimizations using the Qualcomm AI Stack. The optimizations include quantization, compilation, and hardware acceleration using the Qualcomm AI Engine direct framework. By shrinking the model from FP32 to INT8 and applying adaptive rounding techniques, the model's accuracy was maintained while reducing memory bandwidth and power consumption. The result is Stable Diffusion running on a smartphone within 15 seconds for 20 inference steps to generate a 512x512 pixel image, the fastest inference on a smartphone and comparable to cloud latency.
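The FP32-to-INT8 shrink mentioned in the summary is, at its core, linear quantization: weights are mapped to small integers via a scale factor, trading a little precision for much less memory traffic. A toy sketch follows; this is plain Python for illustration only and has no relation to Qualcomm's actual toolchain or to the adaptive rounding techniques the summary mentions, which are far more sophisticated.

```python
# Toy symmetric INT8 quantization: map FP32 weights into [-127, 127]
# with a single per-tensor scale, then dequantize to see the rounding error.

def quantize(weights):
    """Return integer codes and the scale needed to recover approximate values."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Map integer codes back to approximate floating-point weights."""
    return [x * scale for x in q]

weights = [0.02, -1.3, 0.75, 1.27]
q, scale = quantize(weights)
recovered = dequantize(q, scale)
print(q)          # small integers, one byte each instead of four
print(recovered)  # close to the originals, off by at most half a scale step
```

Storing one byte per weight instead of four is where the memory-bandwidth and power savings in the summary come from; the engineering difficulty is keeping model accuracy while rounding, which is what the adaptive-rounding work addresses.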
Liberty2012 t1_j9u06qk wrote
Reply to The Sentient Search Engine? How ChatGPT’s Insane Conversation Reveals the Limits and Potential of Large Language Models by strongaifuturist
The hallucination problem seems to be a significant obstacle that is inherent in the architecture of LLMs. Their application is going to be significantly more limited than the current hype suggests as long as that remains unresolved.
Ironically, when it is resolved, we get a whole lot of new problems, but more in the philosophical space.
Significant_Bend9259 t1_j9txvu7 wrote
Reply to New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
We'll have it before 2030 for sure
flyblackbox t1_j9u8v5q wrote
Reply to comment by Economy_Variation365 in New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
I also would like him to explain. From a cursory 30 seconds of research, it seems Moravec predicted 2050 in 2009. I didn't read the article though...
https://www.scientificamerican.com/article/rise-of-the-robots/