Recent comments in /f/singularity
KyleG t1_j9w6d9i wrote
Reply to comment by Deadboy00 in US Copyright Office: You Can't Copyright Images Generated Using AI by vadhavaniyafaijan
Actually, independent creation is a defense against a copyright claim. That means if you can prove you used an AI to generate the art with your own prompt, you would win against someone suing you for infringement, because it's an independent creation. It's patent law where independent creation is not a defense.
[deleted] OP t1_j9w62zm wrote
Reply to comment by Glitched-Lies in Fading qualia thought experiment and what it implies by [deleted]
If the Penrose situation turns out to be correct, then attempting to replace quantum neurons with classical neurons will cause the system to crash.
KyleG t1_j9w6130 wrote
Reply to comment by qrayons in US Copyright Office: You Can't Copyright Images Generated Using AI by vadhavaniyafaijan
Actually independent creation doesn't violate copyright. That's a patent doctrine, which covers practical inventions, not creative expression.
MysteryInc152 t1_j9w5xvg wrote
Reply to comment by TinyBurbz in What are the big flaws with LLMs right now? by fangfried
Far as I know, they've just said it's a much better model than GPT-3.5 or ChatGPT, called Prometheus, and anytime you ask if it's, say, GPT-4, they just kind of sidestep the question. I know in an interview this year someone asked Satya if it was GPT-4 and he just said he'd leave the numbering to Sam. They're just being weirdly cryptic, I think.
blueSGL t1_j9w5qm1 wrote
Reply to comment by ActuatorMaterial2846 in Open AI officially talking about the coming AGI and superintelligence. by alfredo70000
it's the AI effect writ large
> "AI is anything that has not been done yet."
Lesterpaintstheworld OP t1_j9w529n wrote
Reply to comment by MrTacobeans in Building my own proto-AGI: Update on my progress by Lesterpaintstheworld
The engine used to generate tokens can be swapped out at any moment. Actually, I'm looking forward to being able to plug it into GPT-3.5/4. It could also be replaced by an open-source counterpart; I'm just not aware of any at the moment.
I think no one really knows where AGI will emerge from. But even having an agent that can be a helpful assistant, even without the "AGI" part, would be quite the success for me. Business applications are numerous.
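A minimal sketch of what that swappable engine could look like, assuming a common interface that the agent codes against (all names here are hypothetical, not from the actual project):

```python
from abc import ABC, abstractmethod

class TokenEngine(ABC):
    """Minimal interface so the generation backend can be swapped freely."""
    @abstractmethod
    def generate(self, prompt: str) -> str: ...

class StubEngine(TokenEngine):
    """Stand-in for a hosted model (e.g. GPT-3.5) or an open-source one."""
    def generate(self, prompt: str) -> str:
        return f"[stub completion for: {prompt}]"

def run_agent(engine: TokenEngine, prompt: str) -> str:
    # The agent logic depends only on the interface, not on any one API.
    return engine.generate(prompt)
```

With this shape, moving from one provider to another is a one-class change rather than a rewrite.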
[deleted] OP t1_j9w4hw3 wrote
[deleted]
Lesterpaintstheworld OP t1_j9w4h1h wrote
Reply to comment by DamienLasseur in Building my own proto-AGI: Update on my progress by Lesterpaintstheworld
Sure, feel free to reach out! No training required on my side, I'm only leveraging existing APIs. I haven't even needed fine-tuning yet, although that might come.
helpskinissues t1_j9w47cr wrote
This subreddit lacks imagination as well; it's mostly fanboying over OpenAI because ChatGPT was first, criticizing Google, Meta, etc., and saying "product over research!!!" all the time.
Fortunately some smart people wander these lands.
I am genuinely surprised that people are discovering AI now (in this community) when movies like Ex Machina or even Terminator were made ages ago.
Yes, AI will surpass humans. Yes, we will use technology to enhance our lives and intelligence. The human species without enhancements won't be productive in any job within a matter of decades.
Etc etc. It's all known.
But, for example, I don't see people in this subreddit acknowledging Waymo (or Cruise, or even Tesla's self-driving AI). Waymo is directly changing people's lives and replacing taxi jobs right now with an AI system able to drive as well as humans, and nobody gives a damn; we all talk about a chatbot that can write Edgar Allan Poe poems nobody cares about. Obviously biased.
Today's release of LLaMA is one of the most impressive feats of the last few months; we'll see the major effects of that event play out through the year.
sticky_symbols t1_j9w3kze wrote
Reply to comment by SurroundSwimming3494 in New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
The thing about ChatGPT is that everyone talked about it and tried it. I and most ML folks hadn't tried GPT-3.
Everyone I know was pretty shocked at how good GPT-3 is. It did change timelines for the folks I know, including the ones who think about timelines a lot as part of their jobs.
sticky_symbols t1_j9w3cav wrote
Reply to comment by DukkyDrake in New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
A lot of people who think about this a lot think it does. LLMs seem like they may play an important role in creating genuine general intelligence. But of course they would need many additions.
Shiyayori t1_j9w39q1 wrote
Reply to Hurtling Toward Extinction by MistakeNotOk6203
If an AGI has a suitable ability to extrapolate the results of its actions into the future, and we use some kind of reinforcement learning to train it on numerous contexts it should avoid, then it'll naturally develop an internal representation of all contexts it ought to avoid (which will only ever be as good as its training data and its ability to generalise across it). Anyway, it'll recognise when the results of its actions will naturally lead to a context that should be avoided.
I imagine this is similar to how humans do it, though it's a lot more vague with us. We match our experience to our internal model of what's wrong, create a metric for just how wrong it is, and then compare that metric against our goals and make a decision based on whether or not we believe the risk is too high.
I think the problem might mostly be in finding a balance between solidifying its current moral beliefs and keeping them liquid enough to change and optimise. Our brains are pretty similar in that they become more rigid over time, and stochastically decreasing techniques are often used in optimisation problems.
The solution might be in having a group of agents developing their own models, with a master model that compares each one's usefulness against the input data and their rates of stochastic decline.
Or maybe I’m just talking out my ass, who actually knows.
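The "predict futures, score them against an avoid-set, threshold the risk" idea sketched above can be made concrete as a toy, assuming discrete states and a known transition model (every name here is illustrative, not a real alignment technique):

```python
def choose_action(state, actions, transition, avoid, risk_threshold=0.5):
    """Pick the action whose predicted successor states best avoid 'bad' contexts.

    transition(state, action) -> list of possible next states
    avoid -> set of states the agent has learned it ought to steer clear of
    Returns None when every option is riskier than the threshold.
    """
    best_action, best_risk = None, float("inf")
    for action in actions:
        successors = transition(state, action)
        # Risk metric: fraction of predicted futures that land in an avoided context.
        risk = sum(s in avoid for s in successors) / len(successors)
        if risk < best_risk:
            best_action, best_risk = action, risk
    return best_action if best_risk <= risk_threshold else None
```

A real system would learn both the transition model and the avoid-set, and that learned avoid-set is exactly the part the comment notes is only as good as the training data.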
TheSecretAgenda t1_j9w37cn wrote
Reply to Hurtling Toward Extinction by MistakeNotOk6203
We had a good run, AI may be better for the planet in the long run. Maybe the AIs will keep us on a reservation like in Australia or something.
ActuatorMaterial2846 t1_j9w2p6b wrote
>Beware the snake oil. They have impressive ML (“Machine Learning”) models built/trained from content, algorithms, and neural networks. That is not “AI” and it is not “AGI”. Beware the snake oil. Remember what it actually is. Don’t fall for the hucksters and word games. twitter.com/cccalum/status…
These comments annoy me. Of course it's AI in every definition of the term.
When you see someone say this, they are simply a denialist refusing to look at objective reality. You could beat someone like this over the head with objective truth and they would deny it with each blow. I will never understand such closed-minded, dogmatic attitudes.
Mortal-Region t1_j9w2oaj wrote
Reply to comment by Cryptizard in Optimism in the Singularity in face of the Fermi-Paradox by [deleted]
But would it explain us being so early within the timeline of the first civilization?
Additional-Escape498 t1_j9w2ix6 wrote
Reply to comment by FpRhGf in What are the big flaws with LLMs right now? by fangfried
LLM tokenization uses wordpieces, not words or characters. This has been standard since the original "Attention Is All You Need" paper that introduced the transformer architecture in 2017. Vocabulary size is typically between 32k and 50k depending on the implementation; GPT-2 uses 50k. The vocabulary includes each individual character plus commonly used combinations of characters. Documentation: https://huggingface.co/docs/transformers/tokenizer_summary
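The wordpiece idea can be illustrated with a greedy longest-match-first split over a toy vocabulary. This is only a sketch: real tokenizers (GPT-2's byte-level BPE, BERT's WordPiece) learn their vocabularies from data rather than using a hand-written one.

```python
def wordpiece_tokenize(word, vocab):
    """Greedy longest-match-first subword split, in the spirit of WordPiece.

    Non-initial pieces carry the conventional '##' continuation prefix.
    Returns ['[UNK]'] when the word can't be covered by the vocabulary.
    """
    tokens, i = [], 0
    while i < len(word):
        # Try the longest candidate piece starting at position i first.
        for j in range(len(word), i, -1):
            piece = word[i:j] if i == 0 else "##" + word[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            return ["[UNK]"]  # no vocabulary piece matched at this position
    return tokens
```

So a rare word like "unaffordable" gets represented by a few common fragments instead of blowing up the vocabulary with every surface form.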
Sea_Kyle t1_j9w1p5t wrote
Reply to Hurtling Toward Extinction by MistakeNotOk6203
"AGI will want to accomplish something."
Not killing or hurting humans in any way is what we should program the AGI to accomplish.
MrTacobeans t1_j9w1obd wrote
Reply to Hurtling Toward Extinction by MistakeNotOk6203
I don't discount the ramifications of a fully functional AGI, but I don't really see even the first few versions of AGI being "world-ending". Not a single SOTA model today can maintain a persistent presence without input. That gives us a very large safety gap for now. Sure, if we get an "all you need is X" research paper that makes transformers obsolete, fine. But structurally, transformers are still very much input/output algorithms.
That gives us at least a decent safety gap for the time being. I'm sure we'll figure it out when this next gen of narrow/expert AI starts being released in the next year or so. For now, an AGI eliminating humanity is still very much science fiction, no matter how convincing current or near-future models are.
Cryptizard t1_j9w1idt wrote
Reply to comment by Mortal-Region in Optimism in the Singularity in face of the Fermi-Paradox by [deleted]
That's what the inflationary argument addresses. If every universe creates 10^30 new universes a second (one interpretation of cosmic inflation and bubble universes), then at any point in time there will be exponentially more "young" universes than old ones, so almost every civilization will be among the first civilizations in its universe.
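The arithmetic behind that intuition is easy to check. If the universe population multiplies by a fixed factor every second, the surviving fraction of old universes shrinks geometrically (the 10^30 figure is illustrative, as in the comment):

```python
def fraction_younger_than(age_seconds, growth_per_second=1e30):
    """Fraction of all universes younger than `age_seconds`.

    If the population multiplies by `growth_per_second` each second, the
    share of universes at least `age_seconds` old is growth ** -age, so
    the complement is essentially everything after even one second.
    """
    return 1 - growth_per_second ** -age_seconds
```

Even one second in, all but one part in 10^30 of universes are younger than yours, which is why "we're early" stops being surprising under this picture.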
Mortal-Region t1_j9w0tqc wrote
Reply to comment by Cryptizard in Optimism in the Singularity in face of the Fermi-Paradox by [deleted]
>The universe is only 14 billion years old, and it will have conditions for life to arise for another 10-100 trillion years.
Which raises an interesting question: why so early? If the timeline is 7 meters long, why do we happen to find ourselves in the first millimeter? It gets even more acute if you allow for the possibility of digital civilizations. They'd survive into the black hole era, so now the timeline is many times the diameter of the Milky Way, yet here we are in the first millimeter. And that millimeter represents the entire time since the Big Bang. Considering that computers were invented less than a century ago, it all seems very fishy.
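The 7-meter analogy checks out numerically, taking the figures from the quoted comment (~14 billion years elapsed, ~100 trillion years of habitable conditions):

```python
universe_age_years = 14e9        # time since the Big Bang
habitable_span_years = 1e14      # ~100 trillion years of star-friendly conditions
timeline_m = 7.0                 # the whole span scaled to a 7-meter line

# Where "now" falls on that line, in millimeters from the start.
position_mm = round(timeline_m * universe_age_years / habitable_span_years * 1000, 2)
print(position_mm)  # -> 0.98
```

So all of cosmic history to date really does fit in roughly the first millimeter of the 7-meter timeline.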
adt t1_j9w062r wrote
Reply to comment by QuestionableAI in New SOTA LLM called LLaMA releases today by Meta AI 🫡 by Pro_RazE
It's a llama. It's 65 billion parameters. Seems better than some of the other crazy acronyms (or muppet characters!).
Brilliant_War4087 t1_j9w0243 wrote
I'll believe it when it can do my homework.
MistakeNotOk6203 OP t1_j9vzxp7 wrote
Reply to comment by Iffykindofguy in Hurtling Toward Extinction by MistakeNotOk6203
How else is pro-alignment sentiment supposed to spread other than by initiating discussion (even if that discussion is just a restatement)? That's how pro-AGI sentiment spread, and I think it's reasonable to assume it's how pro-alignment sentiment could spread too.
MrTacobeans t1_j9vzw1p wrote
Why are you building this on top of a closed API?
You could eventually find something in this adventure, and OpenAI could go "whoa, let's not go there" and block/ruin the work you've done. There are multiple open-source models that could be worked into the kind of flow you're creating.
On a side note, leveraging GPT-3 to create even a proto-AGI seems incredibly unlikely. If it were possible, it would likely be in the news already. You mentioned the memory limit yourself; that's a big chunk of the issue with current AI. You can't keep a "sense of mind" going when half of it is getting deleted every few prompts.
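The memory problem being described is essentially a fixed-size context window: once it fills up, the oldest turns fall off. A toy sketch of that behavior (illustrative only, not how any particular model manages context):

```python
from collections import deque

class RollingContext:
    """Keep only the most recent conversation turns, like a fixed context window.

    Older turns silently fall off the front, which is the "sense of mind"
    loss described above.
    """
    def __init__(self, max_turns=3):
        self.turns = deque(maxlen=max_turns)

    def add(self, turn: str):
        self.turns.append(turn)  # evicts the oldest turn once full

    def prompt(self) -> str:
        return "\n".join(self.turns)
```

Anything outside the window is simply gone unless some separate memory system (summaries, retrieval, etc.) writes it back in.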
Nukemouse t1_j9w6klx wrote
Reply to And Yet It Understands by calbhollo
I don't get the bit about symbolic ChatGPT; can someone explain it to me?