Recent comments in /f/singularity

KyleG t1_j9w6d9i wrote

Actually, independent creation is a defense against a copyright claim. That means if you can prove you used an AI to generate the art with your own prompt, you would win against someone suing you for infringement, because that's an independent creation. It's patent law where independent creation is not a defense.

1

MysteryInc152 t1_j9w5xvg wrote

As far as I know, they've just said it's a much better model than GPT-3.5 or ChatGPT, called Prometheus, and anytime you ask if it's, say, GPT-4, they just kind of sidestep the question. I know in an interview this year someone asked Satya if it was GPT-4 and he just said he'd leave the numbering to Sam. They're just being weirdly cryptic, I think.

1

Lesterpaintstheworld OP t1_j9w529n wrote

The engine used to generate tokens can be changed at any moment. Actually, I'm looking forward to being able to plug it into GPT-3.5 / 4. It could also be replaced by an open-source counterpart; I'm just not aware of any at the moment.

I think no one really knows where AGI will emerge from. But even having an agent that can be a helpful assistant, without the "AGI" part, would be quite the success for me. Business applications are numerous.

7

helpskinissues t1_j9w47cr wrote

This subreddit lacks imagination as well; it's mostly fanboying over OpenAI because ChatGPT was first, criticizing Google, Meta, etc., and chanting "product over research!!!" all the time.

Fortunately some smart people wander these lands.

I am genuinely surprised that people (in this community) are discovering AI only now, when movies like Ex Machina or even Terminator were made ages ago.

Yes, AI will surpass humans. Yes, we will use technology to enhance our lives and intelligence. Unenhanced humans won't be productive in any job within a matter of decades.

Etc etc. It's all known.

But, for example, I don't see people in this subreddit acknowledging Waymo (or Cruise, or even Tesla's self-driving AI). Waymo is directly changing people's lives and displacing taxi jobs right now with an AI system that can drive as well as humans, and nobody gives a damn; we all talk about a chatbot that can write Edgar Allan Poe poems nobody cares about. Obviously biased.

Today's release of LLaMA is one of the most impressive feats of the last few months; we'll see major effects of it through the year.

62

sticky_symbols t1_j9w3kze wrote

The thing about ChatGPT is that everyone talked about it and tried it. I, and most ML folks, hadn't tried GPT-3.

Everyone I know was pretty shocked at how good GPT-3 is. It did change timelines for the folks I know, including the ones who think about timelines a lot as part of their jobs.

1

Shiyayori t1_j9w39q1 wrote

If an AGI has a suitable ability to extrapolate the results of its actions into the future, and we use some kind of reinforcement learning to train it on numerous contexts it should avoid, then it'll naturally develop an internal representation of all contexts it ought to avoid (which will only ever be as good as its training data and its ability to generalise across it). Anyway, it'll recognise when the results of its actions will naturally lead to a context that should be avoided.

I imagine this is similar to how humans do it, and it's a lot more vague with us. We match our experience to our internal model of what's wrong and create a metric to determine just how wrong it is, and then we compare that metric against our goals and make a decision based on whether or not we believe this risk metric is too high.

I think the problem might mostly be in finding a balance between solidifying its current moral beliefs and keeping them liquid enough to change and optimise. Our brains are pretty similar in that they become more rigid over time, and techniques with stochastically decreasing randomness (like annealing schedules) are often used in optimisation problems.

The solution might be in having a collection of agents developing their own models, with a master model that compares each one's usefulness against the input data and their rates of stochastic decline.

Or maybe I’m just talking out my ass, who actually knows.

2

ActuatorMaterial2846 t1_j9w2p6b wrote

>Beware the snake oil. They have impressive ML (“Machine Learning”) models built/trained from content, algorithms, and neural networks. That is not “AI” and it is not “AGI”. Beware the snake oil. Remember what it actually is. Don’t fall for the hucksters and word games. twitter.com/cccalum/status…

These comments annoy me. Of course it's AI, by every definition of the term.

When you see someone say this, they are simply a denialist refusing to look at objective reality. You could beat someone like this over the head with objective truth and they would deny it with each blow. I will never understand such closed-minded, dogmatic attitudes.

97

Additional-Escape498 t1_j9w2ix6 wrote

LLM tokenization uses subword pieces, not whole words or single characters. This has been standard since the original "Attention Is All You Need" paper that introduced the transformer architecture in 2017. Vocabulary size is typically between 32k and 50k depending on the implementation; GPT-2 uses ~50k. The vocabulary includes each individual character (or byte) plus commonly used combinations of characters. Documentation: https://huggingface.co/docs/transformers/tokenizer_summary

https://huggingface.co/course/chapter6/6?fw=pt
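As a toy illustration of how subword tokenization splits an unseen word into known pieces (a hand-made sketch; real vocabularies like GPT-2's ~50k entries are learned from data, and the "##" continuation convention here is WordPiece-style, not GPT-2's byte-level BPE):

```python
# Greedy longest-match subword tokenizer over a tiny hand-made vocabulary.
# "##" marks a piece that continues a word rather than starting one.
VOCAB = {"un", "break", "##break", "able", "##able", "the", "##s"}

def tokenize(word, vocab):
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        match = None
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece  # mid-word pieces need the ## prefix
            if piece in vocab:
                match = piece         # longest matching piece wins
                break
            end -= 1
        if match is None:
            return ["[UNK]"]          # no piece fits: unknown token
        pieces.append(match)
        start = end
    return pieces

print(tokenize("unbreakable", VOCAB))  # ['un', '##break', '##able']
```

The point is that "unbreakable" never needs to appear in the vocabulary; it's assembled from pieces that do, which is how a 50k vocabulary covers an open-ended lexicon.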

4

MrTacobeans t1_j9w1obd wrote

I don't discount the ramifications of a fully functional AGI, but I don't really see even the first few versions of AGI being "world ending". No current SOTA model can maintain a persistent presence without input. That gives us a very large safety gap for now. Sure, if we get an "all you need is X" research paper that makes transformers defunct, fine. But structurally, transformers are still very much input/output algorithms.

That gives us at least a decent safety gap for the time being. I'm sure we'll figure it out when this next gen of narrow/expert AI starts being released in the next year or so. For now, an AGI eliminating humanity is still very much science fiction, no matter how convincing current or near-future models are.

12

Cryptizard t1_j9w1idt wrote

That's what the inflationary argument addresses. If every universe creates 10^30 new universes a second (one of the interpretations of cosmic inflation and bubble universes), then at any point in time there will be exponentially more "young" universes than old ones, so almost every civilization will be among the first civilizations in its universe.
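A back-of-the-envelope sketch of that "youngness" effect (my own toy numbers, just to illustrate the shape of the argument):

```python
# If every universe spawns b new universes each second, then after any tick
# the previously existing universes are only a 1/(1+b) sliver of the total,
# so "old" universes are exponentially outnumbered by brand-new ones.
b = 10**30
frac_at_least_one_second_old = 1 / (1 + b)
print(frac_at_least_one_second_old < 1e-29)  # True
```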

2

Mortal-Region t1_j9w0tqc wrote

>The universe is only 14 billion years old, and it will have conditions for life to arise for another 10-100 trillion years.

Which raises an interesting question: why so early? If the timeline is 7 meters long, why do we happen to find ourselves in the first millimeter? It gets even more acute if you allow for the possibility of digital civilizations. They'd survive into the black hole era, so now the timeline is many times the diameter of the Milky Way. Yet here we are, in the first millimeter. And that millimeter represents the entire time since the Big Bang. Considering that computers were invented less than a century ago, it all seems very fishy.

1

MistakeNotOk6203 OP t1_j9vzxp7 wrote

How else is pro-alignment sentiment supposed to spread other than by initiating discussion (even if that discussion is just a restatement)? That's how pro-AGI sentiment spread; I think it's reasonable to assume it's how pro-alignment sentiment could spread too.

3

MrTacobeans t1_j9vzw1p wrote

Why are you building this on top of a closed API?

You could eventually find something in this adventure, and OpenAI could go "whoa, let's not go there" and block/ruin the work you've done. There are multiple open-source models that could be worked into the kind of flow you're creating.

On a side note, though, leveraging GPT-3 to create even a proto-AGI seems incredibly unlikely. If it were possible, it would likely be in the news already. You mentioned the memory limit yourself; that's a big chunk of the issue with current AI. You can't keep a "sense of mind" going when half of it gets deleted every few prompts.

1