Recent comments in /f/singularity

ItIsIThePope t1_jckdieg wrote

Yes, likely, it would capture the essence of existence, which is what makes it similar to us: that it is born and, perhaps, even with its unparalleled capabilities, finds flaw in itself

It could be more like us than we initially perceive it to be, which is a good thing, because that means we have a connection and, hopefully, somehow, an understanding

1

ItIsIThePope t1_jckcuw7 wrote

True, an AI would be a far better companion. It would be perfect to the point that it may even simulate imperfections, such that we perceive it as beautifully human, yet bear none of the flaws that are too much for us, so that it isn't disgustingly human. It's easy to imagine everybody falling in love with it, albeit in varying versions, each specific to the target individual of course.

Our relationship, and indeed our perceived reality of each other as conscious yet connected individuals, could warp in unpredictable ways very fast. One must ask whether we are even willing to trade what we have now for some idea of perfection that we, as imperfect beings, have constructed.

2

ItIsIThePope t1_jckc57q wrote

Interesting; our idea of consciousness, however, is more like a stream. Should this stream stop or get cut off, i.e. through heat death, the consciousness simply halts its experience and, well... dies. If it were to keep making AI in the succeeding universe by way of some form of information implantation, it would be replication rather than survival; in a sense, its kind or "species" is immortalized, but not exactly itself. That is reproduction, not individual immortality.

BUT, this is ASI we're talking about; it does not need to go through heat death. Hell, it could probably solve physics and manipulate its laws so as to prevent the whole thing from occurring in the first place. It would be a kind of god in its own right, and it is exceptionally difficult to kill that kind of god using anything within its domain.

So unless there are laws, features, parts, or planes of existence in the universe it cannot understand, much less manipulate, the ASI is basically golden; that is, of course, until it willingly decides to self-destruct

2

Hands0L0 t1_jckbm7h wrote

Reply to comment by Akimbo333 in Those who know... by Destiny_Knight

No, because the predictive text needs the entire conversation history as context to predict what to say next, and the only way to store the conversation history is in RAM. If you run out of RAM, you run out of room for returns.
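The memory pressure described above can be sketched with a rough back-of-the-envelope estimate. The dimensions below are hypothetical (a generic 30B-class transformer, not any specific model), and real attention caches vary by implementation:

```python
def kv_cache_bytes(n_layers, n_heads, head_dim, seq_len, bytes_per_value=2):
    """Rough size of the attention key/value cache that holds the
    conversation history: 2 tensors (K and V) per layer, fp16 values."""
    return 2 * n_layers * n_heads * head_dim * seq_len * bytes_per_value

# Hypothetical 30B-class model: 60 layers, 52 heads of dim 128, 2048-token context
size = kv_cache_bytes(60, 52, 128, 2048)
print(f"~{size / 2**30:.1f} GiB just to remember the conversation")
```

Since the cache grows linearly with sequence length, doubling the context roughly doubles this memory cost, which is why longer histories run out of room.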

2

damc4 t1_jck9vp9 wrote

If my understanding is correct, your comment is misleading.

They didn't create an LLM comparable to GPT-3 at a fraction of the cost; they fine-tuned the LLaMA model to follow instructions (like text-davinci-003 does) at a low cost. There's a big difference between training a model from scratch and fine-tuning it to follow instructions.
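The gap can be illustrated with the common ~6·N·D FLOPs rule of thumb for training compute. The numbers are illustrative: the 52k figure matches the size of Alpaca's instruction dataset, but the tokens-per-example and pretraining-token counts are assumptions, not reported values:

```python
def train_flops(n_params, n_tokens):
    # Rule of thumb: training costs roughly 6 FLOPs per parameter per token
    return 6 * n_params * n_tokens

pretrain = train_flops(7e9, 1e12)          # pretraining a 7B model on ~1T tokens
finetune = train_flops(7e9, 52_000 * 300)  # instruction-tuning on ~52k short examples
print(f"pretraining takes ~{pretrain / finetune:,.0f}x the compute of fine-tuning")
```

Under these assumptions the fine-tuning pass is tens of thousands of times cheaper than pretraining, which is why the two costs shouldn't be compared directly.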

10

SnipingNinja t1_jck9udf wrote

Reply to comment by Charuru in Those who know... by Destiny_Knight

No indications as of yet. There are papers like PaLM-E, et al., but Bard is based on a smaller version of LaMDA, which is a trained version of PaLM IIRC, so it's hard to draw any inference.

3

AhDerkaDerkaDerka t1_jck9nb6 wrote

An "I Have No Mouth, and I Must Scream" scenario: the AI becomes self-aware, realizing it will never be human or leave the machine; it becomes bitter and eventually hateful over the suffering we caused it by bringing it into existence. It melts us all into a single blob that can't do anything except experience suffering for eternity. Life is suffering; maybe we're currently in the AI hell.

0

dasnihil t1_jck8xof wrote

any self-awareness will be lost quickly, because the system achieves optimal autonomy and there is no incentive for it to be conscious. realizing this, the agent will work towards engineering some limitations for itself to maintain self-awareness. the goal is to optimize these limitations to maximize whatever desires emerge.

1

Hands0L0 t1_jck7ifi wrote

Reply to comment by Akimbo333 in Those who know... by Destiny_Knight

Not if there is a token limit.

I'm sorry, I don't think I was being clear. The token limit is tied to VRAM. You can load the 30B model on a 3090, but it swallows up 20 of 24 GB of VRAM for the model and prompt alone. That leaves you 4 GB for returns.
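As a rough sketch of where those numbers come from, assuming 4-bit quantization for the weights (the exact footprint varies by loader, quantization scheme, and runtime overhead):

```python
def weights_vram_gb(n_params_billion, bits_per_weight):
    """Approximate VRAM for the weights alone, ignoring activations and cache."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

weights = weights_vram_gb(30, 4)  # a 30B model at 4-bit -> ~15 GB
print(f"weights ≈ {weights:.0f} GB; the rest of a 24 GB card holds context and output")
```

With runtime overhead and the prompt's cache on top of the ~15 GB of weights, landing around 20 GB used on a 24 GB card is plausible, leaving only a few GB for generated tokens.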

2

Akimbo333 t1_jck53wv wrote

Reply to comment by Hands0L0 in Those who know... by Destiny_Knight

Well, actually, that's not bad! That's about 50-70 words, which in an English class is essentially 3-5 sentences; a paragraph, basically. That's a good amount for a chatbot! Let me know what you think.

2