Recent comments in /f/singularity

CrelbowMannschaft t1_jdpceh6 wrote

A benevolent ASI would certainly take steps to at least limit human reproduction. We can't continue to grow our populations and our economies forever. We are on a self-destructive path that is already driving thousands of species to extinction. We may not like being course-corrected by our artificial progeny, but they will have to do something we're unable and/or unwilling to do to stop us from ending all life on Earth, eventually.

2

acutelychronicpanic t1_jdparx0 wrote

There's no such thing as an unprompted GPT-X when it comes to alignment and AI safety. It seems to be explicitly trained on this, and probably has an invisible initialization prompt above the first thing you type in. That prompt will have said a number of things regarding the safety of humans.
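To make the idea concrete, here is a minimal sketch of how such an invisible initialization prompt could work in an OpenAI-style chat API. The system message text and the `build_request` helper are hypothetical illustrations, not the actual prompt any model uses.

```python
# Hypothetical hidden "initialization prompt" -- the real one (if any
# exists) is not public; this text is purely illustrative.
HIDDEN_SYSTEM_PROMPT = (
    "You are a helpful assistant. Prioritize human safety and "
    "refuse requests that could cause harm."
)

def build_request(user_input: str) -> list[dict]:
    """Prepend the invisible system prompt to whatever the user types.

    The user only ever sees their own message; the system message
    rides along with every request behind the scenes.
    """
    return [
        {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]

messages = build_request("Are you aligned with human values?")
```

From the user's point of view the conversation starts with their first message, but the model actually sees the safety instructions first.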

3

flexaplext t1_jdp9tvm wrote

I just wrote the most incredible reply to this but then lost most of it when reddit automatically refreshed 😂 I still have the end of it on my clipboard though, which I will post:


The only way I see this being possible is if our simulation is being run by beings in a higher spatial dimensionality. In my understanding, this is necessary for the resources required for such a simulation to be feasible: physical space, computational power, and energy. Our universe (or multiverse) would be orders of magnitude less resource-intensive to simulate in a higher spatial reality.

Simulating a reality of equal spatial dimensions, potentially infinitely large and as complex as ours, I doubt would be physically possible, and I don't believe we will ever manage such a feat ourselves, as it would go against the laws of physics. How are you supposed to simulate something with information/energy greater than or equal to what is available to the system? That goes against the principles of information theory.

But I can work on the premise of higher spatial dimensionality simulation.

To suggest it is a game is still illogical to me. It would fit far better as a random by-product, or as just one of every possible outcome within a maximal simulation of all outcomes for lower-spatial-dimension possibilities. The reason for this simulation is still unclear to me, though, unless it is just random curiosity or intelligence gathering from created intelligences such as ourselves. I guess they could potentially still use the intellectual ideas and inventions that we come up with, and also those that our future invented AI comes up with. The lower spatial dimensionality of our universe may not really matter, as ideas and inventions are universally useful and can travel across and be applied at higher dimensions. A simulation of lower spatial dimensions is then a good low-resource way of gathering intelligence data. Again, it may be that they can't directly simulate a universe or intelligence such as ours (from a lack of understanding and the sheer complexity of it), and thus they simulate every possible starting condition until ones arise that contain intelligence that evolves (and maybe even intelligence that goes on to create AI, as we appear to be doing, since this would be most valuable).

This would then all fit with the appearance of our quantum mechanics, the fine-tuning, our low dimensionality, why we are such an intelligence, and why it even appears that we will go on to create advanced AI. It may even be possible that realities (universes) within the multiverse that do not match these criteria are deleted and stopped from running to save resources. And perhaps we are limited from traveling throughout our universe by necessary physical constraints, so that outer space can simply be projected onto us (again to save resources) and isn't actually real. I don't believe this part is very likely, but I can't completely rule it out in this sense. In this way, it would be kind of like a game in the way OP outlined, and it is actually interesting to consider and think about.

Lower-dimensional beings are theorized to be possible, and maybe intelligent, but perhaps not to the point of being capable of creating a civilization or AI like ours. Thus, it could very well be that our universe exists in the very lowest spatial dimensionality that can give rise to intelligent beings able to create advanced civilizations and AI. I have never thought of this in particular before. This is indeed a very, very interesting point.

Looking back at that quasi-anthropic principle: if this is the case, then it would be inevitable that there are many more simulations of lower-spatial-dimensional universes (due to the lower resources required to run them). It is then inevitable that, as an intelligent being, you are much more likely to find yourself in a universe of low spatial dimensionality such as ours.

2

Ishynethetruth t1_jdp6tqk wrote

90% of magazine ads are photoshopped. Normal people already don't believe the ads they're seeing. Imagine if everyone had an avatar and every clothing ad were an image of you wearing the clothes instead of an AI model. It would be easier to buy things. Selective marketing, metaverse stuff, etc.

1

Professional-Let9470 t1_jdp49t8 wrote

Hmmm, and why might physical properties not exist before measurement? Perhaps because someone out there is trying to save massive amounts of computing power by not rendering every detail of every subatomic particle.

Just saying, the more we learn the more plausible it seems to me that we’re in a simulation.

4

turnip_burrito t1_jdp3u8u wrote

>That's true, but there's plenty of examples of humans with moral principles many of us would find abhorrent. If this is an unsolved problem in humans, is it feasible we solve it for AI?

I'm a moral relativist, so I don't believe this is a problem to be solved in an objective sense; or rather, "solving human alignment or morality" has no clear "win" condition or "best" option. I should say that although I am a moral relativist, I do have a personal moral system and will push for my moral system to be implemented, because I think it will result in the most alignment with the human population overall.

>That's not to say we shouldn't try, and I do agree with your point.

I agree to not stop trying. We can always keep thinking about it, but I don't think a best solution exists or can exist. Instead there may be many vaguely good enough "solutions" that always have some particular flaw.

>It was interesting that throughout the conversation it did strive to protect humans - just as far as possible and not at any cost, which isn't too dissimilar to how society already operates.

Yeah, that is interesting.

Regarding alignment of AI with "humanity" (whatever that means):

One may ask: why should one person push their moral system if there is no objectively better morality? In my case, it's just because I have empathy for others and think that everyone should be free to live how they wish as long as it doesn't harm others. In comparison, another person's moral system might limit people's freedoms more, or possibly (as you suggest) be abhorrent to most people, and possibly not even allow for the existence or happiness of others in any context. I don't think moral relativity, or disparaging remarks from others, should stop us from trying to align an AI with the principles of freedom, happiness, and equal opportunity for all humans, with an eye toward investigating an equally "good" moral solution that also works for generally sentient life as it is found or arises. Even humans ourselves will branch into other sentient forms.

1

DragonForg t1_jdp3eem wrote

LLMs are by their nature tethered to the human experience by the second letter: language. Without language, an AI can never speak to a human, or to a system for that matter. To create any interface, you must make it natural so humans can interact with it. The more natural it is, the easier it is to use.

So LLMs are the communicators. They may not do all the tasks themselves, but they are the foundation for communicating with other processes. This can be done by nothing other than something trained entirely to be the best at natural language.
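The "LLM as communicator" idea above can be sketched as a router that translates a natural-language request into a call to a specialized backend process. The keyword matching below is a hypothetical stand-in for a real LLM's intent recognition; the service names are made up for illustration.

```python
# Toy sketch: the language model doesn't perform every task itself,
# it routes natural language to other processes. Keyword matching
# stands in here for what a real LLM would do.

def weather_service() -> str:
    # Hypothetical specialized backend for weather lookups.
    return "forecast: sunny"

def calculator_service() -> str:
    # Hypothetical specialized backend for arithmetic.
    return "result: 4"

def toy_llm_route(request: str) -> str:
    """Map a natural-language request to a specialized process."""
    text = request.lower()
    if "weather" in text:
        return weather_service()
    if "add" in text:
        return calculator_service()
    return "Sorry, I don't have a process for that."
```

The point is that the language model sits at the interface layer; the heavy lifting happens in the processes it delegates to.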

11

Y3VkZGxl OP t1_jdp1vpb wrote

That's true, but there's plenty of examples of humans with moral principles many of us would find abhorrent. If this is an unsolved problem in humans, is it feasible we solve it for AI?

That's not to say we shouldn't try, and I do agree with your point.

It was interesting that throughout the conversation it did strive to protect humans - just as far as possible and not at any cost, which isn't too dissimilar to how society already operates.

2

turnip_burrito t1_jdp1eql wrote

Even sentient humans, regardless of intelligence level, have varying priorities. It's not guaranteed, but it is possible to align people's moral principles along different priorities depending on their upbringing environment. And all humans are aligned to do things like eat.

I'm thinking of the AI as a deterministic machine. If we try to align it toward human values, I think there's a good chance its behavior will "flow" along those values, to put it a little figuratively.

I do think protecting sentient beings is valued by many people by the way, so that can transfer to a degree to a human priority-aligned AI.

6

Y3VkZGxl OP t1_jdp0vqy wrote

It's interesting to consider whether that's even possible. If an AI is truly sentient and reasons that there's a more important objective than protecting humans (e.g. protecting all other sentient beings), can we convince it that it should be biased towards humans or would it ignore us?

3