Recent comments in /f/singularity
boat-dog t1_jdpdox3 wrote
Reply to comment by Awkward-Skill-6029 in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
Good question, and thank you for giving me something to think about.
CrelbowMannschaft t1_jdpceh6 wrote
Reply to comment by turnip_burrito in What do you want to happen to humans? by Y3VkZGxl
A benevolent ASI would certainly take steps to at least limit human reproduction. We can't continue to grow our populations and our economies forever. We are on a self-destructive path that is already driving thousands of species to extinction. We may not like being course-corrected by our artificial progeny, but they will have to do something we're unable and/or unwilling to do to stop us from ending all life on Earth, eventually.
acutelychronicpanic t1_jdpbkul wrote
You don't need an AI to be smarter than humans in order to get an intelligence explosion. You just need an AI that's better at AI design. This might be much easier.
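A toy numerical sketch of that compounding effect (all numbers here are invented purely for illustration, not a claim about real systems):

```python
# Toy model of recursive self-improvement (illustrative numbers only).
# "design_skill" is how much better each generation is at building its successor.

capability = 1.0      # current AI's general capability (human designers ~= 1.0)
design_skill = 1.1    # only 10% better than humans at AI design

for generation in range(1, 11):
    # Each generation's capability is boosted by its predecessor's design skill,
    # and more capable systems are assumed to be proportionally better designers.
    capability *= design_skill
    design_skill = 1.0 + 0.1 * capability
    print(f"gen {generation}: capability {capability:.2f}")
```

Even with that modest 10% starting edge, the loop goes super-linear within a few generations.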
acutelychronicpanic t1_jdparx0 wrote
Reply to What do you want to happen to humans? by Y3VkZGxl
There's no such thing as an unprompted GPT-X when it comes to alignment and AI safety. It's explicitly trained on this, and there is probably an invisible initialization prompt sitting above the first thing you type in. That prompt will have said a number of things regarding the safety of humans.
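A minimal sketch of how such an invisible prompt is typically wired up (the system text and the `build_messages` helper are hypothetical stand-ins, not OpenAI's actual prompt or code):

```python
# Sketch: the user never sees the system message, but every request includes it.
# SYSTEM_PROMPT and build_messages() are hypothetical stand-ins.

SYSTEM_PROMPT = (
    "You are a helpful assistant. Refuse requests that could endanger humans. "
    "Do not provide instructions for weapons, self-harm, or illegal activity."
)

def build_messages(user_input: str) -> list[dict]:
    """Prepend the hidden system prompt to whatever the user typed."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]

# What the model actually receives for the "first thing you type in":
print(build_messages("What do you want to happen to humans?"))
```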
Lawjarp2 t1_jdp9zi8 wrote
That's actually pretty bad. This means there will be a gap between when AGI arrives and when most jobs could get replaced.
Unfrozen__Caveman t1_jdp9y3o wrote
Reply to comment by boat-dog in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
I don't see this particular situation as related to AI progress at all. Levi's replacing models to reduce their operating costs is just going to mean more corporate profits for them.
flexaplext t1_jdp9tvm wrote
Reply to comment by fastinguy11 in The whole reality is just so bizzare when you really think about it. by aalluubbaa
I just wrote the most incredible reply to this but then lost most of it when reddit automatically refreshed 😂 I still have the end of it on my clipboard though, which I will post:
The only way I see this being possible is if our simulation is being run by beings in a higher spatial dimensionality. In my understanding this is necessary so that the resources required for such a simulation are feasible, in terms of physical space, computational power, and energy. Our universe (or multiverse) would be magnitudes less resource-intensive in a higher spatial reality.
As for simulating a reality of equal spatial dimensions, potentially infinitely large and as complex as ours: I doubt that would be physically possible, and I don't believe we could manage such a feat ourselves, as it would go against the laws of physics. How are you supposed to simulate something with greater or equal information/energy to what the system has available? That would violate basic principles of information theory.
But I can work from the premise of a higher-spatial-dimensionality simulation.
To suggest it is a game is still illogical to me. It would fit far better as a random by-product, or as just one of every possible outcome within a maximized simulation of all outcomes for lower-spatial-dimension possibilities. The reason for this simulation is still unclear to me, though, unless it is just random curiosity, or intelligence-gathering from created intelligences such as ourselves. I guess they could still use the intellectual ideas and inventions that we come up with, and also those our future invented AI comes up with. The lower spatial dimensionality of our universe may not really matter, as ideas and inventions are universally useful and could carry across and be applied at higher dimensions. Our simulation being of lower spatial dimensions is then a good low-resource way of gathering intelligence data. Again, it may be that they can't directly simulate a universe or an intelligence such as ours (from a lack of understanding and the sheer complexity of it), and thus they simulate every possible starting condition until some arise that contain intelligence that evolves (and maybe even intelligence that goes on to create AI, like we appear to be doing, as this would be most valuable).
This would then all fit with the appearance of our quantum mechanics, fine-tuning, our low dimensionality, why we are such an intelligence, and why it even appears we will go on to create advanced AI. It may even be possible that realities (universes) within the multiverse that do not match these criteria are deleted and stopped from running, to save on resources. And I guess, potentially, we are limited to traveling throughout our universe by deliberate physical constraints, so that outer space could be projected onto us (again to save on resources) and isn't actually real. I don't believe this part is very likely, but I can't completely rule it out in this sense. In this way, it would be kind of like a game in the way OP outlined, and it is actually interesting to consider.
Lower-dimensional beings are theorized to be possible, and maybe even intelligent, but perhaps not to the point of creating a civilization or AI like ours. Thus, it could very well be that our universe exists in the very lowest spatial dimensionality that can give rise to intelligent beings able to create advanced civilizations and AI. I have never thought of this in particular before. This is indeed a very, very interesting point.
Looking back at that quasi-anthropic principle: if this is the case, then there would inevitably be many more simulations of lower-spatial-dimensional universes (due to the lower resources required to run them). As an intelligent being, you are then much more likely to find yourself in a universe of low spatial dimensionality, such as ours.
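A toy way to make that last step concrete (the cost-vs-dimension scaling is a pure assumption I'm inventing for illustration): if simulation cost grows steeply with spatial dimension and the simulators have a fixed budget per dimension, most simulated observers end up in the lowest viable dimension.

```python
# Toy anthropic calculation; the cost-per-dimension numbers are purely assumed.
# Fixed compute budget per dimension => number of runnable simulations ~ 1/cost.

BUDGET = 1_000_000
cost = {3: 1, 4: 100, 5: 10_000}  # assumed cost per simulation by spatial dimension

sims = {d: BUDGET // c for d, c in cost.items()}
total = sum(sims.values())

for d, n in sims.items():
    print(f"{d}D: {n} simulations, P(observer is here) = {n / total:.4f}")
```

Under these made-up numbers, about 99% of simulated observers find themselves in the 3D universes.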
redpandabear77 t1_jdp9jg0 wrote
Reply to comment by Unfrozen__Caveman in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
People don't even boycott companies that use slave labor in other countries. I doubt anyone will be boycotting this.
Nanaki_TV t1_jdp7elx wrote
Reply to comment by NeonCityNights in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
My background and 10 years experience in economics.
lajfat t1_jdp71p0 wrote
How long until a company only shows you models of your ethnicity? (And if you think they don't know your ethnicity, you're probably wrong.)
Ishynethetruth t1_jdp6tqk wrote
Reply to comment by KidKilobyte in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
90% of magazine ads are photoshopped. Normal people already don't believe the ads they're seeing. Imagine if everyone had an avatar and every clothing ad were an image of you wearing the clothes instead of an AI model. It would be easier to buy things. Selective marketing, metaverse stuff, etc.
NeonCityNights t1_jdp6kha wrote
Reply to comment by Nanaki_TV in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
where are you getting this from? any links?
Zer0D0wn83 t1_jdp68ky wrote
Reply to comment by maskedpaki in "Non-AGI systems can possibly obsolete 80% of human jobs"-Ben Goertzel by Neurogence
Exactly this. 10x the ability of GPT-4 may not be AGI, but to anyone but the most astute observer there will be no practical difference.
Kinexity t1_jdp5k33 wrote
Reply to comment by Unfrozen__Caveman in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
In the end, people will buy whatever is cheaper. Automation is unstoppable on all fronts because of competition.
Professional-Let9470 t1_jdp49t8 wrote
Reply to comment by plateauphase in The whole reality is just so bizzare when you really think about it. by aalluubbaa
Hmmm, and why might physical properties not exist before measurement? Perhaps because someone out there is trying to save massive amounts of computing power by not rendering every detail of every subatomic particle.
Just saying, the more we learn the more plausible it seems to me that we’re in a simulation.
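That "don't render until measured" idea maps neatly onto lazy evaluation in programming; a minimal sketch (the `Particle` class and its property are just illustrative, not physics):

```python
import functools
import random

class Particle:
    """Toy 'particle' whose properties are only computed when first observed."""

    @functools.cached_property
    def spin(self) -> int:
        # The expensive "rendering" is deferred until someone measures it.
        print("...rendering spin now...")
        return random.choice([-1, 1])

p = Particle()   # nothing has been rendered yet
print(p.spin)    # first measurement triggers the computation
print(p.spin)    # cached: no need to re-render what was already observed
```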
Trismegistus27 t1_jdp48t4 wrote
Reply to comment by maskedpaki in "Non-AGI systems can possibly obsolete 80% of human jobs"-Ben Goertzel by Neurogence
He means at some point in the past
turnip_burrito t1_jdp3u8u wrote
Reply to comment by Y3VkZGxl in What do you want to happen to humans? by Y3VkZGxl
>That's true, but there's plenty of examples of humans with moral principles many of us would find abhorrent. If this is an unsolved problem in humans, is it feasible we solve it for AI?
I'm a moral relativist, so I don't believe this is a problem to be solved in an objective sense; "solving human alignment or morality" has no clear "win" condition or "best" option. I should say that although I am a moral relativist, I do have a personal moral system and will push for it to be implemented, because I think it will result in the most alignment with the human population overall.
>That's not to say we shouldn't try, and I do agree with your point.
I agree we shouldn't stop trying. We can always keep thinking about it, but I don't think a best solution exists or can exist. Instead there may be many vaguely good-enough "solutions" that always have some particular flaw.
>It was interesting that throughout the conversation it did strive to protect humans - just as far as possible and not at any cost, which isn't too dissimilar to how society already operates.
Yeah, that is interesting.
Regarding alignment of AI with "humanity" (whatever that means):
One may ask: why should one person push their moral system if there is no objectively better morality? In my case, it's because I have empathy for others and think everyone should be free to live how they wish as long as it doesn't harm others. In comparison, another person's moral system might limit people's freedoms more, or possibly (as you suggest) be abhorrent to most people, and possibly not even allow for the existence or happiness of others in any context. I don't think moral relativity, or disparaging remarks from others, should stop us from trying to align an AI with the principles of freedom, happiness, and equal opportunity for all humans, with an eye toward investigating an equally "good" moral solution that also works for sentient life generally, as it is found or arises. Even we humans will branch into other sentient forms.
DragonForg t1_jdp3l4p wrote
Personally, I would like to say that we are in an evolutionary model, where only the best model survives. That model is us. We reached the end goal, which is why we all get to experience it.
DragonForg t1_jdp3eem wrote
Reply to comment by Neurogence in "Non-AGI systems can possibly obsolete 80% of human jobs"-Ben Goertzel by Neurogence
LLMs are by their nature tethered to the human experience, right down to their second letter: Language. Without language, AI can never speak to a human, or to a system for that matter. To create any interface, you must make it natural so humans can interact with it. The more natural it is, the easier it is to use.
So LLMs are the communicators. They may not do all the tasks themselves, but they are the foundation for communicating with other processes. Nothing can do this better than something trained entirely to be the best at natural language.
adwrx t1_jdp2uzt wrote
What a fucking joke
Y3VkZGxl OP t1_jdp1vpb wrote
Reply to comment by turnip_burrito in What do you want to happen to humans? by Y3VkZGxl
That's true, but there's plenty of examples of humans with moral principles many of us would find abhorrent. If this is an unsolved problem in humans, is it feasible we solve it for AI?
That's not to say we shouldn't try, and I do agree with your point.
It was interesting that throughout the conversation it did strive to protect humans - just as far as possible and not at any cost, which isn't too dissimilar to how society already operates.
turnip_burrito t1_jdp1eql wrote
Reply to comment by Y3VkZGxl in What do you want to happen to humans? by Y3VkZGxl
Even sentient humans, regardless of intelligence level, have varying priorities. It's not guaranteed, but it is possible to align people's moral principles along different priorities depending on their upbringing environment. And all humans are aligned to do things like eat.
I'm thinking of the AI as a deterministic machine. If we try to align it toward human values, I think there's a good chance its behavior will "flow" along those values, to put it a little figuratively.
I do think protecting sentient beings is valued by many people by the way, so that can transfer to a degree to a human priority-aligned AI.
Y3VkZGxl OP t1_jdp0vqy wrote
Reply to comment by turnip_burrito in What do you want to happen to humans? by Y3VkZGxl
It's interesting to consider whether that's even possible. If an AI is truly sentient and reasons that there's a more important objective than protecting humans (e.g. protecting all other sentient beings), can we convince it that it should be biased towards humans or would it ignore us?
turnip_burrito t1_jdpe37n wrote
Reply to comment by CrelbowMannschaft in What do you want to happen to humans? by Y3VkZGxl
Yes, I think limiting our reproduction or number of sentient organisms to some ASI-determined threshold is also wise if we want to ensure our quality of life.