Recent comments in /f/singularity

DowntownYou5783 t1_jaa837o wrote

I liked the story. These LLMs are already remarkable. Hard to imagine where things are heading. But I'm curious as heck to see how things turn out. Here's what ChatGPT had to say in response to the following prompt:

Write a nice response in Reddit to the following story [copied your ChatGPT story]:

Wow, what an incredible story! Thank you for sharing your experience with us. It's fascinating to hear about your experiment with the brain-computer interface and the unexpected results that followed.

It's amazing to think about the potential of technology and how it could change our lives in ways we can't even imagine. Your story certainly brings up some interesting questions about the nature of consciousness and the relationship between humans and machines.

It's also intriguing to hear about your communication with the future AI system and the insights it has given you. While some may be skeptical of such claims, it's important to keep an open mind and consider the possibilities that advanced technology could bring.

Thank you again for sharing your story with us. It's truly mind-blowing, and I'm sure it will inspire many others to think about the future of technology and its potential impact on our lives.

3

pnartG t1_jaa7uyz wrote

As I explained above, UBI is not going to happen. UBI, like other types of social safety nets (health insurance, publicly funded college education, social security, etc.), is a project of the left. But the world is shifting to the right. All over the world, right-wing authoritarians are coming into power. Who do you think will win the US 2024 elections? And the right does not like UBI or any other social safety net programmes.

The "bubble of bliss", as Yuli-Ban, above, calls it, that we're living in is the historical exception. Throughout all of human history there were no "social safety nets". The common people died by their millions in plagues, famines, wars or just general chaos. But the species survived and will continue to do so.

1

Arachnophine t1_jaa76vg wrote

This is also assuming it doesn't just do something we don't understand at all, which it almost certainly would. Maybe it thinks of a way to shuffle the electrons around in its CPU to create a rip in spacetime and the whole galaxy falls into an alternate dimension where the laws of physics favor the AI and organic matter spontaneously explodes. We just don't know.

We can't foresee the actions an unaligned ASI would take in the same way that a housefly can't foresee the danger of an electric high-voltage fly trap. There's just not enough neurons and intelligence to comprehend it.

2

drsimonz t1_jaa68ou wrote

The thing is, by definition we can't imagine the sorts of strategies a superhuman intelligence might employ. A lot of the rhetoric against worrying about AGI/ASI alignment focuses on "solving" some of the examples people have come up with for attacks. But these are just that - examples. The real attack could be much more complicated or unexpected. A big part of the problem, I think, is that this concept requires a certain amount of humility - recognizing that while we are the biggest, baddest thing on Earth right now, this could definitely change very abruptly. We aren't predestined to be the masters of the universe just because we "deserve" it. We'll have to be very clever.

1

XvX_k1r1t0_XvX_ki t1_jaa5bxh wrote

I agree with most of what you said. If I felt certain that AI will in fact replace these biological systems with something even a little bit better, then I would agree. But we are not even fully sure it will happen during our lifetime (I hope so). It is also very likely that the road there will be very bumpy.

It's hard to predict what is going to happen, but it is possible that society will become very unstable. And before you get your hands on that new piece of tech that will let you hack your brain and limbic system, there are many things that can prevent you from getting it: new feudalism, riots, anti-tech movements, AI wars, global warming, etc.

Bettering yourself right now using hard work and other classical evolutionary tools will vastly improve your chances of getting out of those societal changes alive and with as many possibilities intact as you can. People who worked their asses off and accumulated some assets have a far bigger chance of surviving what is coming than someone who just waits for it to happen.

It is also unnerving that our whole existence happened by chance and through very long, brutal processes like natural selection, but that doesn't necessarily diminish what these processes accomplished.

Choosing some process and calling it "just [insert any explanation]" is very irresponsible, because in the end we are just moving atoms and smaller particles.

3

alexiuss t1_jaa53pw wrote

No.

Here's the giant problem: in both MJ and OpenAI's GPT-3, the porn/wrong-think censors are absolute trash. They cause false positives, resulting in a very, VERY high failure rate on inquiries even when the topic isn't porn or controversial. If you worked with image makers and LLMs as much as I do, over 14 hours a day, you would notice the pattern of failure and get incredibly frustrated by it too.

You simply don't notice that you're being censored because you don't pay attention and don't need to work with coherent narrative flow for writing.

MJ censors people in bikinis and drawings of zombies - the word "corpse" is banned, and that is NOT god damn porn. The list of banned words in MJ is huge, and they keep expanding it every week with new words without letting anyone know what they are: https://decentralizedcreator.com/list-of-banned-words-in-midjourney-discord/

GPT-3 censored concept writing about battles of supervillains vs. heroes, which is NOT fucking porn either.

Something doesn't have to be porn for the idiotic, poorly written censor software implemented by corporations to mistakenly assume it's wrong-think. The current censor AIs are absolute, asinine trash. I have specialized scripts that catch the AI output before the result is deleted, and it's not porn, I assure you. It's just false positives.
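This failure mode is easy to reproduce. Here's a minimal Python sketch of a banned-word filter, assuming simple substring matching (one plausible way such a blocklist could be implemented - the actual MJ/OpenAI filters are not public, and this blocklist is hypothetical apart from "corpse", which the comment above mentions):

```python
# Toy banned-word filter. BANNED is a hypothetical blocklist; "corpse"
# is the one entry taken from the MJ example above.
BANNED = {"corpse", "blood", "naked"}

def is_flagged(prompt: str) -> bool:
    """Flag a prompt if any banned word appears as a substring."""
    text = prompt.lower()
    return any(word in text for word in BANNED)

# An innocuous art prompt gets flagged because "corpse" is on the list,
# even though nothing about it is pornographic.
print(is_flagged("a zombie rising from a corpse, pencil sketch"))  # True

# Naive substring matching makes it worse: "bloodhound" contains "blood".
print(is_flagged("a cute bloodhound puppy"))  # True
```

Every entry added to a blocklist like this widens the net of innocent prompts it rejects, which is exactly the frustration described above.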

You do not want to live in a world where hugs are censored by an AI overlord.

4

stardust_dog t1_jaa521h wrote

We will have two “machines” that can quite literally do everything, and they will be boring as fuck to us. These two will be the culmination of hundreds of other advances, though.

One will be able to capture all the information in an area (as time goes on, that area size increases), meaning all atoms and their bonds and where they are in relation to a fixed point. This is no small feat, and we could probably only do something like this in a VERY small area right now. That capture goes for us too, obviously.

Two: use nanotechnology to put that same configuration back together, with variations if needed, meaning you live forever and retain your memories. You can teleport; we can even time travel back to previously stored instances where needed and practical. A boring form of time travel, but it is time travel nonetheless.

1

stupendousman t1_jaa4s4n wrote

> The problem is, and a lot of humans would agree is that that's super intelligence they decide that 2 billion less people of this Earth is the best way forward

Well, there are many powerful people who believe that right now.

Many of the fears about AI already exist. State organizations killed hundreds of millions of people in the 20th century.

Those same organizations have come up with many marketing and indoctrination strategies to make people support them.

AI(s) could do this as well.

That's a danger. But the danger has already occurred, is occurring. Look at Yemen.

3

RabidHexley t1_jaa3go2 wrote

> purely to see what if any hidden underlying structures humanity has collectively missed

This is one of the things I feel has real potential even for "narrow" AI as far as expanding human knowledge. Something may very well be within the scope of known human science without humans ever realizing it. If you represented all human knowledge as a sphere it'd probably have a composition as porous as a sponge.

AI doesn't necessarily need to be able to reason "beyond" current human understanding to expand upon known science, but simply make connections we're unable to see.

2

Donkeytonkers t1_jaa2h4g wrote

True about the API access, but that is only a matter of time (a very short time, right around the corner) until Bing enters the ring with its API. Not to mention any number of other large players, e.g. Tencent, Amazon, Meta, Google, etc. Once they start giving API access, there will be hundreds if not thousands of AI branches coming out at an exponentially accelerated pace once we start mastering AI-coded apps.

2