Recent comments in /f/singularity

BSartish t1_jcerzes wrote

Check out this twitter thread.

TLDR: It's about an experiment where this dude gave GPT-4 a budget of $100 and asked it to create a profitable online business in 30 days. GPT-4 came up with the idea of setting up an affiliate marketing site producing content around eco-friendly and sustainable living products. It also found a domain name, a hosting service, and an investor for the project. The thread documents the progress of GPT-4's business venture and its interactions with the author, and apparently the project is now "valued" at $25k.

7

ecnecn t1_jceqr3l wrote

There are not unlimited clients - the number of available contracts is limited. Before AI, only the upper 20% of Fiverr workers got regular jobs amounting to a real income. The people who already dominated the market can now accept more contracts or work less, but behind them there is already a line of 2nd- and 3rd-best developers waiting for the freed-up contracts. There is no free niche for newcomers just because the tools made things easier - quite the opposite. Plus, it will soon become common knowledge how easy and cheap the production cycles have become. It will become so easy for clients to create the boilerplate themselves that they will just hire people for "cheap re-adjustments". I predict that the market as we knew it will end soon.

5

FomalhautCalliclea t1_jcdigp0 wrote

Although I agree with the criticism of doomerism and how this new influx of subscribers might influence this place, I always found the concluding quote by CS Lewis to be utterly vapid and stupid.

It overlooks the countless millenarianisms of the past (you might today call them doomerism), even when unwarranted - and also the tremendous terror humans experienced in the past.

He makes the same mistake he criticizes: thinking there is novelty, only in our reaction, when that is nothing new either.

And there is nothing reassuring in considering that a grim fate was already predestined for us. It is still unpleasant when lived. And it surely was for the sufferers of the faraway past.

What matters during time isn't time itself, but what happens during time.

>If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things

Ironically, this is a very defeatist reaction, one that calls for embracing the daily routine rather than revolting abruptly against it - some sort of "remain in your place" call, which isn't surprising when you read:

>praying

ranked among

>working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts

which says a lot about why this man can see being

> huddled together like frightened sheep

as the only reaction to a terrible danger and suffering.

>They may break our bodies (a microbe can do that) but they need not dominate our minds

With such thoughts, no wonder such a person can reassure themselves in any situation, especially if it allows them to wallow in the comfort of their resigned mind.

0

rdlenke t1_jcdgz54 wrote

Aside from a few "intense" recent reactions to GPT-4, my experience with this sub has been the opposite: blind optimism, a complete lack of discussion about the transition period between now and AGI (or more advanced AI tools), ignorance of or mockery toward genuinely important questions (alignment, legality, the artists' situation), people shouting UBI like it's a given, and a lack of non-European/American POVs.

So, basically, just the other side of the same coin, really.

The only way to achieve what you want is with heavy moderation (like /r/explainlikeimfive, /r/changemyview or similar subs).

1

Darth-D2 t1_jcddzyg wrote

Thank you for bringing this topic to the discussion. However, I think your post misses some crucial points (or does not highlight them enough).

To reiterate the definition that you have posted yourself: "[...] Accordingly, they might sometimes dedicate their lives to acting in ways they believe will contribute to its rapid yet safe realization."

The majority of active users of this subreddit seem to (1) see no risk associated with developing potentially unaligned AI, or (2) think that we can't do anything about it anyway, so we shouldn't care.

To steelman their view, most Redditors here seem to think that we should achieve the singularity as quickly as possible no matter what because postponing the singularity just prolongs existing suffering that we could supposedly easily solve once we get closer to the singularity. In their view, being concerned about safety risks may postpone this step (this is referred to as the alignment tax among AI safety researchers).

However, a significant proportion of prominent AI researchers are trying to tell the world that AI alignment should be one of our top priorities in the coming years. It is the consensus among AI safety researchers that this will likely be extremely difficult to get right.

Instead of engaging with this view in a rational, informed way, any safety concerns expressed on this sub are simply categorized as "doomerism", and people who are quite educated on this topic are dismissed as being afraid of change or technology (ironically, those who are concerned are often working at the cutting edge of these technologies and embrace technological change). Dismissing the concerns as "having a negative knee-jerk reaction by default whenever a development happens" is just irresponsible in my opinion and completely misses the point.

While not everyone can actively work on technical AI Alignment research, it is important that the general public is educated about the potential risks, so that society can push for more effective regulations to ensure that we indeed have a safe realization of advancing AI.

Robert Miles has a really good video about common reactions to AI safety: https://www.youtube.com/watch?v=9i1WlcCudpU&ab_channel=RobertMiles

EDIT: If someone is new to this topic and shows that they are scared, what is a better reaction than calling it doomerism? Direct them to organizations like the ones in the sidebar of this sub, so that they can see how others are working on making sure that AI has a positive impact on humanity.

9