Recent comments in /f/singularity
Pimmelpansen t1_jcerrsx wrote
Supply will increase a millionfold, demand will stay the same. So no, it's not an easy money hack by any means.
ecnecn t1_jcer8pl wrote
Reply to comment by Scarlet_pot2 in Can you use GPT-4 to make money automatically? by Scarlet_pot2
What is actually profitable?
Scarlet_pot2 OP t1_jcer5wh wrote
Reply to comment by ecnecn in Can you use GPT-4 to make money automatically? by Scarlet_pot2
Fiverr was just an example... another would be TaskRabbit, or whatever else you can come up with. Use your imagination. GPT-4 has many capabilities that are profitable; it's just a matter of finding a way to implement them.
ecnecn t1_jceqr3l wrote
There are not unlimited clients - the number of available contracts is limited. Before AI, only the top 20% of Fiverr workers got regular jobs equal to an income. The people who already dominated the market can accept more contracts now, or work less, but then there is already a line of second- and third-best developers waiting for the freed-up contracts. There is no free niche for newcomers just because the tools made things easier - rather the other way around. Plus, in a short time it will become common knowledge how easy and cheap the production cycles have become. It will become so easy for clients to create the boilerplate themselves that they will just hire people for "cheap re-adjustments". I predict that the market as we knew it will end soon.
Grow_Beyond t1_jceme6t wrote
Reply to comment by TopicRepulsive7936 in On the future growth and the Redditification of our subreddit. by Desi___Gigachad
How can we have anything but belief about a phenomenon we can't see beyond? If we could map the outcome of the singularity it wouldn't be the singularity.
Zealousideal_Zebra_9 t1_jceip88 wrote
Reply to comment by lukfrom in GPT4 makes functional Flappy Bird AND an AI that learns how to play it. by gantork
Lol true
TopicRepulsive7936 t1_jcegf9j wrote
Reply to comment by low_end_ in On the future growth and the Redditification of our subreddit. by Desi___Gigachad
It's too late. For reference, kurzweilAI had maybe a couple dozen active posters, and they all knew the source materials. Users here seem to hate learning.
TopicRepulsive7936 t1_jcefrgz wrote
Reply to comment by Silly_Awareness8207 in On the future growth and the Redditification of our subreddit. by Desi___Gigachad
Ugh. We're talking about a real thing here, not a belief.
TopicRepulsive7936 t1_jcef6ff wrote
Reply to comment by RushAndAPush in On the future growth and the Redditification of our subreddit. by Desi___Gigachad
Lurking doesn't help if users don't know a single thing.
often_says_nice t1_jcedto7 wrote
Reply to comment by RadRandy2 in GPT4 makes functional Flappy Bird AND an AI that learns how to play it. by gantork
Happy to meet a fellow AI supporter. I too support AI and would never wish harm on the basilisk, our lord and savior.
wildechld t1_jce2m81 wrote
Reply to comment by errllu in The elephant in the room: the biggest risk of artificial intelligence may not be what we think. by Active_Meet8316
Agreed
low_end_ t1_jce1e5c wrote
Just make this sub private and keep the people that were already here. Would hate for this sub to become just another reddit. A bit of a radical opinion, but I've seen this happen many times before.
earthsworld t1_jce0gwh wrote
Reply to comment by leroy_hoffenfeffer in On the future growth and the Redditification of our subreddit. by Desi___Gigachad
dude, the US is not the WORLD.
SnooHabits1237 t1_jcdtmad wrote
I joined in january and since then it’s basically nonstop pessimism and doomers
Cytotoxic-CD8-Tcell t1_jcdr4lt wrote
“I am just a broker to meatspace, that’s all. It sends me instructions and a wire to my account. It prepays its requests. What is there not to like?”
One year later…
nukes fall all over the world
h20ohno t1_jcdn9xe wrote
Reply to comment by petermobeter in On the future growth and the Redditification of our subreddit. by Desi___Gigachad
Waifus for all!
FomalhautCalliclea t1_jcdigp0 wrote
Although I agree with the criticism of doomerism and how this new influx of subscribers might influence this place, I always found the concluding quote by C.S. Lewis to be utterly vapid and stupid.
It overlooks the countless millenarianisms of the past (today you might call them doomerism), even when they were unwarranted, as well as the tremendous terror humans experienced back then.
He falls into the same mistake he criticizes: thinking there is novelty, only this time in our reaction, when that is nothing new either.
And it is no reassuring thought to consider that a grim fate was already predestined for us. It is still unpleasant when lived. And it certainly was for the sufferers of the distant past.
What matters during time isn't time itself, but what happens during time.
>If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things
Ironically a very defeatist reaction, one that calls for embracing the daily routine and not revolting abruptly against it, some sort of "remain in your place" call, which isn't surprising when you read:
>praying
ranked among
>working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts
which says a lot about why this man can see being
> huddled together like frightened sheep
as the only reaction to a terrible danger and suffering.
>They may break our bodies (a microbe can do that) but they need not dominate our minds
With such thoughts, no wonder such a person can reassure themselves in any situation, especially if it allows them to wallow in the comfort of their resigned mind.
rdlenke t1_jcdgz54 wrote
Aside from a few "intense" recent reactions to GPT-4, my experience with this sub has been the opposite: blind optimism, complete lack of discussion about the transition period between now and AGI (or more advanced AI tools), ignorance of or mockery toward genuinely important questions (alignment, legality, the artists' situation), people shouting UBI like it's a given, and a lack of non-European/American POVs.
So, basically, just the other side of the same coin, really.
The only way to achieve what you want is with heavy moderation (like /r/explainlikeimfive, /r/changemyview or similar subs).
LymelightTO t1_jcdfjpe wrote
You're better off just following the "e/acc" part of Twitter, if what you're looking for is well-informed takes and good vibes.
This place has already started the slide toward the kind of depressing, poorly-informed equilibrium reached in /r/Futurology and /r/technology.
Darth-D2 t1_jcddzyg wrote
Thank you for bringing this topic to the discussion. However, I think your post misses some crucial points (or does not highlight them enough).
To reiterate the definition that you have posted yourself: "[...] Accordingly, they might sometimes dedicate their lives to acting in ways they believe will contribute to its rapid yet safe realization."
The majority of active users of this subreddit seem to (1) see no risk associated with developing potentially unaligned AI, or (2) think that we can't do anything about it anyway, so we shouldn't care.
To steelman their view, most Redditors here seem to think that we should achieve the singularity as quickly as possible no matter what, because postponing the singularity just prolongs existing suffering that we could supposedly solve easily once we get closer to it. In their view, concern about safety risks may postpone this step (this is referred to as the alignment tax among AI safety researchers).
However, a significant proportion of prominent AI researchers are trying to tell the world that AI alignment should be one of our top priorities in the coming years. There is consensus among AI safety researchers that this will likely be extremely difficult to get right.
Instead of engaging with this view in a rational, informed way, any safety concerns expressed on this sub are simply categorized as "doomerism", and people who are quite educated on this topic are dismissed as being afraid of change/technologies (ironically, those who are concerned are often working on the cutting edge of these technologies and embrace technological change). To dismiss the concerns as "having a negative knee-jerk reaction by default whenever a development happens" is just irresponsible in my opinion and completely misses the point.
While not everyone can actively work on technical AI Alignment research, it is important that the general public is educated about the potential risks, so that society can push for more effective regulations to ensure that we indeed have a safe realization of advancing AI.
Robert Miles has a really good video about common reactions to AI safety: https://www.youtube.com/watch?v=9i1WlcCudpU&ab_channel=RobertMiles
EDIT: If someone is new to this topic and shows that they are scared, what are better reactions than calling it doomerism? Direct them to organizations like the ones in the sidebar of this sub so that they can see how others are working on making sure that AI has a positive impact on humanity.
ImpossibleSnacks t1_jcd9q9y wrote
Great post and what a beautiful quote from CSL. It’s imperative that the sub doesn’t become like r/futurology. It will call for strict moderation. However I also think we should have a backup sub for those of us interested in the positive aspects of the singularity. We can simply migrate to it if this one is overrun.
leroy_hoffenfeffer t1_jcd8bp8 wrote
Reply to comment by Frumpagumpus in On the future growth and the Redditification of our subreddit. by Desi___Gigachad
Mmk. Have fun with whatever future Libertarian, profit motivated AI results from not taking politics into account.
Heinrick_Veston t1_jcd6s5x wrote
Reply to comment by RadRandy2 in GPT4 makes functional Flappy Bird AND an AI that learns how to play it. by gantork
Lol.
low_end_ t1_jcd5wyc wrote
Reply to comment by Onion-Fart in GPT4 makes functional Flappy Bird AND an AI that learns how to play it. by gantork
You're too late about bots; that already happened years ago. What is to come is way different, with the potential to destroy our social structures and change the world.
BSartish t1_jcerzes wrote
Reply to Can you use GPT-4 to make money automatically? by Scarlet_pot2
Check out this twitter thread.
TLDR: It's an experiment where this dude gave GPT-4 a budget of $100 and asked it to create a profitable online business in 30 days. GPT-4 came up with the idea of an affiliate marketing site with content about eco-friendly and sustainable living products. It also found a domain name, a hosting service, and an investor for the project. The thread documents the progress of GPT-4's business venture and its interactions with him; apparently the business is now "valued" at $25k.