Recent comments in /f/singularity

HeinrichTheWolf_17 t1_j9wkr59 wrote

People have been afraid of change since we started carving sticks and painting stick figures on walls. It’s good to be cautious, but history has proven time and time again that the fear of change is unwarranted. Evolution is necessary for our advancement, both as a species physically and as a consciousness.

I’m excited, personally.

12

play_yr_part t1_j9wjphf wrote

this. IDK the timeframe for completely autonomous self-driving, as it seems to have been "within a decade" for like a decade now lol, but with Tesla's self-driving at least, recent updates have sometimes been one step forward, two steps back.

Entirely possible another car maker's version could change that in a flash though.

−1

beders t1_j9wj8hw wrote

The operator doesn’t know Chinese. Do I need to spell out the analogy to chatGPT?

ChatGPT is great at word embeddings and completion, but is an otherwise dumb algorithm. Comparing that to humans’ ability to express themselves with language is useless.
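For what it's worth, "completion" in the statistical sense is easy to illustrate. A toy bigram model (corpus and names invented here for illustration) captures the core objective of next-word prediction, even though LLMs pursue it at vastly larger scale:

```python
from collections import Counter, defaultdict

# Toy next-word "completion": count bigrams in a tiny corpus, then
# always emit the most frequent follower of a given word.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def complete(word: str) -> str:
    # Pick the follower seen most often after `word` in the corpus.
    return followers[word].most_common(1)[0][0]

print(complete("the"))  # 'cat' — "cat" follows "the" twice, more than any other word
```

No understanding is involved; the model just replays frequency statistics, which is the gulf the comment is pointing at.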

I mean, if you don’t get the Chinese room thought experiment, you might think Eliza is a master of psychology.
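The Eliza point is easy to demonstrate: the original program was just keyword pattern matching with canned response templates. A minimal sketch in Python (rules and wording invented here, not Weizenbaum's actual script):

```python
import re

# ELIZA-style responder: pure pattern matching, zero understanding.
RULES = [
    (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            # Echo the matched fragment back inside a canned template.
            return template.format(*m.groups())
    return "Please go on."  # default when nothing matches

print(respond("I am worried about AI"))  # How long have you been worried about AI?
print(respond("The weather is nice"))    # Please go on.
```

It can feel eerily attentive in conversation, which is exactly why it fooled people — and exactly why fluency alone proves nothing about understanding.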

0

sumiveg t1_j9wd0jy wrote

You've given a lot of examples of how people express their lack of enthusiasm, but can you give as many examples of how AI will change things?

I also get frustrated by people who don't realize that we're on the cusp of something utterly transformative. But the truth is, I don't actually understand how our world will change and what that will look like. I know my current job as a content designer will go away. I know that ghost writers, copywriters, and all the other jobs I've had will vanish.

But I don't know what will come in their place.

I feel like I did at the start of the internet. Back then I knew something big was happening, but I had no idea that I'd be looking up directions on a phone I held in my hand. I didn't know I'd be ordering dinner from a laptop and watching movies streamed to my TV. I just knew that big things were coming and nothing would be the same.

5

Difficult_Review9741 t1_j9wb1if wrote

Technical progress is a given, but remember that within those same years of immense progress, many ideas also seemed imminent and then fizzled out. We don't live in The Jetsons.

Engineering is hard. Many approaches have limits that are undetectable until you hit them.

LLMs are really impressive, but the reality is that they have very few practical use cases at this point. So why expect people to care that much about it? Future progress is not inevitable.

By the way, there are tons of applications of AI/ML that have been immensely more impactful to society than LLMs have been. And yet no one ever seems to talk about those, because they aren't flashy.

10

Deadboy00 t1_j9w97y5 wrote

True...but that's not the central issue.

Copyright requires human authorship. Even if you could copyright a prompt (you can't), the generated output would not be copyrightable.

Sure, there are the ongoing lawsuits against AI firms that use copyrighted works to generate their own product. Regardless of which side you want to come out on top, there is a lot of merit to the suits.

1

DukkyDrake t1_j9w96tv wrote

>We’ve gone from horse and buggy to space stations in 100 years.

>What do people not understand about exponential growth?

None of that has anything to do with whether the current batch of AI tools is fit for a particular purpose, or with if/when those tools will be made sufficiently reliable for unattended operation in the real world.

Some people fail to understand that just because you can imagine something in your mind, it does not necessarily mean others can engineer a working example of it within our personal time horizon, or ever.

19

Sandbar101 t1_j9w89im wrote

I could not have said it better myself. You are absolutely, completely 100% right. It makes you feel like reality is gaslighting you, but you know you’re right. And you ARE right. We have maybe 40 years till the end of human society as we know it. Whatever comes next will be so radically different it will be unrecognizable. And honestly, I expect it to be closer to 20 years. We fundamentally cannot imagine the scope and scale of what AI is capable of. That’s the whole point of calling it the Singularity.

Rest assured, we’re here with you, and we understand.

21

Nukemouse t1_j9w7vss wrote

You know that Nothing, Forever show, and how it looks all buggy and bad and really basic? That's because it was made intentionally primitive, using primitive tools and a low budget. That isn't the best AI can do, it's the WORST AI can do. If that surprisingly watchable thing is possible using effectively the worst and most primitive tools we have available, then a proper attempt by whatever versions of these tools exist in a year or two will be able to make just about anything. Not just TV shows.

15

adt t1_j9w6x17 wrote

Leave them be.

Listen to the experts.

Connor Leahy was the first to re-create the GPT-2 model back in 2019 (by hand; he knows the tech stack, and OpenAI lined up a meeting with him and told him to back off). He co-founded EleutherAI (open-source language models), helped with the GPT-J and GPT-NeoX-20B models, advised Aleph Alpha (Europe's biggest language model lab), and is now the CEO of Conjecture.

Dude knows what he's talking about, and is also very careful about his wording (see the NeoX-20B paper s6 pp11 treading carefully around the subject of Transformative AI).

And yet, in Nov/2020, he went on record saying:


>“I think GPT-3 is artificial general intelligence, AGI. I think GPT-3 is as intelligent as a human. And I think that it is probably more intelligent than a human in a restricted way… in many ways it is more purely intelligent than humans are. I think humans are approximating what GPT-3 is doing, not vice versa.”
— Connor Leahy, co-founder of EleutherAI, creator of GPT-J (November 2020)

42