Recent comments in /f/Futurology

Cryptizard t1_ja4yhua wrote

You have to just study what you personally find interesting and fulfilling. Generally, that is a good way to get a job as well, because the more passionate you are about a subject, the more you want to learn about it, and the more competent you will be. So if the singularity comes and nobody has a job, then at least you spent your time on something that was worthwhile to you.

2

Cryptizard t1_ja4y67r wrote

Reply to comment by Mason-B in So what should we do? by googoobah

You are really not following what is going on, or else you have closed your mind so much you can't process it. 90 years for general intelligence? Buddy, 30 years ago we didn't even have the internet. Or cell phones. Computers were 0.001% as fast as they are now. And technological progress speeds up; it doesn't slow down.

I don't think it is coming tomorrow or anything, but look at current AI models and tell me it will take three more internets' worth of advancement to make them as smart as a human. Humans aren't even that smart!

>Skipping the obvious answer of "programmers will be the last people to be programmed out of a job."

This is a really terrible take. Programmers are going to be among the first to be replaced, or at least most of them will be. We already have AI models doing a big chunk of programming. Programming is just a language problem, and LLMs have proven to be really good at language.

1

Vandosz t1_ja4xocu wrote

Reply to comment by mrbittykat in So what should we do? by googoobah

I'm in the same boat. I don't see how the system will work. The only way is a complete rethinking of humanity. Maybe some sort of UBI, with the condition that you accomplish tasks in your hobbies and get some social contact that way. You can't just tell people to sit at home; people will absolutely get depressed.

2

aka_mythos t1_ja4xcsl wrote

AI is only that way if you look at it as something of a monolithic entity. In a world with AI there will be many AIs, each with its own characteristics and identity. From that we'll get variability in simulated creativity. If instead you think of AI as a tool, what you have is something that can transform and actualize the simplest and smallest of human creative thoughts and ideas. The least creative and skilled people will be able to see their ideas given form. A person could write a simple poem and see it transformed into visuals, or extrapolated into a full story, or rendered into a movie. If every similarly capable AI were prompted to do the same task, the results would likely be distinct. Yes, AI acting in isolation will make many things even more disposable than they already are, but individuals can be enabled to create works instilled with personal value. Customization and personalization of media is what AI can bring us.

1

phillythompson t1_ja4xclz wrote

I’m struggling to see how you’re so confident that we aren’t on that path, or at least close to it.

First, LLMs are neural nets, as are our brains. Second, one could make the argument that humans also take in data and output “bullshit.”

So I guess I’m trying to see how we are different, given what we’ve seen thus far. Again, I’m not claiming we are the same, but I’m not finding anything showing why we’d be different.

Does that make sense? It seems like you’re making a concrete claim of “these LLMs aren’t thinking, and that’s certain,” and I’m saying, “How can we know that they aren’t similar to us? What evidence is there to show that?”

1

Vandosz t1_ja4xakx wrote

If you are trying to organize your life around robots taking the jobs you might want to do, good luck. If a singularity does happen, probably any job you can think of doing will be replaceable. It's impossible to plan for, so just do what you enjoy and make money doing it. And that's that. There's nothing else you can do.

1

awfullotofocelots t1_ja4wn0p wrote

Reply to comment by IcebergSlimFast in So what should we do? by googoobah

The past is the only empirical evidence we have, and our remarkable ability at pattern recognition is what's helped us become apex predators and survive as a species to this point. Don't throw the baby out with the bathwater.

6

Prophayne_ t1_ja4wgua wrote

I have an honest question: why does a sub labeled Futurology like to fearmonger about so many technological advancements? Y'all need to rename the sub to preppers or something.

Things are going to change with the rise of AI; I'm choosing to go into it glass half full.

1

lordrognoth t1_ja4wabc wrote

You underestimate the speed at which AI is already evolving. Technology is in development for years before it goes mainstream, and companies have already been using different types of AI for years. We will see massive displacement of workers within the next 3-5 years. First AI will take most of the office and creative jobs, then the Teslabots will take all the blue-collar jobs. People have already been losing their jobs to AI; they just didn't know it.

6

gulgin t1_ja4w4t6 wrote

Apologies, I assumed you were talking about fusion. Either way, the exact same arguments apply. Fission is still necessarily much more mechanically complicated than solar and will never be as reliable or maintenance-free. I am also not sure how you are considering nuclear safer than solar, but in the long run the safety issues get solved either way, so I wouldn’t hold that against either technology. Carbon neutrality is also a very complicated question; environmental impact is a difficult, if not impossible, thing to judge holistically.

Either way, there will always be situations where solar is a better energy production method than any kind of nuclear, and there will always be situations where any kind of nuclear is better than solar. As the technologies develop, that crossover point will swing back and forth.

2

Really_McNamington t1_ja4vs33 wrote

Look, I'm reasonably confident that there will eventually be some sort of thinking machines, and I definitely don't believe thinking is substrate-dependent. That said, nothing we're currently doing suggests we're on the right path. Fairly simple algorithms output bullshit from a large dataset. No intentional stance, to borrow from Dennett, means no path to strong AI.

I'm as materialist as they come, but we're nowhere remotely close and LLMs are not the bridge.

1