Recent comments in /f/Futurology

Psychomadeye t1_ja74jrp wrote

Not exactly. There are many limits to this. For instance, it would require Moore's law to continue to hold true, and it won't (it's expected to fail somewhere in the next couple of years). These models can't really work outside of their training space, and space as a physical concept would need to change to fix that. Information can only travel so fast, and that's not going to be fixed either, because fixing it would technically be time travel. Some might say quantum computers can help, but as someone who is in this field, I can't imagine how chemistry simulations would help my model run better. Finally, models don't really understand things like true or false, or cause and effect, and there's no clear path to fixing that. There are more issues, but you probably get the idea.
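The training-space limitation is easy to demonstrate with a toy model (a made-up illustration, not any production system): fit a polynomial to noisy sine data on [0, π], then query it far outside that interval.

```python
import numpy as np

# Toy illustration: a model fit inside its "training space"
# breaks down badly outside it.
rng = np.random.default_rng(0)
x_train = np.linspace(0, np.pi, 50)
y_train = np.sin(x_train) + rng.normal(0, 0.01, 50)

# Degree-7 polynomial fit: nearly perfect in-distribution.
coeffs = np.polyfit(x_train, y_train, deg=7)

in_dist = np.polyval(coeffs, np.pi / 2)   # true value: sin(pi/2) = 1
out_dist = np.polyval(coeffs, 3 * np.pi)  # true value: sin(3*pi) = 0

print(f"inside training range:  {in_dist:.3f}")
print(f"outside training range: {out_dist:.3f}")
```

In-distribution the fit is nearly exact; at 3π the polynomial blows up, which is the toy version of a model confidently extrapolating garbage.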

These things are at best tools that can help people go faster. Those trying to replace workers may have some success in certain areas like call centers, but in reality it's not going to make sense to replace people, especially when you remember how massive these models are. You can buy five data centers to run one instance, or hire five employees to handle calls. And remember, you're going to need to provide a training space for each job you plan to replace, and you might not even have the data for that.

2

billtowson1982 t1_ja74aj4 wrote

Reply to comment by net_junkey in So what should we do? by googoobah

1.) Whether AI is sentient or not is almost irrelevant to its impact on jobs or pretty much any other aspect of society. Something can be plenty intelligent without being sentient, and even a rather dumb being can still be sentient. AI intelligence (in other words, capability) will be the main thing that affects society, not sentience.

2.) No AI today has the complexity of a brain by any meaningful measurement. Even a brief chat with ChatGPT is enough to show a person how stupid it is. Further, today's AIs are all absurdly specialized compared to biological actors: powerful, but in absurdly narrow ways.

1

billtowson1982 t1_ja73q1x wrote

Reply to comment by PO0tyTng in So what should we do? by googoobah

Most big-company CEOs are not the biggest shareholders in their companies. If AI really does get to the point where it can do almost every possible job more efficiently than any and all humans, then CEO jobs are no safer than anyone else's. Zero, one, a few, or in theory (but unlikely in reality) all people will make production decisions for the AI, and no one else will do anything of economic value whatsoever. That doesn't mean some humans won't "work" - humans will still be able to make nice wooden tables, for example, but in such a world the AI could make better tables faster, cheaper, and with less waste of resources. For a person to sell a table that they made, the buyer would have to want it because it was made by a human, despite it being inferior in every other way.

2

Shadowkiller00 t1_ja71ydj wrote

Just remember that sentience does not immediately imply an appreciation of art and beauty. Those come after other needs are taken care of. Think of babies and toddlers and how they perceive the world; think of the hierarchy of needs. An early AI will spend most of its days just trying to comprehend the world.

0

Lirdon t1_ja71rfg wrote

You’re assuming that the AI would possess a kind of consciousness that is recognizable to you. That will almost certainly not be the case, unless it is specifically designed to mimic the human psyche, or by some improbable miracle it spontaneously develops one, maybe through the process of deep learning.

But excluding those possibilities, it is very likely that an AI’s intelligence will be nothing recognizable to us. It might not interact with anything visual at all; it could be purely process-based, with no understanding of the physical world, where everything it can interact with is a software module.

I personally don’t think that an AI gaining consciousness will be an automatic threat to our survival; that depends on its role, authority, connectivity, and function. It may never develop a self-preservation imperative, where it tries to identify threats to itself, or an imperative to optimize its surroundings, to which we might be a nuisance.

In any case, it likely won’t be like us, able to love or care for us.

1

nolitos t1_ja6zx89 wrote

> This concept will sell millions of AI dogs. So what happens to the real dogs?

People will finally stop breeding and exploiting them for money and the egotistical desire to be "loved".

1

TheEverHumbled t1_ja6za4n wrote

The business model of extractive industries helps explain why this is a no for the foreseeable future, and very likely forever.

Mining and oil drilling operate under the constraints of physical reality, which drive costs (e.g., equipment and workers). Reserves exist in plenty of places that are simply impractical to extract at present prices with present technology.

The key point is that extractive businesses don't go everywhere and extract everything: they add the projects with the best potential for profit. As more material is extracted, prices would fall, making costlier extraction less profitable.
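The economics above can be sketched in a few lines (all numbers invented for illustration): only the reserves whose extraction cost sits below the market price get developed, and as the price falls, the marginal reserves drop out.

```python
# Toy sketch with made-up numbers: sites are ranked by per-unit
# extraction cost; a site is only worth developing while the
# market price exceeds its cost.
costs = [10, 25, 40, 80, 150]  # $/unit extraction cost per site

def profitable(sites, price):
    """Return the sites worth extracting at a given market price."""
    return [c for c in sites if c < price]

print(profitable(costs, 100))  # [10, 25, 40, 80]
print(profitable(costs, 30))   # price falls -> only [10, 25]
```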

The moon is plenty massive on any foreseeable timescale. By the time lunar mining reaches any noticeable scale, humanity would likely have spread its mining activity out to the asteroid belt (assuming, of course, that civilization can reach such a point), and most resources would have cheaper sources there (sitting even closer to a 0 g environment).

1

Psychomadeye t1_ja6z382 wrote

This is why the UN banned autonomous weapons like this. It would be legal if the weapon had 100% accuracy in facial recognition, but otherwise its use would be a war crime.

5

Psychomadeye t1_ja6yjuy wrote

As a person who works with these things: there are a lot of limitations to these technologies that virtually everyone ignores. These things are correlation engines. They're going to take jobs the same way the steam engine took jobs.
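The "correlation engine" point can be shown with a toy example (entirely made-up numbers): two series that merely share a trend correlate almost perfectly, even though neither causes the other.

```python
import numpy as np

# Toy illustration (not any real model or dataset): correlation
# picks up any shared trend, with no notion of cause and effect.
years = np.arange(2000, 2020)
ice_cream_sales = 100 + 3.0 * (years - 2000)   # made-up upward trend
shark_sightings = 20 + 1.5 * (years - 2000)    # unrelated upward trend

r = np.corrcoef(ice_cream_sales, shark_sightings)[0, 1]
print(f"correlation: {r:.2f}")  # prints 1.00, yet neither causes the other
```

A correlation engine would happily "learn" that ice cream predicts sharks; it has no machinery for asking why.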

3

zoogle15 t1_ja6yexk wrote

One human only has so much time and influence. Make the most of your life. Work to make the world a better place: in yourself, in your home, in your community.

People have lived through all that you mention and much, much worse. Humans keep living and thriving despite all the tragedy. Don’t get stuck on anything, ever; keep moving forward. You do make a difference.

2