Recent comments in /f/Futurology

omega1212 t1_ja87py4 wrote

That's fair. I think I just trust crowds more than elites on ethical questions (for logical ones it's the reverse). Crowds are more likely to imagine themselves in both the bad and the good positions of hypothetical social arrangements.

And no, I haven't! I largely agree with that statement about aligning incentives, if not for the tendency toward regulatory capture; I'm not sure how you account for that.

1

trippedbackwards t1_ja8779b wrote

I think that's his point! Some people are suggesting OP practically ignores all the dire projections. He's saying that just because we've survived as a species so far doesn't mean one of these real problems couldn't have unprecedented results. Sure, his uncle is still alive. But he's very lucky. Smoking is, in fact, bad for you and kills millions of people.

He's basically illustrating "survivorship bias".

1

Psychomadeye t1_ja85kv4 wrote

Hey, real quick: say I spent a year's salary on a robot dog. What can it actually do? You'd need at least five for every worker to match the shift time. So I'm wondering what the point is of picking up five of these dogs when I could just pay a worker for five years.
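The break-even math in that comment can be sketched in a few lines. The salary figure is a placeholder assumption; the ratios (one dog = one year's salary, five dogs per worker) come from the comment itself.

```python
# Toy break-even sketch using the comment's own figures:
# one robot dog costs a year's salary, and you need five dogs
# per worker to match a worker's shift coverage.
ANNUAL_SALARY = 50_000          # assumed figure, purely illustrative
DOG_PRICE = ANNUAL_SALARY       # "a year's salary" per dog
DOGS_PER_WORKER = 5             # to cover the same shift time

fleet_cost = DOG_PRICE * DOGS_PER_WORKER
years_of_labor_equivalent = fleet_cost / ANNUAL_SALARY

print(f"Fleet cost: ${fleet_cost:,}")                      # $250,000
print(f"Years of one worker's pay: {years_of_labor_equivalent:.0f}")  # 5
```

Whatever the salary, the ratio is what matters: the fleet always costs five worker-years up front, before maintenance or downtime.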

1

peadith t1_ja85a8w wrote

Not the way it can be done now. What if you can get everyone to know the same thing instead? That's what is starting to happen. There will always be the opt-out plan, but the stakes will be high. Belief when you can know instead (also called ignorance) is largely an animal flaw that won't hold machines back.

4

Psychomadeye t1_ja851np wrote

>Besides, the workflow is designed for human hands and brains, not for AI.

If we want a non-human workflow, we'll need a massive amount of data on it for the model to learn the correlations. But I'm at a loss as to where anyone would even get data on a non-human workflow. These specifically aren't thinking machines. They just know how to generate a point on a graph that looks like the rest of the points. They're a really, really good dart player. This is why I call them correlation engines. They can't replace workers on their own, because the moment the rules of the game change even slightly, it'll be months or years of retraining before they're ready again.
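The "generate a point that looks like the rest of the points" idea can be shown with a deliberately tiny sketch: an ordinary least-squares line fit over made-up data, then a new point placed wherever the learned correlation says it should go. All numbers here are invented for illustration.

```python
# Minimal "correlation engine" sketch: the model doesn't understand
# the task, it just learns the pattern in existing points and emits
# a new point that fits that pattern.

def fit_line(points):
    """Ordinary least squares for y = a*x + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# "Training data": toy points lying near y = 2x + 1
data = [(0, 1.0), (1, 3.1), (2, 4.9), (3, 7.0)]
a, b = fit_line(data)

# "Generation": place a new point where the learned correlation
# predicts the rest of the points would be -- no understanding involved.
new_x = 4
new_y = a * new_x + b
```

If the underlying rule changes (say the real relationship shifts), the fitted line keeps confidently producing points that match the *old* pattern until it's refit on new data, which is the retraining problem described above, writ small.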

>Humans are screwed then, for their brains are fairly limited.

Our neurons don't suffer the same issues: the sheer size of a human brain, expressed as a neural network, is larger than we can currently hope to compute, yet somehow training time is seconds rather than years, and we have transfer learning at a scale that artificial networks don't.

>It might be more reasonable to have no TPS reports at all as an example and come up with something that is better suited to AI capabilities.

We'd need data to train the AI on this new system, which means millions of examples. Then it can spend a few years learning that data. And we haven't even gotten into costs yet: those instances will be expensive to run. Newer models might be faster, but they're not likely to invent time machines or subatomic computers without examples of those things.

1

zachster77 t1_ja84vi0 wrote

I think I get what you mean. But I’m not sure “the wisdom of the crowd” serves us well in situations like these. Popularity contests only reward the currently popular.

Have you read Kim Stanley Robinson? He (among others) sometimes writes about Ecological Economics. Tying capitalistic rewards to systems that benefit the planet (and us as one of its animals), could put our long term goals in alignment with our short-term baser instincts.

4

Thin-Limit7697 t1_ja84v0c wrote

>there will be a company that makes AI dogs that don't pee, poop, or bark, it speaks human

Did you know that both Neopets and Digimon were already invented, like, decades ago?

2

ca_kingmaker t1_ja84elv wrote

Lol called it. Let me guess, you don’t live in Canada anymore?

Your criticism in this case is that Canada doesn't have a space program outside of partnerships, while ignoring that Europe's is a cooperative space program of countries, many of which have larger populations than Canada.

As they still haven’t launched any of their own people into space.

It’s really quite a dumb criticism.

0

lord_nagleking t1_ja84clv wrote

Some kind of UBI will be necessary, or there will be food and water riots...

I also think it will be more like 15 years. Before AI takes all of our jobs there's going to be a renaissance of new AI tools and "creators," making their own art and videogames and movies, all just by interacting with their "personal assistant."

That will wipe out 20–50 percent of white collar jobs within 10 years.

The robotics revolution, in conjunction with AI, will eventually erode the blue collar jobs. And that's when unemployment is going to get really bad.

The only people who will still be working will be "executives" and "politicians," and they will only be meat puppets.

Unless, of course, we do something about it heh

1

Shadowkiller00 t1_ja83ceh wrote

You have one clear data point on sentience. Your own. When you first became cognitively aware, did you care about art?

We assume life will be carbon-based because we are carbon-based and we don't have any other data points for other types of life. If you're going to speculate on sentience, you must use what you know, as that's the only good data point you have. Since the only creatures we know of with sentience are humans, you must start the conversation there. Any other conversation has no basis in reality and is just speculation without foundation.

1

HS_HowCan_That_BeQM t1_ja82sab wrote

When you learned idioms, you probably looked at them from the encountered point of view. Meaning: I'm reading this Russian text and this idiomatic phrase occurs. Oh, that's the equivalent of saying this in my native tongue.

But it is trickier to be looking at an idiomatic native text and intuit: I must replace this with some equivalent when translating this to Russian.

My favorite book about the vagaries and pitfalls of translation is Douglas Hofstadter's "Le Ton beau de Marot: In Praise of the Music of Language". It covers the difficulties of translating idioms, puns, et al. And that doesn't even touch on whether translating a poem means "word-for-word", "rhyme-scheme", "thematically", or some other criterion.

So, maybe idioms are not a difficult test of an AI's competency. But I still feel the fundamentals of natural language will be part of the determination.

Edited: to remove a redundant phrase (idiomatic native idiom).

1

Nebula_Zero t1_ja82emi wrote

The robotic arm from Boston Dynamics is already replacing jobs at DHL. It runs on AI, since it has to adapt to real-world objects and handle stock dynamically. It's not strictly AI alone, since it's a robot too, but it's already replacing jobs, not just changing work.

1

KeaboUltra t1_ja829kd wrote

Yes. If it could think independently, there are multiple outcomes in which it would appreciate its creator, just as there are ways it would not, or would outright hate them. Its affection probably wouldn't be recognizable, as it's a machine with a completely different perception and cognitive ability, but that doesn't mean it couldn't find a way to communicate it to you. It's an AGI, modeled on human likeness; the thing would be smart, or would make itself smart. It could analyze human behavior, speech, and emotion to learn how to convey that to you, the same way we try to do with animals. People sometimes act like an animal based on what we've learned about them, or learn what an animal likes so they can express it.


>We keep sheep for what they provide for us and, moreover, we exterminate bugs that we find disgusting.

It's not that black and white; you can say the same thing about cows, pigs, chickens, or any other animal eaten or used for its byproducts. Not everyone treats pet animals well. If someone saw a random sheep on a farm, they'd probably pet it and treat it nicely. Bugs are the same: we exterminate them because they're pests that destroy our homes or get into our food, yet people keep or admire all sorts of insects, like butterflies, caterpillars, beetles, and ants. It's really a dice roll whether AI turns out kind, mean, indifferent, or just like us.

1