Recent comments in /f/Futurology

Yard-of-Bricks1911 t1_jcb44me wrote

It's easy to say that we can build a robot to do anything and it'll be AI controlled. Whether that works in practice or not, time will tell.

You replace the guy putting caps on toothpaste tubes with a machine, then the guy you fired learns how to fix the machine and has a job again. Until we build an AI bot to do that too, I suppose.

The Cloud and all such things we do in datacenters often require obscure manual work, which again is easy to say "nope, we will do that with AI/robotics"...and then see how well that would work. Cabling a rack would be interesting to watch their thought process: realizing that a PDU line cord isn't attached and having to reach deep behind a whole crap ton of cables to get it attached properly...I suppose a humanoid could do that if trained properly.

So the doom & gloom scenario is we all have nothing to do while robots & AI do it all for us, and we live with their bad decisions just like we do with human decisions. Cool. And what's our general stipend to be able to buy anything and support ourselves? Or will we just let most of the population starve if they weren't wealthy before AI took their jobs?

History not yet written, but I do feel like a lot of this is moving too quickly. It seems all about eliminating humans and cutting payrolls.

1

minterbartolo t1_jcb3omt wrote

The breadth of the client list doesn't matter; hacking and obtaining one patient's notes/data is just as egregious as hundreds. You claimed people could not get hacked or manipulated, but then pivoted to "well, the impact is not as widespread with a person vs AI" when confronted with facts disputing your claim that therapy is safer with a human. Where do you want to move the goalposts to next?

2

bound4mexico t1_jcaywmy wrote

Not true. We find and harness uninterested third parties in this manner all the time. Judges, witnesses, notaries, juries, you get the picture. We could make it an official job, and make it people from different counties/states/nations/planets, to make them even less likely to be "interested".

1

Cdn_citizen t1_jcaqglm wrote

Keyword: “if”. Which most don’t. What you and all the others on here don’t understand is that the reach of an AI therapist is much greater than that of a human therapist.

Hence why, for example, people don’t break into convenience stores to steal their customer data but will hack Facebook or Uber servers.

Man this crowd is dense.

0

Captain_Quidnunc t1_jcakff9 wrote

K.

You are listing a bunch of things that are completely irrelevant.

Nobody cares if AI gives them warning messages. And AI only gives you warning messages while the people who programmed it are worried about getting sued.

And it's not legally possible to sue an internet company, due to Section 230 of the Communications Decency Act. So if consumers don't like the warnings and they decrease profits, the warnings will disappear.

Irrelevant.

Nobody thinks "real therapists" are effective to begin with. So they won't really expect AI therapists to be much, if any, better, and the bar for acceptance is remarkably low. And it's impossible to sue a "real therapist" if someone commits suicide while under their care.

So again, irrelevant.

If everyone who needed a therapist tried to get care from "real therapists," there would be a shortage of "real therapists" on the order of 30,000 providers at a minimum. Average wait times are already approximately 4-6 months just to get an appointment, 70% of therapists in most areas refuse to accept new clients, and most insurance makes it near impossible to get reimbursed.

So to the average person, seeing a "real therapist" isn't even an option.

And last and most important, healthcare in this country is a for-profit industry. The largest expense to any corporation is salary paid to skilled workers. And the more skilled workers they can eliminate from payroll, the more investors make.

So just like all other white collar work, the millisecond a company can fire every single skilled worker and replace their work with a free computer program, it will. Because by doing so, the board gets a raise.

And they are well aware that we changed corporate law during the Bush administration to make it impossible for individuals to sue companies for anything. The courts have upheld this ever since.

So there aren't enough "real therapists" to meet demand in the first place.

Nobody cares about the warnings other than the annoyance and they won't last long.

Businesses profit from AI therapists and lose money creating or hiring more "real therapists".

And no company needs to, or does, fear getting sued, because it's not possible to sue them.

Therefore the career "real therapist" will not survive the first round of mass layoffs any more than "real radiologist" or "real computer programmer".

It's a dead career. With a shelf life of approximately 3-5 years.

−1