Recent comments in /f/Futurology

skillywilly56 t1_j9lhcnn wrote

Terrorism 101: how to be the very best terrorist you can be! From constructing your very own IED to mass shootings, we can help you kill some innocent people! Written by Khalid Sheikh Mohammed

And blazoned across the front of the bookstore, on bus-stop ads, billboards, radio and TV spots: "New York Times bestseller!" "10/10" —some random book reviewer. "The ultimate guide to help you up your terrorist game" —Goodreads. "If you read this…"

Terrorist-type activity increases… could this be linked to the sales of this book, which you advertised heavily?

No, we just sell books, not content; the content is the problem, not the advertising or the sale of the book.

But you wouldn’t have been able to make all those sales without advertising…

We take no responsibility for the content.

But you made money from the content?

Yes

But no one would’ve known about the book if you hadn’t advertised it and marketed it heavily.

We can’t know that for sure, but we have a responsibility to our shareholders to make a profit any way possible…

Even by advertising harmful material?

Yes

0

CesareGhisa t1_j9lfss8 wrote

Software like ChatGPT just takes text and reshuffles it. It may talk about emotions, but it’s just reshuffled text. It does not think; it does not understand anything. It’s silicon, a piece of plastic. It’s ridiculous even discussing it.

0

FuturologyBot t1_j9lc28e wrote

The following submission statement was provided by /u/wsj; see the OP comment linked below.


Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/1199t01/ai_in_the_workplace_is_already_here_the_first/j9l5vbt/

1

MagicManTX84 t1_j9l9zo7 wrote

Messengers will stay around for a while, until their AI can figure out that your AI is filtering out the messages you don't want to see. Good AI takes you off the list and politely goes away. Bad AI tries to do something radical to force you to see its messages. Which wolf wins?

2

EBJLEnjoyer t1_j9l7v67 wrote

>Amazing! I think life extension is coming sooner than people think. I hope it comes in time for my parents to take advantage of it, too.

Me too!

>Will people please stop the billionaire circlejerk, no medicine has ever been available only to billionaires. Altered Carbon is not a documentary

People have to realize that billionaires are businessmen first and foremost. Businessmen want to make money more than anything else, and an anti-aging drug would absolutely make them trillionaires.

1

wsj OP t1_j9l5vbt wrote

Call centers are the testing grounds for a future workplace where AI plays more and more of a role — whether human employees like it or not.

From Lisa Bannon:

>A new generation of artificial intelligence is rolling out across American workplaces and it is prompting a power struggle between humans and machines.
>
>Recent advances in technologies such as ChatGPT, natural-language processing and biometrics, along with the availability of huge amounts of data to train algorithms, have accelerated efforts to automate some jobs entirely, from pilots and welders to cashiers and food servers. McKinsey & Co. estimates that 25% of work activities in the U.S. across all occupations could be automated by 2030.
>
>Today, however, AI’s biggest impact comes from changing the jobs rather than replacing them. “I don’t see a job apocalypse being imminent. I do see a massive restructuring and reorganization—and job quality is an issue,” said Erik Brynjolfsson, director of the Stanford Digital Economy Lab. McKinsey estimates 60% of the 800 occupations listed by the Bureau of Labor Statistics could see a third of their activities automated over the coming decades.
>
>For workers, the technology promises to eliminate the drudgery of dull, repetitive tasks such as data processing and password resets, while synthesizing huge amounts of information that can be accessed instantly.
>
>But when AI handles the simple stuff, say labor experts, academics and workers, humans are often left with more complex, intense workloads. When algorithms assume more human decision-making, workers with advanced skills and years of experience can find their roles diminished. And when AI is used to score human behaviors and emotions, employees say the technology isn’t reliable and is vulnerable to bias.

Read more, free with email registration: https://www.wsj.com/articles/ai-chatgpt-chatbot-workplace-call-centers-5cd2142a?mod=wsjreddit

-mc

4

wbsgrepit t1_j9l5jvq wrote

IMHO, if they destroy Section 230, it should be applied well outside the internet context too -- hold owners responsible for what people say and discuss on their property in general. To me, it's equivalent to holding Walgreens liable for two jihadists plotting in its parking lot, or shouting about their point of view from its lawn.

It should clearly be OK for them to remove people or kick them off their property should they choose, but they should not be held liable (or expected to police) for the speech of others, or for not taking the action to trespass them. Section 230 just reaffirms, for internet platforms, the normal and usual exemption enjoyed in the physical world.

15

UniversalMomentum t1_j9l4s37 wrote

We don't need AI to automate most things. AI will be for figuring out very big problems and won't proliferate in everyday products.

The limit now isn't chips; it's definitely the programming. Quantum chips will only have specific uses, and silicon will keep doing most of the work.

A super smart AI would be nice, but what we need far more is just lots of robotic labor/automation to lower the costs of everything and increase the standard of living once our economic systems catch up to the new reality.

You can probably automate the majority of jobs just with silicon/machine learning and good programming. Most jobs don't require the ridiculous amounts of computation you could get from quantum. Really, the most useful thing a real AI could do right now is replace the 98% of junk code that's currently out there, to actually get the most out of the chips. That, or solve all human behavior problems, but I'm not holding my breath that any AI will ever be that smart.

−1