Recent comments in /f/singularity
F1nanceGuy217 t1_jd2apz4 wrote
Reply to Replacing the CEO by AI by e-scape
Probably going to get downvoted for this take …
The CEO job description you’ve laid out isn’t accurate and imo would not be as easy for an AI to take over. Every CEO I’ve worked for in the insurance and finance industry in my 18y career works as hard as or harder than everyday people - pushing goals and targets, asking the tough questions of the team, and setting expectations around the clock. No one is twiddling thumbs and making $20m+ per year. They also typically remain accountable internally and externally.
CEOs are many times the face of a company (ie Delta Airlines CEO Ed Bastian is pretty well known by travelers) both internally and externally. In small and medium orgs, they typically participate in high level negotiations with key customers and suppliers. They are the liaison between the company executives and the board members on all important company matters. They are high profile members of the community and engage with other organizations.
You might be able to break all their duties down into small tasks for an AI, but the end result might seem very disjointed and not cohesive.
I do agree CEOs and executives tend to be overpaid for their duties though. They might be able to point to a “deal” they did that “brought in $500m extra revenue”, but who’s to say someone else in that role couldn’t also do that same thing for less money? I don’t know how the value of a CEO really differs if they’re paid $5m vs $20m+. It’s like hitting the lottery year after year after year.
photo_graphic_arts t1_jd28v5l wrote
Reply to comment by leywesk in Replacing the CEO by AI by e-scape
You've never heard of a worker-owned corporation? Costco doesn't ring a bell?
VelvetyPenus t1_jd28bn6 wrote
Reply to A technical, non-moralist breakdown of why the rich will not, and cannot, kill off the poor via a robot army. by Eleganos
Of course death wouldn't make sense. They want slaves, or near zero-cost labor. Ever hear of sweat shops?
Spreadwarnotlove t1_jd27woo wrote
Reply to comment by TheSecretAgenda in A technical, non-moralist breakdown of why the rich will not, and cannot, kill off the poor via a robot army. by Eleganos
Well in that case it'd be self defense and they'd be right to. But I don't see that happening. At worst some commies will try to overthrow the business owners, overestimate their public support, and be locked up.
alphadom4u t1_jd27dpl wrote
Reply to A technical, non-moralist breakdown of why the rich will not, and cannot, kill off the poor via a robot army. by Eleganos
Step one would be disarming the poor to minimize resistance. This has already been done pretty much everywhere on earth with the exception of America, Africa, and now Afghanistan. I would expect it to happen in America fairly swiftly over the next decade because of the wealth concentrated in that region.
I would not expect a mass culling based on net worth or skill set, but culling individuals based on behavior analysis is quite likely. When predictive AI is skilled enough to identify individuals with mental health issues early, mandatory treatment will become a thing. It would also make sense if the people that can't be cured with 100% reliability would "disappear".
Spreadwarnotlove t1_jd24ykz wrote
Reply to comment by xt-89 in A technical, non-moralist breakdown of why the rich will not, and cannot, kill off the poor via a robot army. by Eleganos
Pretty much. The rich will push for the human colonization of the galaxy and they are going to need trillions upon trillions of people just to properly colonize the solar system.
BlessedBobo t1_jd246uq wrote
Reply to comment by Education-Sea in Replacing the CEO by AI by e-scape
this is not how it works, the workers in general would not see a penny.
natepriv22 t1_jd2245c wrote
Reply to A technical, non-moralist breakdown of why the rich will not, and cannot, kill off the poor via a robot army. by Eleganos
https://foreignpolicy.com/2012/02/27/were-all-the-1-percent/
There are so many factors no one in this subreddit is considering tbh. For example, it's somewhat ironic because most American citizens' wealth comfortably places them in the top 10% of the world, and at least 10% of Americans likely fit in the top 1% of the world. This is a very American or western centric post, and so are the comments. There's nothing wrong with that, except it's totally biased and makes most of the purpose of this discussion null.
Not to mention the fact that for most people in the world, the barrier to entry into wealth is actually authoritarian governments who limit or ban the market approach. Thus never giving any "common person" the opportunity to rise above the poverty line. A lot of countries also never get the opportunity to industrialize properly, and remain agricultural based economies which are understandably poorer than industrialized or service based economies.
Did you know that China's middle class represents more than 50% of the population? It's quite ironic again that people are always shouting "class war" when the majority is the middle class, which sits in between rich and poor. In most western countries, if not all, the middle class is 50%+. Considering that China's middle class grew from 3% to over 50% by 2018, it's not hard to imagine that this is a global movement, and the rest is likely to happen in other countries such as India and, more generally, in Africa and Latin America.
https://chinapower.csis.org/china-middle-class/
But you've forgotten the element that is most important: historical wealth. If you line up today's 8 billion people against everyone who has ever lived, we own more than 99% of global historical wealth. Yes, today all 8 billion people are in the historical 1% of history.
Smellz_Of_Elderberry t1_jd1wpsa wrote
Reply to comment by Eleganos in A technical, non-moralist breakdown of why the rich will not, and cannot, kill off the poor via a robot army. by Eleganos
I somewhat agree.
But fun fact, CEOs and senior executives are disproportionately more likely to exhibit psychopathic traits.
Also, I agree that not everyone would be killed in such a scenario. Someone would get lucky. There are people who only eat food they grow themselves. An extreme minority, but they would survive. There are also tribes of humans which receive very little contact with the outside world. Plus, plenty of people have bunkers, and are prepared for, at least, some of the scenarios, and some might just be immune.
>And hence would have the resources along with motive to give the murderous monster their just desserts.
This is dependent upon it being just one person. What if it's a large cult? A rogue nation? They don't need to wipe everyone out - if it's anywhere near 90%, and they aren't affected, they just outright win all future conflicts. Just picking up all the bodies would take years, let alone getting things like food production and other essentials going. Not to mention the loss of human skills.
AGI could accelerate a recovery dramatically, but the rogue group would also utilize it.
Smellz_Of_Elderberry t1_jd1uyjv wrote
Reply to comment by IluvBsissa in A technical, non-moralist breakdown of why the rich will not, and cannot, kill off the poor via a robot army. by Eleganos
Would they start noticing large increases in cancer rates traditionally only found in the elderly, suddenly being found in younger and younger populations? Or a massive decrease in, say, fertility rates?
Lol. You have a lot more hope in humanity than I do. I think that anyone who brought such a scheme to light would be silenced by social pressures.
I don't know much about the politics of science, but I imagine that the individuals who actually did bring such a scheme to light would become pariahs in the community, and they're smart enough to realize it. Most people just go along to get along; they aren't going to take the risk of being wrong and being known as the crazy conspiracy theorist for the rest of their career.
But maybe you're right. I suppose it depends on whether you think society values truth more than popularity. Many people fear social rejection more than death, sometimes even seeking death as a solution to social rejection... If the social pressure says "keep your mouth shut, and don't question xyz" the extreme majority will not question it and even go out of their way to shut down others who do.
Why?
Because fitting in is human nature.
When our ancestors saw the rest of the tribe running from something, the ones who stayed behind to see whether running was the right decision or not were removed from the gene pool.
ThrowRA_overcoming t1_jd1toiq wrote
Reply to comment by Education-Sea in Replacing the CEO by AI by e-scape
Unlikely. People will earn based on their market value, along with some other factors like negotiation skills, etc. Profits would very likely get passed to shareholders.
ThrowRA_overcoming t1_jd1tlbm wrote
Reply to Replacing the CEO by AI by e-scape
LOL. AI isn't very good at context. Being at the helm of a business is all about understanding context. AI may help with knowledge synthesis, providing options, estimating outcomes, but it won't help with broader understanding for some time still. Someone is watching that AI, guaranteed. The rest is marketing.
That chart also contains clear errors, which doesn't give off a very trustworthy vibe.
LambdaAU t1_jd1rpra wrote
Reply to comment by Impressive-Menu-2120 in How long till until humanoid bots in supermarkets? by JosceOfGloucester
But supermarkets have already had shelf-stocking vehicles for years. There is no need to make a humanoid robot for navigating supermarkets to restock.
Honest_Science t1_jd1r2w4 wrote
Nobody wants to hear this, but a humanoid robot in a standard environment will generate terabytes of sensory data per second. A human body has more than 70b nerve cells firing roughly once per second. A current-architecture AI model will take a few months of learning just to generate the foundation model for operating the robot: walking in a room, opening a door, etc.
Borrowedshorts t1_jd1qkg4 wrote
Reply to comment by TheSecretAgenda in How long till until humanoid bots in supermarkets? by JosceOfGloucester
Not happening. This entire project was derived from work on legs. I'm not a fan of the backwards leg design myself, but they've had this project ongoing for 10 years, and for over half of that time, they didn't even have an upper body.
Eleganos OP t1_jd1q2oq wrote
Reply to comment by FacelessFellow in A technical, non-moralist breakdown of why the rich will not, and cannot, kill off the poor via a robot army. by Eleganos
-
So can literally any world government. Except they have bioweapons facilities and top tier scientific minds in hand in addition to fuck you money.
-
No they can't, because this isn't an RTS game where you spend money in one place and robots appear at your army base. You need logistics, supplies, factories, etcetera. You need places to store them, auxiliary facilities to power them and do maintenance. You need to acquire weaponry and ammo for them. All of this is EXTREMELY conspicuous and there isn't a government alive which would let one rich person acquire enough military might to potentially stage an internal coup, much less wipe out humanity.
-
Culling people wouldn't do jack for climate change, relatively speaking. We're already on track for things to get bad even if we stopped now, and I don't see us getting to murderbot territory next year simply because, even if we cracked the coding tomorrow, you need infrastructure built and supply lines arranged to actually start building the damn things.
Eleganos OP t1_jd1pmnd wrote
Reply to comment by IronPheasant in A technical, non-moralist breakdown of why the rich will not, and cannot, kill off the poor via a robot army. by Eleganos
As I've said in many other places
This is not me giving a scenario
This post was not me failing to acknowledge other potential ways things could go sideways.
This was me SPECIFICALLY addressing one scenario that a small minority of people on this sub KEEP INSISTING WILL HAPPEN.
And exists basically to refute them in full.
I'm not going to pretend it was perfect. But that was not my intent. I gave six reasons why it wouldn't, some better than others, and that's basically all I intended.
Look at the length of this post and ask yourself "how long would it have been if it'd covered EVERY possible scenario and angle" and you'll see why its scope is so limited.
I feel like I just wrote a sci-fi story and yet people are asking where the elves and dragons are.
I apologize if I come off as a bit aggravated. I just thought my post was sufficiently self evident, and am just a bit frustrated that people keep thinking I've ignored other scenarios or misunderstood/don't understand something or other.
IronPheasant t1_jd1j9qa wrote
Reply to A technical, non-moralist breakdown of why the rich will not, and cannot, kill off the poor via a robot army. by Eleganos
You're thinking about things in terms of a movie, not in terms of reality. Things don't happen in one big climactic event; they're a chain of smaller events. The unthinkable is unthinkable until it isn't. It's a process, not an event, as they say.
So in this case the murder dogs and venom bees don't show up on the front end of the apocalypse. They're cleanup on the back end. Of a disastrously bad timeline where an Epstein or fascist cult manages to maneuver themselves into a dominant position.
At the end of the day, everything is about power. Being able to replace people with robots obviously dramatically tilts the scale even further in favor of capital over labor.
As you say, "why would they want to get rid of people" can just as easily be flipped to "why would they want to help people."
We live in a happy world where "let them die" is a common attitude towards healthcare and the homeless, and those in positions in power are 100% happy to keep things that way, or make it even worse.
Technology has made things better than the past and I believe it's the only way to avoid dooooom for the future, but I understand nothing is a guarantee.
Malice and neglect, the yin-yang vital essence of the conservatives and liberals, will be the start of social change. How it will end, no one knows. Not me, not you. Even Ray Kurzweil thinks SGI has a 50/50 shot of being "good" for humanity as a whole, and he always notes that he's considered by many as something of an optimist.
Always remember the spirit of the anthropic principle and survivor's bias: things only worked out so incredibly well in the past only because they had to for you to be here to see it. The dead and those without the leisure and means to casually chat on internet forums, might have other opinions.
>Like, what do the actually gain from it?
> More land?
> More resources?
> They can get infinite of both. And could kick anyone they wanted out of their land at a moment's notice.
Uh... yeah? They could have all the atoms by kicking everyone off of their planet?
Here's the "Rules for Rulers" video so you can learn more about how power structures work, and how the elites do very much think of themselves as a collective in-group. tldr: They're pirate ships that need to constantly acquire more loot and rents to feed themselves. If you're not expanding, you're shrinking.
mescalelf t1_jd1j943 wrote
Reply to comment by Surur in A technical, non-moralist breakdown of why the rich will not, and cannot, kill off the poor via a robot army. by Eleganos
And America is presently suffering from outbreaks of eugenicist rhetoric, actual neonazi movements, and kleptocracy.
And America is home to the three most successful AI-development teams (Google, OpenAI, Microsoft).
And America is already working on enormous offensive drone swarms.
Not that drone swarms are by any means the only tool they could use.
Borrowedshorts t1_jd1iyx0 wrote
Reply to Replacing the CEO by AI by e-scape
It wouldn't be a high bar to meet and most CEOs are just glorified figureheads anyway. Would enable a ton of cost savings and likely better decision making if they were replaced.
FacelessFellow t1_jd1httt wrote
Reply to A technical, non-moralist breakdown of why the rich will not, and cannot, kill off the poor via a robot army. by Eleganos
It just takes one rich guy right?
With the right connection they can develop a virus.
With the right amount of money they can buy a massive robot army.
Climate change would be the main reason to cull the global population. If you were infinitely wealthy and humans were infesting your only planet, you might be inclined to deal with the infestation.
mescalelf t1_jd1ex14 wrote
Reply to comment by claushauler in A technical, non-moralist breakdown of why the rich will not, and cannot, kill off the poor via a robot army. by Eleganos
> “I can hire one half of the working class to kill the other half.” — Jay Gould
For those curious
mescalelf t1_jd1eale wrote
Reply to comment by theNecromant in A technical, non-moralist breakdown of why the rich will not, and cannot, kill off the poor via a robot army. by Eleganos
Particularly if they convince people it’s not AGI and then consult the AGI for assistance in keeping it under wraps.
leper99 t1_jd1duz3 wrote
Reply to comment by GlobusGlobus in Teachers wanted to ban calculators in 1988. Now, they want to ban ChatGPT. by redbullkongen
"I showed you my singularity. pls respond"
NazmanJT t1_jd2bch2 wrote
Reply to How long till until humanoid bots in supermarkets? by JosceOfGloucester
Surely there is lower hanging fruit for bots in a retail environment first. For example, using bots to replace baristas will surely be easier than shelf stacking. I appreciate that technically this is already possible, but it has not gone mainstream yet and won't until it's a lot cheaper and more standardised. When do people expect a mainstream replacement of baristas with bots?