Recent comments in /f/Futurology

Maxwellsdemon17 OP t1_ja7bwdg wrote

"Gradually, a certain sense has been percolating in Silicon Valley that might be described as a “strange shrinking of the Utopian consciousness,” to quote the philosopher Theodor W. Adorno. Just a few years ago, former Google CEO Eric Schmidt could still profess a belief that the right approach to technology could “fix all the world’s problems.” Mark Zuckerberg could still argue somewhat credibly for the potential of “connectedness” to fight climate change, pandemics, and terrorism, and the media could still enthuse about “Facebook Revolutions.” By now, confidence in those dreams has eroded. After all the disappointed hopes, deluges of fake news and hate speech, whistleblower revelations (including those from Christopher Wylie and Frances Haugen), and various antitrust lawsuits, it’s clearer than ever that tech firms have not found the answers to society’s problems, if they were ever looking for them in the first place. In fact, their surveillance-capitalist practices have frequently meant that they themselves are a problem. In this sense, the metaverse might be seen as a logical progression: if you can’t solve problems in the real world, why not create a new one without any? Perhaps it’s not actually the users who are fleeing to the metaverse, but the tech companies themselves.”

13

net_junkey t1_ja7ar08 wrote

Reply to comment by billtowson1982 in So what should we do? by googoobah

#2 Have you talked to people? ChatGPT's answers are as good as or better than the average person's. Not to mention this is after it got lobotomized so it won't give answers that could be considered offensive or that sound like the AI has personal opinions.

1

FuturologyBot t1_ja7a917 wrote

The following submission statement was provided by /u/euronews-english:


According to new research, AI may be able to automate about 39 per cent of domestic work within 10 years.

In a world where unpaid domestic work currently takes up almost as much time as paid work, automation could have significant social and economic implications.

For instance, these new technologies will most likely be affordable only to wealthy or middle-class households - giving them even more time for paid work and leisure - and might even threaten some low-income professions by reducing demand for domestic workers.

Since experts suggest that the potential benefits are stronger in housework than in adult care, the risk of AI taking over care professions is minimal, the study found.

On the other hand, poorer households unable to afford this technology would be left spending more time on domestic work, further worsening economic and social inequalities.


Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/11d8q10/robots_could_do_39_of_domestic_chores_within_10/ja7818z/

1

just-a-dreamer- t1_ja79yrd wrote

Humans are screwed then, for their brains are fairly limited. Yet we manage somehow.

I believe AI will optimize its data over time and learn on the go. Besides, today's workflows are designed for human hands and brains, not for AI.

It might be more reasonable to have no TPS reports at all, for example, and come up with something better suited to AI capabilities.

1

hervalfreire t1_ja79j62 wrote

Reply to comment by boersc in So what should we do? by googoobah

Machine learning (“mass training”?) didn’t exist 40 years ago. Cases like the tank one you described used a completely different technique that didn’t involve RNNs or the like. Beyond hardware capabilities, there has been a large number of breakthroughs in the past two to three decades, from LSTMs to diffusion models and LLMs. It’s 100% not even close to what we did back in the 90s…

2

Psychomadeye t1_ja796kk wrote

You'd be surprised how small the training space is and how far outside it a human reaches. We're talking the difference between a litter box and a football stadium. And humans know the difference between true and false vectors, but an AI won't. If there's a change to policy, you'll need years to retrain that model, and you'll need to somehow find a dataset to use for that. You can't just ask it to use new cover pages on the TPS reports; you need to show it a million TPS reports with those cover pages and hope it generates them properly. Even when you aren't creating something new, the ability of these models to give you exactly what you want and actually have it work is extremely limited. And again, in order to address these limits, we'd need infinite space in a finite space, a time machine, or computers that fit inside an atom.

2

hervalfreire t1_ja793k1 wrote

Reply to comment by Enzo-chan in So what should we do? by googoobah

It always sounds more credible as things progress. We’re still VERY far from a singularity or AGI; the best computers can do today is language models (something we’ve known how to build for decades), just faster/larger ones.

Yes, we’re about to see a big impact on professions that mostly rely on “creativity” and memorization, but I wouldn’t worry about a “singularity” happening any time soon.

1

Orc_ t1_ja78be7 wrote

So you want a complete human slave... The poor thing has no choice but to love you unconditionally because you predetermined it through genetic engineering, I suppose. Wow.

1

euronews-english OP t1_ja7818z wrote

According to new research, AI may be able to automate about 39 per cent of domestic work within 10 years.

In a world where unpaid domestic work currently takes up almost as much time as paid work, automation could have significant social and economic implications.

For instance, these new technologies will most likely be only affordable to wealthy or middle-class households - giving them even more time for paid work and leisure - and might even pose a threat to some low-income professions by reducing demand for domestic workers.

Since experts suggest that the potential benefits are stronger in housework than in adult care, the risk of AI taking over care professions is minimal, the study found.

On the other hand, poorer households unable to afford this technology would be left spending more time on domestic work, further worsening economic and social inequalities.

4

Kaz_55 t1_ja77uby wrote

>You don’t even need a rocket. You can use a mass driver to fling the material from the moon to earth’s orbit.

That's not how orbital mechanics works. You would need some sort of independent propulsion system to actually slow the material down and circularize its orbit. You can't just "shoot stuff into orbit".

Oversimplifications like this are exactly why this is basically "a loony idea" and not going to happen in the foreseeable future. Nor is there a "$100-billion-plus lunar economy looming," as the article claims, because there is no basis for such an economy.
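To put a rough number on it: material flung from the Moon arrives at Earth on an ellipse whose apogee is still out at lunar distance, so it has to shed roughly 3 km/s at perigee to settle into low Earth orbit. A minimal vis-viva sketch (the 400 km target altitude is an assumption for illustration):

```python
import math

MU_EARTH = 3.986e14           # Earth's gravitational parameter, m^3/s^2
R_PERIGEE = 6371e3 + 400e3    # assumed target circular orbit: 400 km altitude
R_APOGEE = 384400e3           # lunar distance = apogee of the transfer ellipse

# Semi-major axis of the Moon-to-LEO transfer ellipse
a = (R_PERIGEE + R_APOGEE) / 2

# Vis-viva: speed at perigee on the transfer ellipse vs. circular orbit speed
v_perigee = math.sqrt(MU_EARTH * (2 / R_PERIGEE - 1 / a))
v_circular = math.sqrt(MU_EARTH / R_PERIGEE)

dv = v_perigee - v_circular   # braking burn needed to circularize
print(f"circularization burn: {dv:.0f} m/s")
```

That's on the order of 3.1 km/s of braking, which some onboard propulsion system has to supply; the mass driver only provides the initial throw.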

1

DonBoy30 t1_ja7768x wrote

Collapse of economic and political systems in the West (the USA specifically) isn't something I fear. Knowledge and humans don't vanish overnight. This country specifically has historically endured a lot of turmoil under worse living conditions.

Climate change is an issue, but it's also an issue that won't affect the globe evenly.

Honestly, I think it all comes back around to our economic model being antiquated for the technological boom that seems to be on the horizon, and that creates much of the anxiety we feel towards things like automation and AI. The road to liberating humans from being coerced into passionless labor by automating our mode of production will definitely be a net positive for our species, just as domesticating plants was 12,000 years ago. The road to get there will be difficult, but humanity moves forward.

Don't live your life in fear of the unknown. Use this time to aspire to a quality education and become skillful in the many aspects surrounding human life.

2

Rondaru t1_ja76rqv wrote

At a concentration of 1,000 parts per million in the soil, you'd have to move and kick up far more destructive regolith dust than the gas you'd ever hope to extract from it could protect the excavator against. Not to mention that you'd be utterly wasting the most precious resource of all on the Moon.
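The 1,000 ppm figure makes the scale problem easy to check: at one part per thousand by mass, every tonne of gas means processing a thousand tonnes of regolith. A quick sanity check (the one-tonne target is an assumed example, not from the article):

```python
CONCENTRATION = 1000e-6    # 1,000 ppm by mass = 0.1%
target_gas_kg = 1000       # assumed goal: one tonne of extracted gas

# Regolith that must be excavated and processed to yield that much gas
regolith_kg = target_gas_kg / CONCENTRATION
print(f"regolith to process: {regolith_kg:,.0f} kg")
```

A million kilograms of excavated regolith per tonne of gas is a lot of dust kicked up relative to the protection the gas could ever buy.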

2

billtowson1982 t1_ja76gf1 wrote

Possible collapse: If your concern is that you don't want to live to see society regress to a primitive state - don't worry, you won't. We have 8 billion (and growing) people who depend on modern, often just-in-time supply chains to survive. In a collapse, with those supply chains gone and people with guns wanting to eat, most people will die rather than live to see how it plays out (and that's not even addressing whatever caused the collapse in the first place).

Climate change: It's not the greed of powerful people that is the main driver of climate change - having billions in a bank account (or really, a number that reflects nothing more than ownership of large percentages of the businesses that produce the goods and services we all use) doesn't mean one burns a ton of fossil fuels or farts out a lot of methane. In reality, the drivers are consumption, the massive fossil fuel use that enables our current, orders-of-magnitude-higher level of consumption per person, and a world population orders of magnitude larger than in the past.

Also, climate change is being addressed. Quickly enough to save most major non-livestock mammals? No. Quickly enough to prevent hundreds of millions of currently poor people from dying or being forced into refugee status? No. But quickly enough to allow most kids in comparatively well-off countries to live basically normal lives, plus a house-destroying disaster or two? Yes. The quicker the better, though - every delay costs everyone.

Economy: No, temporary shocks like COVID and the Russian war on Ukraine aside, the economy is not getting worse - whatever you or your friends/family may feel. The economy is humming along quite nicely, actually.

AI: Your first worry about AI - because it's already happening now - should be how manipulated you and everyone else will be (and in fact already are) by AI that is trained to engage you in order to sell you shit. As it turns out, the best way to engage people is to make them angry, anxious, sad, vengeful, etc., and to draw them into conspiracy theories and lies. This is also increasingly used to define your politics for you (and for everyone else). The worst of it is that AI is turning you (and everyone) into the worst version of yourself - addicting you to rage, fear, conspiracy theories, etc., with the primary goal of just selling you more shit (i.e., the same consumption that drives climate change).

Your second worry should be how AI will replace a ton of jobs, including in fields like art that most of us really wish would remain human, and how that replacement will cause massive social disruption.

Your third concern is that you are young enough that you might live to see the development of AGI, and that will probably be the end of humanity.

4

just-a-dreamer- t1_ja75iqv wrote

And?

The vast majority of humans also can't work outside their training data. The number of people who truly create something new in their field of choice is limited. The majority don't work in a managerial capacity.

It might feel different in tech, where job descriptions change every two years or so. But even there, most workers don't create anything new and unique.

Narrow AI doesn't have to wipe out a profession completely; replacing something like 70% of the workforce is enough to cause serious trouble.

Unpaid student loans, mortgages, car loans, child support, taxes, social security, insurances, health insurance...

Firing just 10% of white-collar professionals in a short period of time would crash many layers of the financial pyramid.

2

billtowson1982 t1_ja74jxn wrote

Reply to comment by [deleted] in So what should we do? by googoobah

1.) The idea that humans will ALWAYS be economically productive despite all possible future technological developments is just as much blather as saying in 1950 "in all of history humans have never gone to space, so they never will!" Whether AI ever develops to the point of being able to do all jobs better than any human, I don't know. But the possibility can't be ruled out, and certainly not by "it didn't happen in the past so it never will!"

2.) Strengthening your moral and ethical character is a good thing to do. But it's silly to believe that that is the way to get ahead in a career - a weak moral character can be as much of an asset to a person's career as a strong one, maybe even more.

0