Recent comments in /f/Futurology

marsrover001 t1_ja922c4 wrote

Think of it like asbestos: it's perfectly safe as long as you don't mess with it.

Don't lick the walls or demolish one (making dust), and you're fine.

As for the solar panels, I'm sure manufacturing is mostly automated, and since we know the hazards of lead very well, I'm not concerned there.

From a consumer perspective, the wafers are behind a sealed glass panel; you'll never touch them, and they'll never touch rain.

Recycling is the same as manufacturing: we know the risks and can manage them well.

3

DoItYourSelf2 t1_ja9206e wrote

Yeah, when they were caught ignoring pleas from academia to do something about the lead in Michigan, I pretty much wrote them off.

This didn't even seem to get much press; I think I saw a Frontline on it.

Same BS with the FDA. Our government is just about hopelessly corrupt and/or inept.

46

vwb2022 t1_ja91ff5 wrote

The title of the post and the article is misleading. The issue discussed is not that AI needs to understand consequences; it's that AI can't differentiate between correlation and causation. Which it can't, because it's not intelligent: it's a correlation-finding algorithm. It's working as intended.

The researchers just discuss the need for new models, because current models are not "smart" enough and will need to be replaced with something new that can differentiate between correlation and causation.

TL;DR: The article discusses flaws of current AI models, rather than AI needing to understand anything.

17

chasonreddit t1_ja91era wrote

While I love the idea, I have to ask: I've never actually run the numbers, but wouldn't asteroid mining be even more efficient?

Let's ignore the time factor for a Hohmann transfer to those asteroids and assume we are thinking long term. It seems the total energy outlay (which is really what all space travel is about) is much smaller per tonne to move a small asteroid into lunar orbit or HEO than it would be to boost the same mass from the Moon.

And they come in all flavors: carbon-based, ferric, silicate, ice, probably heavy metals, though we really haven't surveyed them yet. Whatever you need, put 10,000 tonnes of it into orbit.
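The energy-per-tonne intuition can be roughly sketched with the Tsiolkovsky rocket equation. The delta-v figures below are ballpark assumptions for illustration (roughly 1,900 m/s from the lunar surface to low lunar orbit, a few hundred m/s to nudge a well-chosen near-Earth asteroid), not mission data:

```python
import math

def propellant_per_tonne(delta_v_ms, isp_s=450.0):
    """Tsiolkovsky rocket equation: tonnes of propellant needed to
    give one tonne of payload the stated delta-v, for a given Isp."""
    ve = isp_s * 9.81  # effective exhaust velocity, m/s
    return math.exp(delta_v_ms / ve) - 1.0

# Illustrative, assumed delta-v budgets (m/s):
for label, dv in [("Moon surface -> lunar orbit", 1900),
                  ("favorable asteroid retrieval", 300)]:
    print(f"{label}: {propellant_per_tonne(dv):.2f} t propellant per tonne")
```

Even with generous assumptions, the propellant cost per tonne of a low-delta-v asteroid nudge comes out several times smaller than lifting the same tonne off the Moon, which is the commenter's point.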

1

rherbom2k OP t1_ja912p3 wrote

The article explores the significance of integrating causality into machine learning algorithms and how it could impact different fields, including medicine, robotics, and natural language processing. By enabling machines to comprehend cause and effect, they would be better equipped to make informed decisions, learn more effectively, and adapt to changing situations.

In medicine, for instance, integrating causality could aid in discovering new and improved treatments for ailments, creating new diagnostic tools, and personalizing treatment for patients. Integrating causality into robots could enhance their ability to navigate their surroundings, while in natural language processing it could help algorithms generate coherent and factually accurate text.

With the continued advancement of causal inference, the potential applications of this technology are extensive and diverse. By providing machines with a comprehension of causality, researchers could unlock new prospects for artificial intelligence, resulting in a future where machines are more capable and versatile than ever before.

1

gobbo t1_ja90xct wrote

And yet, some laws are trending toward universal without enforcement by a monopoly on violence. There's a global alignment under way, which is hopeful considering that we are struggling with planning alignment strategies for impending AGI.

For instance, the elimination of slavery proceeds, with a few holdouts like prison systems and regressive states. Incest rules and age of consent rules are becoming more standard. Fraud rules and awareness of conflicts of interest are becoming increasingly prevalent.

These are arguably the international effects of humanism riding the coattails of trade, and wars being won by democratic governments, but there's also a zeitgeist related to the growth of universal education, I think.

3

Psychomadeye t1_ja9013o wrote

And we've seen exactly this in the second industrial revolution. Quality of life and recovery time from technological unemployment improved dramatically right up until the Great Depression. The Depression itself is where real financial technology killed banks that didn't know how to use it, and the fallback to the gold standard caused the biggest monetary contraction in US history.

In the third industrial revolution (now), we've seen multiple recessions, but the Great Recession, which was approximately half as bad as the Great Depression, dissipated in about two years. Then there was the COVID quarantine, which lasted one year. Now people are seriously considering a 4-day work week, right after a bunch of corporations stuck with WFH because trials showed an increase in revenue even while providing the same pay and benefits. In one or two generations they might be talking about a 3-day work week or short shifts.

We're also seeing major issues being addressed like never before. We have unemployment insurance, Medicaid, Medicare, Social Security, Section 8, SNAP, and a few other things that I've definitely forgotten. The US could spend more on these things but isn't right now because of political games. At the end of the day, though, the best political strategy is good policy. Right now I'd imagine the biggest challenge we have is climate change, and we will need to bring every worker and technological advancement we have to bear on it for the next hundred years if we want to make it.

1

cronedog t1_ja8zrl2 wrote

>1) provide a common moral language that facilitates intercultural, interethnic and interfaith dialogue and conflict resolution,

Why do you think this will help? Most societies and cultures frown upon murder, theft, and assault; that doesn't stop them from occurring. Also, many cultures and faiths are incompatible.

2

zachster77 t1_ja8z9eb wrote

Are you sure either of those situations are in conflict with my point?

While the COVID vaccine was free to the public, the pharmaceutical companies were paid by various governments, resulting in record profits. All while much of the work for developing the vaccines was made possible by public funding.

For cancer treatments, look at this release:

https://www.globenewswire.com/news-release/2022/01/21/2370991/0/en/Oncology-Market-Size-to-Hit-US-581-25-Billion-by-2030.html

Cancer is discussed like it's a natural resource the medical sector can mine for profits. And while it's ultimately a good thing that lives are being saved, the financial pressure to attain that salvation is devastating for many people. I'm sure you know that medical debt is the number-two cause of bankruptcy (in the US, at least).

1

DxLaughRiot t1_ja8xz7d wrote

Very fair. Maybe the statement should be that people have lost faith in current tech firms to solve these problems? Or maybe only in some of them, like Meta in particular. I don't think Meta is about to fix anything, but AI from new tech firms might.

Maybe the better statement is that any tech firm that has arrived at the Desert of the Virtual is on course to its own demise, though I guess that's pretty circular reasoning. "Any company that isn't solving real-world problems is going to start failing as a company" seems like a pretty redundant take on capitalism.

1

billtowson1982 t1_ja8xpvl wrote

Reply to comment by net_junkey in So what should we do? by googoobah

They're only better in the sense that Google circa 2004's answers were better than the average human's: both had access to an extremely large database of reasonably well-written (by humans) information. ChatGPT just adds the ability to reorganize that information on the fly. It doesn't have any ability to understand the information or to produce truly new information, two abilities that literally every conscious human (and in fact every awake animal) has to varying degrees.

1