Recent comments in /f/Futurology

riceandcashews t1_j9pwgqo wrote

QM is relatively straightforward. The concept is this: particles don't actually have a position or spin or charge or mass or velocity. Instead, there are different probabilities that we will observe a spin/charge/mass/velocity at various positions. There are 'dense' areas of probability where there is a high likelihood of observing the particle/property, and there are 'light' areas of probability where there is a low likelihood of observing the particle/property. You can think of these 'dense' and 'light' regions as crests and troughs of a wave. And just like water waves can interfere with each other (a big crest and a big trough cancel out in water, etc.), so too can probability waves. As a result, instead of interacting 'classically' as objects, the things we observe interact as waves of probability, resulting in all kinds of complex interference.
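You can sketch that cancellation numerically in a few lines of Python (the two amplitudes are made-up values, purely for illustration): the quantum rule adds the complex amplitudes first and squares afterward, which is exactly where the interference comes from.

```python
import cmath

# Two ways the particle could reach the same detector, each contributing a
# complex probability amplitude (hypothetical values, for illustration only).
path1 = cmath.rect(1.0, 0.0)        # a "crest"
path2 = cmath.rect(1.0, cmath.pi)   # a "trough": same size, half a cycle out of phase

# Classical intuition: the two probabilities just add up.
p_classical = abs(path1) ** 2 + abs(path2) ** 2

# Quantum rule: add the amplitudes first, *then* square.
p_quantum = abs(path1 + path2) ** 2

print(p_classical)  # 2.0
print(p_quantum)    # ~0.0 -- the crest and the trough cancel completely
```

Swap `cmath.pi` for `0.0` (two crests in phase) and `p_quantum` jumps to 4.0, twice the classical answer: interference can boost as well as cancel.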

If that makes sense?

2



kompootor t1_j9puchn wrote

QC can't be developed past a fizzled-out tinker-toy if there's nobody willing to pay for it (there's a finite amount of VC out there, and they all want to believe there'll be returns before they die). There's nobody to pay for it if there's no viable commercial market. There's no viable commercial market if they can't even conceive of a business model.

(The government would fund QC for cryptography, sure, but meeting those requirements is many orders of magnitude easier and cheaper than building the generalized QC everyone's excited about.)

1

r3ga131 t1_j9puabi wrote

I don't think there will be a dictatorship. However, there are political movements in the West actively censoring people in real time, in ways reminiscent of fascism and of Soviet Russia and China. If this censorship continues, it will drive the populace to revolt in anger and cause civil unrest. New research shows that ostracizing a demographic based on its values pushes people toward extremism. With the cost of living rising sharply and men not attending school at the same rate, the West will ultimately collapse unless policies are put in place to make school accessible for men again. There are many more facets and focal points to address, but the West is heading toward decline with the erosion of the family ideal.

1

futuneral t1_j9psbdl wrote

You can make a paper airplane without knowing anything about fluid dynamics. It could be crappy. You try making different variants and finally arrive at a good design, and even come up with a folding formula for the best airplane. All without having to understand how fluid dynamics works.

"No one understands quantum mechanics" is a bit of a meme. Scientists have a good grasp of the principles; the math actually provides some of the most precise predictions we've ever seen. What's not known is the "why" and the "what does this mean". The "shut up and calculate" motto works really well for coming up with solutions for practical applications. The philosophy of it is lagging behind, though, and many in the field only care about that part while having beers at the bar on Tuesdays.

2

Gari_305 OP t1_j9pp9jd wrote

From the Article

>NASA is pressing ahead with its mission to mine metals on the moon, seeking to bolster the sustainable space travel market and set the tone for a growing space race with China.
>
>The space agency has announced a search for university researchers to explore using metal extracted from the surface layer of the moon in 3D printing and other material sciences technologies.
>
>The solicitation joins a growing roster of efforts out of NASA to leverage resources in space to avoid having to use more fuel from Earth.
>
>This kind of work conjures sci-fi images of robotic moon mining rigs feeding sophisticated manufacturing plants that can be used for repairing vehicles or building facilities for lunar operations.

3

CommentToBeDeleted t1_j9pn4ju wrote

>AI/robots are programmed to do whatever you tell them to do.

I dislike this statement, for a number of reasons.

First, the obvious strawman argument. There was a time when people believed that certain races were "sub-human" and existed only to do whatever they were told to do.

Second, many cultures believed (and some still do) that females should, at least to a lesser extent, be subservient to males, and the impact of that form of abuse was largely ignored, due to society viewing females as serving their intended function.

​

>Unless you give it some kind of human understanding of emotions and stuff....

This is the entire crux of the debate. Most people hear "programming" and think of it in the very traditional sense: a programmer goes in and writes every line of code, which the program then executes.

While this is still the case for many (probably most) forms of programming, it is not the case for machine learning.

Essentially, some problems are too complex for us to tell a computer exactly what to do. So rather than give it a bunch of rules, we more or less give it a goal or a way to score how close it got to achieving the desired result.

Then we run the program and check its score, but instead of running it 1 time, we run it millions of times, with very tiny differences between each instance. Then we select a percentage of "winners" to "iterate" on their small change and have all of these "children" compete against each other. Then we do this millions of times. Eventually, we hope to get an end product that does what we want it to do, without a lot of negatives, BUT the "programming" is a black box. We really have no idea how it ended up doing the things it ended up doing.
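That loop can be sketched as a toy evolutionary search in Python. Everything here is hypothetical and deliberately simplified: the "candidates" are single numbers rather than neural networks, and the scoring function just measures distance to a target the algorithm is never told about directly.

```python
import random

random.seed(0)  # make the run repeatable

def score(candidate):
    """Hypothetical scoring function: how close did this attempt get to the
    desired result (here, the number 42)? Higher is better."""
    return -abs(candidate - 42.0)

# Start with a population of random candidates instead of hand-written rules.
population = [random.uniform(-100.0, 100.0) for _ in range(50)]

for generation in range(200):
    # Run every candidate, check its score, and keep the top 20% as "winners".
    population.sort(key=score, reverse=True)
    winners = population[:10]
    # Each winner spawns "children" that differ by a tiny random change.
    population = [w + random.gauss(0.0, 1.0) for w in winners for _ in range(5)]

best = max(population, key=score)
# After many generations, `best` ends up very close to 42, even though we
# never told the program *how* to get there -- only how to score attempts.
```

The final `best` works, but nothing in the loop explains its own behavior; with real networks instead of single numbers, that opacity is the "black box" part.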

Sure we could assign it rules, like "don't tell users 'I am conscious'" but that is no different than telling a slave "you can't tell people you have the same rights as them." Creating a rule to prevent it from acknowledging something, doesn't actually change anything.

​

>In my opinion, this shouldn't even be a debate.

Strongly disagree here. First, do I think AI is currently conscious? Probably not. Am I sure? Absolutely not.

The problem is that we don't really have a good way of defining consciousness or sentience. It's only recently that we've given equal rights to people of different races and genders. We have yet to assign any really significant "bill of rights" to animals that demonstrate extreme levels of intelligence, more so than some of our young children who do have rights.

So I guess my question is this: is it ethical to risk creating a "thing" that could become conscious, without having a way to determine whether that "thing" is conscious, and then to put that "thing" through what those we already define as conscious would consider torture or slavery?

I think the answer to this question should be no, it is not ethical to do that. But I don't think the answer is to try to prevent people from making AI. I think we need to better define consciousness in a non-anthropocentric way, then come up with a way to test whether or not something should be considered conscious, and then assign it rights befitting a conscious being.

​

tldr: Most programs are obviously not conscious, but for these chat AI bots we lack the proper definition or test to confirm whether or not they are. In my view, it's unethical to continue down this path, and therefore we have a moral obligation to better define consciousness so that we can determine when/if it has arisen.

−4

thislife_choseme t1_j9plt4j wrote

I wouldn’t trust any corporation about quantum computing. When a national laboratory or university publishes something then I will believe it.

A corporation's motives are profit-driven and highly untrustworthy, whereas government R&D projects are tested, reliable, and push innovation to better society; sometimes for nefarious reasons, but still more trustworthy than a for-profit entity. And yes, I know the government ends up creating most IP and handing it to corporations to manufacture and sell; I'm not a fan of that shit neoliberal model.

0

3SquirrelsinaCoat t1_j9pkps6 wrote

With enough time and ink and paper, you could write down an AI. Do you give rights to a stack of math problems?

Yeah but the emergence, cause it's emerging, the room knows how to speak Chinese, it told me it loved me, this is the AGI revolution the movies promised us...

Nonsense. It's just fucking math, people.

Edit: Take this gem from the article, and the expert by the way is a professor of media studies, not AI.

>These are rights related to these personal delivery robots, giving the robot the rights and responsibilities of a pedestrian when it’s in the crosswalk. Now we’re not giving it the right to vote, we’re not giving it the right to life. We’re just saying when there’s a conflict in a crosswalk between who has the right of way, we recognize the robot functions as a pedestrian. Therefore, the law recognizes that as having the same rights and responsibilities that a human pedestrian would have in the same circumstances.

So stupid. Those are property rights granted to the owner of the robot. The robot itself has no rights. The company has the right of way, like a pedestrian, and that's what the law recognizes. This guy is just going to add more confusion to a topic most people already misunderstand.

3