Recent comments in /f/Futurology

ComfortableIntern218 OP t1_jcbs4kw wrote

I don't think it's reactionless. It sounds like they figured something out in the physics realm using electrons. They haven't said much about how it works, for obvious reasons if they're selling it. They clearly figured something out if they got a company to partner with them and scheduled a rocket launch. I can't imagine why someone would waste that much money if they weren't confident.

3

Zealousideal-Ad-9845 t1_jcbore3 wrote

I'm a software engineer working in automation. I've never created a deep learning model before, but I know a lot about how they work. Here's my opinion. For the time being, every job is safe unless it is incredibly mundane and repetitive, requires no creativity, and carries no high stakes for failure. AI and automation currently "take" jobs only by fully or partially automating some of their tasks, which decreases the workload for human workers, increases their productivity, and, in doing so, reduces the need for a larger workforce. So you can accurately say that AI, automation, and machines have put some cashiers out of work, but that doesn't mean there aren't still human cashiers. Just not as many of them.

That said, if "super" AI becomes a thing (I'll define SAI as a model with learning capabilities equal to or exceeding those of a human being), then literally no job is safe. Not a single one. If the model has as much nuance in its decision making as I do, then it can write the code, design the systems, review the code, address vulnerabilities and maintenance concerns, and communicate its design process and concerns, and it can do all those things as well as I can. At that point, it's also safe to say it can take manual jobs too. We can already build robots with leg motors strong enough and finger servos precise enough to operate as well as a human; it's just a matter of making software that has coordination and dexterity and knows what to do when a trash bin has fallen over in its path. And if AI reaches the level I'm talking about, it could do those things.

1

bound4mexico t1_jcbi0p3 wrote

>I'm talking about the ethics of humanity as a group.

There are no decisions made by the group as a whole, though, so let's just make more ethical decisions by outsourcing more of our contentious decisions to disinterested third parties.

>An uninterested party will be one who is not within the group(s) for which the ethics are being questioned.

No. It will be one who is (judged, fallibly, by humans as) least likely to be affected by the decision in question. A person may not be part of the group(s) yet, but could easily become part of the group(s), have a friend or family member that's part of the group(s) already, have a friend or family member become part of the group(s), or be affected by the group(s)' decisions.

>Even the idea of taking someone and separating them from humanity so that they could be uninterested could be considered unethical.

Of course it would be. But there's absolutely no reason to do this. There are no universal humanity-wide decisions under consideration at that level.

>You threw the word planets in there as if there are people from different planets, but that's entirely my point. Until there are intelligent creatures from other planets, we cannot fairly judge the ethics of humanity as an entire race.

There's no reason to judge the ethics of more than a single decision at a time, ever.

If you're designing ethical AI, that's just a measure of how well the AI conforms to the ethics of some BI (usually a human). There are "wrong" ethical systems: for example, any ethical system that is self-inconsistent or inconsistent with reality. But there are many, very different, "not-wrong" ethical systems. Ethics are subjective, except in the obvious cases where they're self- or reality-inconsistent; then, they're objectively wrong.

0

AcceptableWay3438 t1_jcbeicf wrote

Capitalism is based on buying things. If everybody becomes poor except a small group of rich people, capitalism will collapse very fast. I'm not scared of AI, because if the end of the world comes, we will face it together. And humanity's strength has always been the word "together".

2

[deleted] t1_jcb8w6e wrote

I think coaches for highly technical sports will not be automated. I play table tennis competitively, and I definitely feel like we're not going to get AI that can fix the very specific issues with motion that only a handful of coaches in any given large city can even see.

1

Houston_Here t1_jcb88a9 wrote

Highly verifiable too. If the thrust is large enough, the orbital tracking data will show the delta-v quite clearly, even if it accrues over a very long duration. I am excited but quite apprehensive. If the thing actually accelerates, this will be very big news.
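To see why even a tiny thrust would show up in tracking data, here's a back-of-the-envelope sketch. All numbers (thrust, satellite mass, test duration) are hypothetical placeholders, not the company's actual specs:

```python
# Rough delta-v accumulated by a small continuous thrust on orbit.
# All figures below are illustrative assumptions, not real specs.

THRUST_N = 1e-3   # 1 mN of continuous thrust (assumed)
MASS_KG = 4.0     # small cubesat mass (assumed)
DAYS = 30         # length of the on-orbit test window (assumed)

accel = THRUST_N / MASS_KG        # m/s^2
delta_v = accel * DAYS * 86_400   # m/s accumulated over the window

print(f"acceleration: {accel:.2e} m/s^2")
print(f"delta-v after {DAYS} days: {delta_v:.0f} m/s")
```

Even at these modest assumed values, the accumulated delta-v comes out to hundreds of m/s, which would stand out unmistakably against the slow orbital decay a passive satellite shows.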

2

ComfortableIntern218 OP t1_jcb6zgr wrote

SS: I remember seeing something about this technology last year. Making a bold claim is one thing, but actually spending millions to go to space is another. I see they have a launch partner, so they must have something because companies don't just send things up on multi-million dollar launches for fun. If this thing actually works as intended, it could change space exploration. It's about time we get excited about space again. I wonder what they intend to do with this besides just Earth orbit missions?

1

Shadowkiller00 t1_jcb5xer wrote

Oh I see, we're talking about different things here. I'm talking about the ethics of humanity as a group. Since all humans are in that group, there is no such thing as an uninterested party.

You appear to be talking about the ethics of smaller groups such as businesses, countries or individuals. An uninterested party will be one who is not within the group(s) for which the ethics are being questioned. Even the idea of taking someone and separating them from humanity so that they could be uninterested could be considered unethical.

You threw the word planets in there as if there are people from different planets, but that's entirely my point. Until there are intelligent creatures from other planets, we cannot fairly judge the ethics of humanity as an entire race.

1