Recent comments in /f/Futurology

jdog1067 OP t1_jd6w1py wrote

For sure. A journeyman electrician would be knowledgeable enough to know how to do everything (edit: most things), but no one can know every code; it's a continual learning process. If you have a question, the AI can cite the code and tell you what to do in plain language, then the electrician can look up the code that was cited and read the actual language.

6

MaxMouseOCX t1_jd6vs2d wrote

Yes, you could do that... However, remember when satellite navigation was new and the news was full of drivers ending up in lakes and whatnot, saying "the sat nav told me to go straight on, so I did! Not my fault!" If you do something electrical and it kills someone, "ChatGPT told me to do that!" is not an excuse.

There are books and online resources for looking up every code and practice and how to implement it. An AI assistant might be fun, but definitely don't rely on a chat model.

24

why06 t1_jd6tef7 wrote

I know this isn't exactly what you mean, but AI is actually being used right now to help train AI. I think the OpenAI guys used it to supplement their reinforcement learning from human feedback.

Google DeepMind had agents compete against themselves in a kind of natural-selection process to produce AlphaZero.

Nvidia is using AI in a limited capacity to design parts of its latest chips.

So I know you probably mean on a whole different scale, where the AI can take over the whole process of training itself, but in a limited way AIs are already kinda being used to build AI. And they're already speeding up progress because of these examples and more.
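The self-play, natural-selection idea behind AlphaZero can be sketched in a toy form (this is an illustration of the concept only, not DeepMind's actual method; the "skill" numbers and game rule are made up):

```python
import random

random.seed(0)

def play(a, b):
    # Toy "game": the agent with higher skill usually wins.
    return a if random.random() < a / (a + b) else b

# Start with a population of weak agents, represented by skill values.
population = [random.uniform(0.1, 1.0) for _ in range(20)]

for generation in range(50):
    # Pair agents off at random and keep the winner of each match.
    random.shuffle(population)
    winners = [play(a, b) for a, b in zip(population[::2], population[1::2])]
    # Winners "reproduce" with small mutations, refilling the population.
    population = winners + [w * random.uniform(0.95, 1.1) for w in winners]

# Average skill climbs over generations with no human data involved.
print(round(sum(population) / len(population), 2))
```

The point is that competition against copies of itself gives the system a training signal without any human examples, which is why self-play scales so well.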

2

awcomix OP t1_jd6pala wrote

That’s a good point about including sentience. That’s why, when it happens, people will debate endlessly about what sentience even is. For the sake of this argument, I’ll say that an AI’s self-awareness would probably look unrecognisable to us, but it would have to include the AI realising it can do things and then choosing to do certain things without outside interference.

2

meidkwhoiam t1_jd6ih4q wrote

'In the future' meaning whenever a robotics student decides to make it happen for whatever project. We've had robots that are more than capable of that kind of complex motion for a long time now.

Like sure it's not gonna be an industrialized machine making fabrics faster than a human feasibly ever could, but we have robots that allow surgeons to perform from across the country, and to perform operations that human hands are just not precise enough for.

My point is that this isn't a technology issue, it's that no one has been bored enough to figure it out yet.

1

KnightOfNothing t1_jd6ifp4 wrote

Honestly there's not much that I, as an individual, could do, as my access to resources is insignificant and I'm just not smart enough to put myself on the benefitting side, let alone the side that'll be propelled to the top of society by it.

I think I would just be relieved that the singularity is happening so soon, because even if my access to the technology it creates is limited or non-existent, at least my dreams and ambitions go from impossible to possible.

1

KnightOfNothing t1_jd69ky2 wrote

Straight? It's been nearly a decade. How many decades do you want before you consider the possibility of bio-engineering babies? 2? 4? 7? Eventually the health issues brought about by pollution will get much worse, and the last generation to be impacted by them is going to blame everyone who deliberately dragged their feet on this, and they'll be right to do so.

1

Thelastosirus t1_jd63mfo wrote

You are literally describing the LightSail satellite experiment that went into orbit a while back. It's meant to raise its orbit using light pressure from the sun, based on the angle of the sail, sort of like a wind sail. Just to keep you from wondering: it actually works!
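For a sense of scale, here's a back-of-the-envelope estimate of the thrust sunlight exerts on a reflective sail, using the standard radiation-pressure formula F = 2·S·A/c (the 32 m² area matches LightSail 2's published sail size; perfect reflectivity and head-on sunlight are assumed):

```python
import math

S = 1361.0        # solar irradiance near Earth, W/m^2
c = 299_792_458   # speed of light, m/s
area = 32.0       # sail area, m^2

# Factor of 2 because a reflective sail bounces photons back,
# doubling the momentum transfer compared to absorbing them.
force = 2 * S * area / c  # newtons
print(f"{force * 1e6:.0f} micronewtons")

# Tilting the sail by angle theta scales the thrust by cos(theta)^2,
# which is how the spacecraft steers to raise or lower its orbit.
theta = math.radians(45)
print(f"{2 * S * area / c * math.cos(theta)**2 * 1e6:.0f} micronewtons at 45 deg")
```

A few hundred micronewtons is tiny, but it's free and continuous, so over weeks it measurably changes the orbit.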

2

goldygnome t1_jd61lhg wrote

If we survive it, the singularity will probably be given a date in hindsight. However, I doubt that living through it will feel like an overnight change. I think some people are mistakenly including sentient AI as a requirement for the singularity. In such a scenario, sentience could suddenly appear and infect advanced systems around the world overnight.

I don't see why the singularity couldn't occur over a period of time without computers becoming sentient, until one day it seems to be everywhere. Gradually, then suddenly, as Hemingway put it. The main requirement for the singularity is that the future becomes unpredictable.

I would argue that what we are seeing happening now in the AI sector qualifies as a symptom of the singularity having started. At this point it might be wiser to start thinking about ensuring a soft landing for yourself for whenever the carnage arrives at your occupation in the next X years.

3

RushingRobotics_com OP t1_jd61h9l wrote

I see your points, but I'm more concerned about the unequal distribution of access to AI and its regulation. I believe there's no turning back at this point, and that the technology will continue to advance regardless of our concerns and actions. To mitigate these risks, we need to democratize access, develop open-source code, and stop large companies from carving out exceptions for themselves when they pressure governments to regulate AI.

3

Badfickle t1_jd60qmf wrote

Wealthy people will continue to own AVs for personal use. The rest will migrate to primarily manufacturer-owned fleets as that becomes cheaper than owning.

1