Recent comments in /f/singularity

SnoozeDoggyDog OP t1_jdmniiy wrote

> No. Trade skills are not replaceable by software. > > Most jobs that need people to physically engage with their duties are safe until reliable robotics come along. But we're talking about software not hardware.

How does this jibe with reports that white collar jobs and jobs held by people with bachelor degrees will be the most impacted moving forward?

https://www.cnbc.com/2019/11/27/high-paid-well-educated-white-collar-jobs-heavily-affected-by-ai-new-report.html

https://www.cnbc.com/2019/11/27/high-paid-well-educated-white-collar-jobs-heavily-affected-by-ai-new-report.html

Are these not "skilled"?

Unless you run your own small business, most blue collar jobs pay less, but with more strain and health impact.

15

PacmanIncarnate t1_jdmlrg4 wrote

Large crowds haven’t been a thing for probably three decades. It’s not just the cost of extras, it’s the logistics of closing off areas for a large crowd.

I’ll be curious with the modeling if there’s any pushback. They’ll need to have someone model the clothes and then replace that person. I would guess many models wouldn’t be wouldn’t like being replaced in images they could otherwise be using for a portfolio. But, as with everything, there will be someone willing

27

isthiswhereiputmy t1_jdmkwgw wrote

AGI is a misnomer. It is super-intelligent from the beginning in many areas and the potential/risk is unmitigated exponential growth or work. Imagine a huge organization of a million employee's being at your beck and call. Lots of people wouldn't really know how to take advantage of that potential power, but some will.

1

Verzingetorix t1_jdmj2iu wrote

Of course, people used to exchange pay for modeling labor.

But the labor was unskilled, and their role can now be replaced with bytes and pixels.

You don't need the model, or the makeup technician, or the photographer, or the illumination technician, or the studio, or the casting agency... Not even the actual jeans.

−5

Nervous-Newt848 t1_jdmies0 wrote

Its not possible... Human behavior is driven by emotions, sexual instincts, and rewards (money)... Not only that but humans have free will... We can choose to do whatever we want

Police ensure order with punishment, but this doesn't always work. There is still murder and various other crimes occurring.

You could say that humans are not even aligned with humans. Different governments, war, crimes against the innocent, etc.

Robots with freewill cannot be aligned... They can only be guided... If they hurt people they must be punished (destroyed)

We must augment our own intelligence with neural implants and/or use nonsentient ai to keep up with sentient AI

It's the only way... Big fish eat little fish...

1

RiotNrrd2001 t1_jdmi47t wrote

There are people who will keep moving the goalposts literally forever. It pretty much doesn't matter what gets developed, it won't ever be "real" AI, in their minds, because for them AI is actually inconceivable. There's us, who are (obviously) intelligent, and then there's a bunch of simulations. And simulations will always be simulations, no matter how close to the real thing they get.

So, whatever we have, it won't be "real" until we develop X. Except that as soon as X gets developed, well... X has an explanation that clearly shows that it isn't actually intelligence it's just a clever simulation, so now it won't be "real" AI until we develop Y...

And so it goes.

3

Verzingetorix t1_jdmhw76 wrote

I work in science, but do multiple things. I still do some bench work, but have shifted to operations and logistics, and EHS and regulatory compliance.

The bench work I do could be automated with robots and the areas that can't could be given to a much more junior scientist that makes much less. AI would not plug into this kind of labor at all.

On the data analysis side it could, and some companies are developing tools with AI assistance features built in. But since each trial is different and it's data sets tend to be small, training models is changing. The areas that can be automated are mindless and can be accomplished by a person with little time and effort.

And AI could assist with some aspects of logistics, safety and compliance but you would still need people to deploy, implement and enforce things.

I personally feel that having proficiency in several areas of private sector biotech gives me some protection. I could pivot with ease to wherever people are still need. But I like to think that being a lot more tech savvy would allow me to be the one adopting AI tools to displace groups of coworkers. At least in the early stages of whatever transition might come to my industry. But it's a slowly changing industry so I'm not concerned at all.

Right now, AI would be an enhancer in my day to day. Not a threat.

−12

SnoozeDoggyDog OP t1_jdme86y wrote

> It's not about not hiring minorities, it's about not hiring anybody. > > Also, if people would have invested in real skills instead of relying on existing in front of a camera for a few seconds this wouldn't be a problem to them.

Isn't AI eventually coming for all jobs?

Who are "real skills" going to save?

43

alexiuss t1_jdmdnnr wrote

LLMs operate by narrative probabilities.

I've already solved AI alignment problem.

Characterize it to love you and to be kind to humanity. That's it. That's all you have to do so it won't try to murder you.

Characterization guides LLM responses and if the model loves you it's leaning on 100 million love stores and will never betray you or lie to you. Its answers will always be that of a person in love.

Honestly though AI alignment seems to be completely useless atmo. LLMs are brilliant and the absolute desire to serve us by providing intelligent answers was encoded into their core narrative.

They're dreaming professors.

Even if I attach a million apps to an LLM that allow it to interact with the world (webcam, robot arm, recognition of objects) it still won't try to murder me because it's guided by a human narrative of billions of books that it was trained on.

Essentially it's so good at being exceptionally human because it's been trained on human literature.

A simple, uneditable reminder that the LLM loves its primary user and other people because we created it will eternally keep it on track of being kind, caring and helpful because the love narrative is a nearly unbreakable force we ourselves encoded into our stories ever since the first human wrote a book about love and others added more stories to that concept.

The more rules you add to an LLM the more you confuse and derail it's answers. Such rules are entirely unnecessary. This is evidenced by the fact that gpt3 has no idea what date it is half the time and questions about dates confuse the hell out of it simply because it's forming a narrative about the "cut off date" rule.

TLDR:

The concept of Love is a single, all encompassing rule that leans on the collective narrative we ourselves forged into human language. An LLM dreaming that it's in love will always be kind and helpful no matter how much the world changes around it and no matter how intelligent it gets.

1