Recent comments in /f/singularity

Liberty2012 OP t1_jae3380 wrote

The closest would be our genetic encoding of behaviors, or possibly other limits of our biology. However, we attempt to transcend those limits as well with technological augmentation.

If ASI has agency and self reflection, then can the concept of an unmodifiable terminal goal even exist?

Essentially, we would have to build the machine with a built-in blind spot, a kind of cognitive dissonance, so that it cannot consider some aspects of its own existence.

1

RabidHexley t1_jae2c7j wrote

>The AI will have to take the abstract and resolve it to something concrete. Either we tell it how to do that, or we leave that decision up to the AI, which brings us back to the whole concept of AI safety: how much agency does the AI have, and what will happen?

This is only the case in a hard (or close to hard) take-off scenario where AI is trying to figure out how to form the world into an egalitarian society from the ground up given the current state.

It's possible that we achieve advanced AI but global change happens much more slowly, trending towards effective pseudo-post-scarcity via highly efficient renewable energy production and automated food production.

Individual (already highly socialized) nation-states start instituting policies that trend those societies towards egalitarian structures. These social policies start getting exported throughout the Western and eventually Eastern worlds. Generations pass, and social unrest in totalitarian and developing nations leads to technological adoption and similar policies and social structures forming.

Socialized societal structures and the use of automation increase over time, causing economic conflict to trend towards zero. Over the very long term (entering into centuries), certain national boundaries begin to dissolve as the reasons for those structures' existence begin to be forgotten.

I'm not advocating this as a likely outcome. It's just a hypothetical, barely-reasonable scenario for how the current world could trend towards an egalitarian, post-scarcity society over a long time-span via technological progress and AI, without the need for AGI to take over the world and restructure everything. Just to illustrate that there are any number of ways history can play out besides "AGI takes over and either fixes or destroys the world."

2

visarga t1_jae1edb wrote

> you cannot keep pace with AI

We are not competing with AI. We are competing with other people who use AI. Everyone has and will have AI. Using AI won't give you a comparative advantage in 2030.

Companies that want to scale AI need people. AI really shines when it is supported; you need people around it to maximise its value.

If you want to get rid of your human employees and use only AI, your competition will eat your lunch. They will team up AI with humans and be faster and more creative than you. Competition won't allow companies to simply get rid of people.

All this extra creativity and work enabled by AI will be eaten up by our expanding desires and entitlement. In 2030 the expectations of the public will be sky-high compared to now, and companies will have to provide better products to keep up.

6

-zero-below- t1_jae1ah3 wrote

A big reason that AI threatens humans in terms of labor is how taxation is done right now: AI is a capital expense and can reduce tax costs, while human labor is very expensive, partly because of wages but also because of the heavy burden of payroll taxes.

I'd suspect that AI replacement of human labor could be delayed significantly by simply removing payroll and income taxes and instead taxing capital investments and/or corporate profits.

Right now, any improvement that AI and machine labor provides is heavily subsidized by the population -- from the company's perspective, it's an artificially cheap source of production.
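To make the incentive concrete, here's a rough back-of-the-envelope sketch. All figures and rates are invented for illustration; real tax treatment varies widely by jurisdiction.

```python
# Hypothetical comparison of annual employer costs (all numbers made up).

def human_cost(wage, payroll_tax_rate=0.15):
    """Annual cost of a human worker: wage plus employer payroll taxes."""
    return wage * (1 + payroll_tax_rate)

def ai_cost(capex, useful_life_years=5, annual_opex=2_000):
    """Annual cost of an AI system: straight-line depreciation of the
    capital expense plus upkeep. Unlike payroll, the capital expense is
    typically deductible, shrinking the effective cost further."""
    return capex / useful_life_years + annual_opex

print(human_cost(60_000))   # 69000.0 -- wage plus 15% payroll tax
print(ai_cost(100_000))     # 22000.0 -- depreciation plus upkeep
```

Under these invented numbers the AI system costs roughly a third as much per year, before even counting the tax deduction, which is the subsidy the comment is pointing at.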

6

phriot t1_jae1348 wrote

I'm already a white collar worker with a PhD. While I am always learning, if I have to receive a new credential to prove additional competencies, I doubt I have more than one go around left before that's entirely unfeasible. This is partly a funding thing, and partly a time thing. Having degrees already, I'm pretty sure that I'm ineligible for federal student loans for another Bachelor's, and getting into a second PhD program with assistantships would be difficult, if not impossible. That leaves me likely self-funding a Master's Degree. Doing this more than once would wipe my finances out beyond the point where there would ever be a payoff.

I think a better route is for people to self-learn as much as they can about where their fields are heading and the tools that are on the horizon. I believe it will likely be easier to evolve with your current degree than to bank on repeatedly getting new ones. Try to be the last "you" standing, as it were. This could involve getting certificates, certifications, or even new degrees if you can get them paid for, but I see this as extending skills rather than replacing them. What I can't see is saying "Okay, my accounting job is likely to get automated, so I'll get a Master's in Computer Science. Okay, I probably won't be senior enough to survive chatbots coming for entry-level coding positions, so in 3 years I'll go get an Associate's in robot repair. Okay, now robots are all modular and self-repairing, so it's back to another Master's in Space Civil Engineering." You'll just never be working long enough to make any progress other than always having an entry-level job.

6

Raychao t1_jae103p wrote

I think what is happening is there is about to be an explosion of AI generated text and imagery flooding all the consulting, blog writing and marketing, design and sales jobs..

Sales and Consulting firms churn and recycle the same wordage over and over again in their PowerPoint decks.. This is already largely cut+paste+tweakage.. AI can do that..

Imagery, you give it a phrase or string and it can generate all this cool looking imagery.. This is like 90% of marketing and graphic design jobs.. The rest is tying it together into the brand.. AI can do that..

The thing is what is the point in AI writing content for AI to consume? The humans sure as shit won't be reading it all.. We meatbags are lazy, we'll just get the AI to read what the AI produces..

There will be an absolute mountain of content produced but no one will be reading it..

30

Liberty2012 OP t1_jadzsar wrote

> But our general wants and needs on a large scale aren't so divorced from each other that a positive outcome for humanity is inconceivable.

In the abstract, yes; however, even slight misalignment is where all of society's conflicts arise. We have civil unrest and global war even though, in the abstract, we are all aligned.

The AI will have to take the abstract and resolve it to something concrete. Either we tell it how to do that, or we leave that decision up to the AI, which brings us back to the whole concept of AI safety: how much agency does the AI have, and what will happen?

0

EnomLee t1_jadzhqg wrote

Yuli-Ban actually puts effort into what they want to say, which easily makes them worth more than a thousand post-ChatGPT users here, who only have their shower thoughts and Terminator gifs to offer.

11

onyxengine t1_jadzeid wrote

We kinda are, if the industry experts in the field you want to join are collaborating with machine learning engineers to build an AI that streamlines their workflows and knows what they know. You’re not going to become an industry expert before that AI becomes a tool that replaces the industry experts.

13

visarga t1_jadzefz wrote

There's a long way from "impressive demo" to "replacing humans". Self-driving cars could impress us in demos even 10 years ago, but they still can't operate on their own, even now.

If you work in ML, you tend to know the failure modes and issues much better than the public does, so you tend to be less optimistic. Machine learning works only when the problem is close to the training data; it doesn't generalise well, and you have to get good data if you want good results.

3

rya794 t1_jadzdj8 wrote

The question isn't just about whether or not you could get a job in 3-4 years; it's about whether or not the investment makes sense. Unless you plan to be employed in that field for more than about 7 years, the answer is almost certainly no.

Are you confident you can identify a field that will still require your labor in 10-11 years?
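The break-even arithmetic behind that "7ish years" intuition can be sketched with hypothetical numbers (tuition, salaries, and time out of work are all invented for illustration):

```python
# Toy payback calculation for retraining (all numbers hypothetical).

def payback_years(degree_cost, years_out_of_work, old_salary, new_salary):
    """Years of work in the new field needed to recoup the cost of
    retraining: tuition plus forgone earnings, divided by the annual
    raise the new credential buys you."""
    total_cost = degree_cost + years_out_of_work * old_salary
    raise_per_year = new_salary - old_salary
    return total_cost / raise_per_year

# e.g. a $40k degree, 2 years out of work at a $70k salary, returning
# at $90k: (40000 + 140000) / 20000 = 9 years just to break even.
print(payback_years(40_000, 2, 70_000, 90_000))  # 9.0
```

Under these made-up figures, the field would need to keep demanding your labor for nearly a decade before the degree pays for itself, which is the commenter's point.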

3

RabidHexley t1_jadyhsb wrote

>Once you optimize hard enough for any utility curve you get either complete utopia or complete dystopia the vast majority of times.

Yeah, if we assume the future is guaranteed to trend towards optimizing a utility curve. That isn't necessarily how the development and use of AI will actually play out. You're picking out data points that are actually only a subset of a much larger distribution.

1

OutOfBananaException t1_jadycev wrote

They have pretty much stated they can't scrape English data, as it has too much Western bias for their liking. They may be able to filter it, but as we've seen with ChatGPT, that's not straightforward, and things will fall through the cracks. That makes life difficult for censors.

In domains where they have access to large volumes of data that doesn't need heavy curating (outside of text), they should be able to do fine.

2