Recent comments in /f/singularity
uberschweigen t1_jadsvw8 wrote
Reply to comment by Nmanga90 in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
I think the notion that substance misuse is not rampant in white collar jobs is probably misplaced.
techhouseliving t1_jadsovw wrote
Reply to When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
We're there, it's just not evenly distributed yet. The next generation in a few months will be professor intelligence at supercomputer speed, with the information of the entire Internet and the ability to create anything any artist or technician can, in seconds. In parallel, even. Self-improving its own code.
Learn how to manage AI, because it's still under our control for now. It's working for us, at least.
But we're super close to the singularity if you ask me and I work in this space. The speed is dizzying.
RabidHexley t1_jadsn49 wrote
Reply to comment by Liberty2012 in Is the intelligence paradox resolvable? by Liberty2012
Utopia in this context doesn't mean "literary" utopia, but the idea of a world where we've solved most or all of the largest existential problems causing struggle and suffering for humanity as a whole (energy scarcity, climate catastrophe, resource distribution, slave labor, etc.). Not all possible individual struggle.
That doesn't mean we've created a literally perfect world for everyone, but an "effective" utopia.
DragonForg t1_jadsgpo wrote
Reply to "But what would people do when all jobs get automated ?" Ask the Aristocrats. by IluvBsissa
Why do anything if it will just go away, because you have to work or do something you don't want to do?
Every day when I have free time, I get mad because I know I will have to go back to work in a few hours. And I know that being in grad school, it will never change. 5 years of this, with only a small amount of breaks. Plus the added stress of needing to be better than everyone just to get by, when everyone else is smarter than you to begin with.
It's why I am so interested in AI: it means I don't need to be smart, I can just know and understand it from the get-go. I can just relax and live my life, but also still have the passion for chemistry (my focus), making new discoveries without working or feeling obligated to work ~60 hours.
brotherkaramasov t1_jadsdss wrote
Reply to When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
The real answer is that we need to engage in political movements that support UBI and other means of wealth distribution. Until then, we will slowly cannibalize each other until many people become almost homeless while others have to work 16h a day to afford a basic lifestyle.
Professional-Noise80 t1_jads9wm wrote
Reply to "But what would people do when all jobs get automated ?" Ask the Aristocrats. by IluvBsissa
Interesting! I've been thinking about this a lot lately; you make a lot of sense.
JVM_ t1_jads1v0 wrote
Reply to Is the intelligence paradox resolvable? by Liberty2012
I don't think we can out-think the singularity. Just like a single human can't out-spin a ceiling fan, the singularity will be fast enough to be beyond human containment attempts.
What happens next, though? I guess we can try to build 'friendly' AIs that tend toward not ending society, but I don't think true containment can happen.
Nmanga90 t1_jadrtdy wrote
Reply to comment by just-a-dreamer- in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
Depends on whether people are investing money to make an AI related to that field, but yeah, that is the case.
PunkRockDude t1_jadrpj3 wrote
Reply to comment by rya794 in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
And that was before AI came into the picture. Even slower is companies' ability to support change, even when the employees are ready.
phaedrux_pharo t1_jadrn73 wrote
Reply to comment by Liberty2012 in Is the intelligence paradox resolvable? by Liberty2012
>By what mechanism do we think that will be achievable?
By "correctly" setting up the basic incentives, and/or integration with biological human substrates. Some ambiguity is unavoidable, some risk is unavoidable. One way to approach the issue is from the opposite direction:
What do we not do? Well, let's not create systems whose goals are to deliberately extinguish life on earth. Let's not create torture bots, let's not create systems that are "obviously" misaligned.
Unfortunately I'm afraid we've already done so. It's a tough problem.
The only solution I'm completely on board with is everyone ceding total control to my particular set of ethics and allowing me to become a singular bio-ASI god-king, but that seems unlikely.
Ultimately I doubt the alarms being raised by alignment folks are going to have much effect. Entities with a monopoly on violence are existentially committed to those monopolies, and I suspect they will be the ones to instantiate some of the first ASIs, with obvious goals in mind. So the question of alignment is kind of a red herring to me, since purposefully unaligned systems will probably be developed first anyway.
Frosty_Success_4528 t1_jadrkd5 wrote
Reply to When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
Guess it’s time to crack a beer and get a bag then pal
Liberty2012 OP t1_jadrb81 wrote
Reply to comment by 3_Thumbs_Up in Is the intelligence paradox resolvable? by Liberty2012
I don't think utopia is a possible outcome; it is a paradox itself. Essentially, all utopias become someone else's dystopia.
The only conceivable utopia is one designed just for you: placed into your own virtual utopia, designed for your own interests. However, even this is paradoxically both a utopia and a prison, as in welcome to the Matrix.
TFenrir t1_jadrb3u wrote
Reply to When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
I think if we can get a really good, probably sparsely activated, multimodal model that can do continual learning that shows transfer, à la Pathways, many white collar jobs are done.
Any system that has continual learning I think would also have short/medium/whatever term memory, and a context window that can handle enough at once that rivals what we can handle at any given time.
But the thing is, I think that unlike biological systems, there are many different inefficient ways to get us there as well. A very dense model that is big enough, with a better fine-tuning process, might be all we need. Or maybe the bottleneck is currently really context, as in-context learning is quite powerful; what if we suddenly have an efficiency breakthrough with a Transformer 2.0 that allows for context windows of 1 million tokens?
Also, maybe we don't need multimodal per se; maybe a system that is trained on pixels would cover all bases.
just-a-dreamer- OP t1_jadqqto wrote
Reply to comment by Nmanga90 in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
In that case, you cannot keep pace with AI as a white collar worker who is displaced.
If you need new education to fill a different professional job position, chances are AI will be developed faster than you can upskill to get to that level.
3_Thumbs_Up t1_jadq63e wrote
Reply to comment by RabidHexley in Is the intelligence paradox resolvable? by Liberty2012
There is an infinite multitude of ways history might play out, but they're not all equally probable.
The thing about the singularity is that its probability distribution of possible futures is much more polarized than humans are used to. Once you optimize hard enough for any utility curve you get either complete utopia or complete dystopia the vast majority of times. It doesn't mean other futures aren't in the probability distribution.
Nmanga90 t1_jadq4eh wrote
Reply to When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
Right now is that time. Time and cost are on a sliding scale with ML: the more money you commit, the faster you can train AI. As it is, an AI can be fine-tuned on basically the entirety of the world's knowledge of a specific subject in a month, with (relatively) significant monetary investment.
Liberty2012 OP t1_jadq1t1 wrote
Reply to comment by marvinthedog in Is the intelligence paradox resolvable? by Liberty2012
Well, the cage is simply a metaphor. There must be some boundary conditions for behaviors which it is not allowed to cross.
Edit: I explain alignment in further detail in the original article. Mods removed it from the original post, but hopefully it is OK to link in a comment. It was a bit much to put all in a post, but there was a lot of thought exploration on the topic.
https://dakara.substack.com/p/ai-singularity-the-hubris-trap
marvinthedog t1_jadplr2 wrote
Reply to Is the intelligence paradox resolvable? by Liberty2012
I don't think the strategy is to cage it but to align it correctly with our values, which probably is extremely, extremely, extremely difficult.
[deleted] t1_jadp3lj wrote
Reply to comment by Mino8907 in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
[deleted]
Remarkable-Okra6554 t1_jadojh8 wrote
Reply to When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
Few years ago
Mino8907 t1_jado8oj wrote
Reply to comment by [deleted] in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
Wow, what an ignorant response. Got it.
RabidHexley t1_jado7as wrote
Reply to comment by phaedrux_pharo in Is the intelligence paradox resolvable? by Liberty2012
Many seem to. There has been a serious rise in ASI Utopia vs. ASI Damnation dichotomy rhetoric of late (with an obvious lean toward stoking fear on the damnation side of the spectrum). As if there weren't an infinite multitude of ways history might play out.
[deleted] t1_jadnzbp wrote
Reply to comment by Mino8907 in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
[deleted]
Liberty2012 OP t1_jadnx3q wrote
Reply to comment by phaedrux_pharo in Is the intelligence paradox resolvable? by Liberty2012
Certainly there is a spectrum of behavior which we would deem allowable or not allowable. However, that in itself is an ambiguous set of rules or heuristics with no clear boundary, and it presents the risk of leaks of control due to poorly defined limits.
However, whatever behavior we place within the unallowable must be protected such that it cannot be self-modified by the AGI. By what mechanism do we think that will be achievable?
challengethegods t1_jadszzw wrote
Reply to Is the intelligence paradox resolvable? by Liberty2012
inb4 trying to cage/limit/stifle/restrict the ASI is the exact reason it becomes adversarial