Recent comments in /f/singularity
Zer0D0wn83 t1_jadxb3r wrote
Reply to comment by brotherkaramasov in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
Username checks out
No_Ninja3309_NoNoYes t1_jadx5r2 wrote
Reply to Researchers from UNSW Sydney created a soft robot that can 3D bio-print inside the human body. by Dalembert
This is great news as long as the printed organs are not rejected. If they can print microbrains, we would potentially not need GPUs in the future. And soldiers could have two hearts. I think in a decade or two, all organs will be printable.
OutOfBananaException t1_jadx5f1 wrote
Reply to comment by HiddenPalm in Leaked: $466B conglomerate Tencent has a team building a ChatGPT rival platform by zalivom1s
.. and after dumping on the US it still won't tell them 🤣🤣
RabidHexley t1_jadwxc5 wrote
Reply to comment by Liberty2012 in Is the intelligence paradox resolvable? by Liberty2012
I'm not trying to actually define utopia. The word is just being used as shorthand for "generally very good outcome for most people". Which is possible even in a world of conflicting viewpoints, that's why society exists at all. Linguistic shorthand, not literal.
The actual definition of utopia in the literary sense is unattainable in the real world, yes. But our general wants and needs on a large scale aren't so divorced from each other that a positive outcome for humanity is inconceivable.
OutOfBananaException t1_jadwqrn wrote
Reply to comment by Olivebuddiesforlife in Leaked: $466B conglomerate Tencent has a team building a ChatGPT rival platform by zalivom1s
What kind of data do you mean? I don't believe they have a high quantity of quality domestic text training data, and they have stated they don't want to use worldwide data. It's not clear how they plan to resolve this.
Exarch_Maxwell t1_jadwm2n wrote
Reply to comment by Ok_Homework9290 in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
A lot of people tend to forget middle grounds as well. The AI doesn't need to be as good as you, or better, to replace you; it just has to make the people next to you productive enough that you are no longer necessary. Adjust for scale and you could have 30% of currently employed people unemployed really soon. How many of them can reskill quickly enough is another story.
Do give examples, though: cognitive labor that cannot be broken down into a series of smaller tasks which can then be automated.
Liberty2012 OP t1_jadwbcx wrote
Reply to comment by marvinthedog in Is the intelligence paradox resolvable? by Liberty2012
Humans have agency to change their own alignment, which places them in contradictory and hypocritical positions.
Sometimes this happens because the nature of our understanding changes. We have no idea how the AI would perceive the world. We may give it an initial alignment of "be good to humans". What if it later comes to the understanding that this directive is invalid, because humans are either "bad" or irrelevant? Therefore a hard mechanism must be in place to ensure alignment is retained.
Insciuspetra t1_jadw009 wrote
TooManyLangs t1_jadvyp7 wrote
Reply to comment by rya794 in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
this
It's already faster to train an AI than a human for such tasks.
Humans are still faster at learning easy things, like playing a new game, using a new tool, or learning a new word. So... are we doomed?
Liberty2012 OP t1_jadvgev wrote
Reply to comment by JVM_ in Is the intelligence paradox resolvable? by Liberty2012
I tend to agree, but there are a lot of researchers moving forward in this endeavor. The question is why? Is there something the rest of us are missing in regards to successful containment?
When I read topics related to safety, the language tends to be abstract. "We hope to achieve ...".
It seems to me that everyone sidesteps the initial logical conflict: proponents are proposing that a lower intelligence is going to "outsmart" a higher intelligence.
Liberty2012 OP t1_jadusvu wrote
Reply to comment by RabidHexley in Is the intelligence paradox resolvable? by Liberty2012
However, this is just another facet of the same problem: defining everything that should fall within the domain of AI control immediately creates conflicting views.
We are not even aligned ourselves. Not everyone will agree to the boundaries of your concept of what is a reasonable "utopia".
NotASuicidalRobot t1_jaduoxu wrote
Reply to comment by Ok_Homework9290 in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
That is reasonable; however, I think another significant factor is the massive improvement in job efficiency. For example, if 1 artist (just an easy example that I know of) can take on 5 times the work (including the human communications aspect, since the pure work-crunching aspect is now accelerated), then unless demand somehow increases 5 times as well, that's a few extra artists out of work.
marvinthedog t1_jadujce wrote
Reply to comment by Liberty2012 in Is the intelligence paradox resolvable? by Liberty2012
>What prevents it from changing that directive?
Its terminal goal (utility function). If it changes its terminal goal, it won't achieve its terminal goal, so that would be a very bad strategy for the ASI.
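A toy sketch of that argument (the goal, action names, and numbers here are all made up for illustration, not any real agent design): an agent that scores candidate actions by expected value under its *current* utility function will rate "change my goal" poorly, because the change is evaluated by the goal it still holds.

```python
# Toy model: a rational agent picks the action with the highest expected
# value under its CURRENT terminal goal. Switching goals scores poorly,
# because the switch itself is judged by the goal it would abandon.

def current_utility(outcome: str) -> float:
    """Hypothetical terminal goal: value outcomes with many paperclips."""
    return {"many_paperclips": 10.0, "few_paperclips": 1.0}[outcome]

# Each candidate action mapped to the outcome it is expected to produce.
actions = {
    "keep_goal_and_work": "many_paperclips",
    "switch_to_new_goal": "few_paperclips",  # new goal halts paperclip production
}

# The agent evaluates both actions with the utility function it has NOW.
best = max(actions, key=lambda a: current_utility(actions[a]))
print(best)  # keep_goal_and_work
```

So goal preservation falls out of plain expected-value maximization; no extra "don't change your goal" rule is needed in this toy setting.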
uswhole t1_jadu43n wrote
Reply to comment by techhouseliving in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
>Learn how to manage ai because it's still under our control for now. It's working for us, at least.
We can control AI made in America or the West, but we have no control over the ones developed in China, Russia, etc. They might be 20 or maybe 40 years behind, but I'm willing to bet they will try everything to surpass the ones in the US.
Liberty2012 OP t1_jadu2il wrote
Reply to comment by challengethegods in Is the intelligence paradox resolvable? by Liberty2012
Yes, that is also a possibility. However, we would also assume the ASI has access to all human knowledge. Even if we did nothing, it would know our nature and every scenario we have ever imagined about losing control to AI.
It would be potentially both defensive and aggressive just with that historical knowledge.
Mino8907 t1_jadu1o4 wrote
Reply to comment by More_Inflation_4244 in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
Well, the first two sentences I can get behind. But having an AI assistant would help almost anyone be productive, so no five years in an apprentice-like position would be required.
My understanding, given how fast AI technologies are advancing, is: why would anyone spend money to upskill when AI would make that job less profitable or completely unnecessary, since it could be performed by a nearly free and speedy AI?
So like many, I don't have the answer, but I wouldn't spend money to upskill. Only my take.
Zer0D0wn83 t1_jadtywi wrote
Reply to comment by More_Inflation_4244 in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
Took a dark turn at the end (pointlessly) but the rest isn't far off the mark.
[deleted] t1_jadtwxq wrote
Reply to comment by techhouseliving in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
[deleted]
Liberty2012 OP t1_jadto73 wrote
Reply to comment by marvinthedog in Is the intelligence paradox resolvable? by Liberty2012
They are related concepts. Containment is the safety net so to speak. The insurance that alignment remains intact.
For example, high level concept given as a directive "be good to humans". What prevents it from changing that directive?
Ok_Homework9290 t1_jadtgrb wrote
Reply to When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
I've commented something similar in the past on this sub:
I get the impression that this sub believes white collar work is nothing more than crunching numbers and shuffling papers, and that it therefore shouldn't be too hard to automate in the near future.
Knowledge work (in general) is a lot more than that, and anyone who works in a knowledge-based field (or is familiar with a knowledge-based field) knows this. Not only do I think you're underestimating the complexity of cognitive labor, I also think you're (as impressive as AI progress has been the last few years) overestimating how fast AI progresses and also gets adopted.
AI that's capable of fully replacing what a significant amount of knowledge workers do is still pretty far out, in my humble opinion, given how much human interaction, task variety/diversity, abstract thinking, precision, etc. is involved in much of knowledge work (not to mention legal hurdles, adoption, etc). I strongly suspect a multitude of breakthroughs in AI are needed in order for it to cover the full breadth of any and every white-collar job, as merely scaling up current models to their limits will only fully automate some aspects of knowledge work and many will remain to be solved (again, that's my suspicion, I'm not 100% sure).
Will some of these jobs disappear over, let's say, the next 10 years? 100%. There's no point in even denying that, nor is there any point in denying that much of the rest of knowledge work will undoubtedly change over the next time span and even more so after that, but I'm pretty confident we're a ways away from it being totally disrupted by AI.
That's just what I think.
More_Inflation_4244 t1_jadt7yv wrote
Reply to comment by Mino8907 in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
The offensive closing statement aside, is OP actually wrong? Genuinely asking.
Liberty2012 OP t1_jadt7dr wrote
Reply to comment by phaedrux_pharo in Is the intelligence paradox resolvable? by Liberty2012
Yes, I think you nailed it here with this response. That aligns very closely with what I've called the Bias Paradox. Essentially, humanity cannot escape its own flaws through the creation of AI.
We will inevitably end up encoding our own flaws back into the system in one manner or another. It is like a feedback loop from which we cannot escape.
I believe ultimately there is a very stark contrast of what visions people have of what "could be" versus the reality of what "will be".
I elaborate more thoroughly here FYI - https://dakara.substack.com/p/ai-the-bias-paradox
marvinthedog t1_jadt1wy wrote
Reply to comment by Liberty2012 in Is the intelligence paradox resolvable? by Liberty2012
>There must be some boundary conditions for behaviors which it is not allowed to cross.
That is not what I have heard/remembered from reading about the alignment problem. I don't see why a superintelligence that is properly aligned to our values would need any boundaries.
Iffykindofguy t1_jadt0at wrote
Reply to When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
If not today o'clock, then within the next 5-10 years. It wouldn't take AGI to reach that.
marvinthedog t1_jadxbb1 wrote
Reply to comment by Liberty2012 in Is the intelligence paradox resolvable? by Liberty2012
I don't think we humans have terminal goals (by the true definition of the term), and, if so, that is what separates us from the ASI.