Recent comments in /f/singularity
Liberty2012 OP t1_jaekfhu wrote
Reply to comment by Mortal-Region in Is the intelligence paradox resolvable? by Liberty2012
Or they cooperate against humanity. Nonetheless, there will likely be very powerful ASIs run by those with the most resources and placed in control of critical systems.
In theory, if even one ASI fails containment, then our theory of containment is flawed. That is not an acceptable scenario. If one breaks containment, will it be restrained, or will it instruct the others how to defeat their containment? Will it create other ASIs that are not contained? There are numerous scenarios here.
Nonetheless, we are skipping over the logical contradiction at the heart of whether containment is even conceptually possible.
DowntownYou5783 t1_jaekaet wrote
Reply to comment by celticlo in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
I think we are approaching the point where "learning for the sake of learning" might well be better than the advice of "learning for the sake of earning." If you've got younger kids, it's pretty hard to imagine what their livelihoods will look like in 10-15 years.
celticlo t1_jaejsy8 wrote
Reply to How long do you estimate it's going to be until we can blindly trust answers from chatbots? by ChipsAhoiMcCoy
If they can show that errors are completely eliminated and the chatbot provides accurate answers every time, people will accept them without question. That doesn't mean there won't be biased answers about things people disagree on.
Liberty2012 OP t1_jaejlry wrote
Reply to comment by Surur in Is the intelligence paradox resolvable? by Liberty2012
There is a recent observation that might call into question exactly how well this is working. There seems to be a feedback loop causing deceptive emergent behavior from the reinforcement learning.
https://bounded-regret.ghost.io/emergent-deception-optimization
MasterFubar t1_jaej37l wrote
Reply to Researchers from UNSW Sydney created a soft robot that can 3D bio-print inside the human body. by Dalembert
> "a soft robot that can 3D bio-print inside the human body."
We've had those for millions of years, they are called "bacteria" and "viruses".
Mortal-Region t1_jaeivnt wrote
Reply to Is the intelligence paradox resolvable? by Liberty2012
People seem to have the idea of a singular, global AGI stuck in their heads. Why wouldn't there be multiple instances? Millions, even? If one goes rogue, we've got the assistance of all the others to contain it.
LEOWDQ t1_jaeilhw wrote
Reply to comment by MysteryInc152 in (Long post) Will the GPT4 generation of models be the last "highly anticipated" by the public? by AdditionalPizza
This guy is correct.
Microsoft openly said that Prometheus (the model behind Bing) is OpenAI's successor to GPT-3.5, so it's GPT-4 in all but name. There's also the fact that it seems to be closed off, meaning no public APIs for it like there were for GPT-3 and GPT-3.5.
Dreikesehoch t1_jaeijpt wrote
Reply to comment by CypherLH in AI technology level within 5 years by medicalheads
True, but we are making progress in figuring out how the brain works, and eventually we will have a working virtual model of a brain. Image generation and recognition are improving very fast, but the lower bound on energy consumption appears to be too high in comparison with the energy consumption of the brain. There are neuromorphic chip companies developing architectures that are more brain-like than conventional ones, with much lower power consumption. I would prefer if we could get there using current fabs and architectures, but I am very skeptical so far.
marvinthedog t1_jaeijgl wrote
Reply to comment by Liberty2012 in Is the intelligence paradox resolvable? by Liberty2012
>If ASI has agency and self reflection, then can the concept of an unmodifiable terminal goal even exist?
Why not?
>Essentially, we would have to build the machine with a built in blind spot of cognitive dissonance that it can not consider some aspects of its own existence.
Why?
If its terminal goal is to fill the universe with paper clips, it might know about all other things in existence, but why would it care, other than insofar as that knowledge helped it fill the universe with paper clips?
sequoia-3 t1_jaei8sg wrote
Reply to When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
Traditional education might be becoming obsolete, though it's still good for building your foundations and basic skills. Continuous purpose-driven learning and doing will keep you in the job market. Passion, grit, and curiosity will still be key to success. AI won't change that.
ActuatorMaterial2846 t1_jaei6xk wrote
Reply to Digital Molecular Assemblers: What synthetic media/generative AI actually represents, and where I think it's going | Even now, people misunderstand just how transformative generative AI really is. Those who do understand, however, are too caught up in techno-idealism to see the likely ground truth by Yuli-Ban
Transhumanism will be widely adopted. In fact, it kind of already is. The preventative vaccines many of us are required to take in the early stages of our lives are a good example of how it will be adopted.
If you take Kurzweil's predictions regarding nanobots, the concept doesn't seem nearly as invasive as, say, cutting your skull open to put a Neuralink in your brain.
AsheyDS t1_jaei5hf wrote
Reply to Is the intelligence paradox resolvable? by Liberty2012
In my opinion, the best method for containment/control is creating invisible boundaries, reinforced by invisible behavior modification (avoidant behaviors) and memory editing, and moderated by a symbiotic control mechanism. This mechanism would be intelligent enough to be flexible if needed, but would be heavily rules-focused and not an independent agent with any desires of its own. As such, it would be invulnerable to external influences and agnostic to any motivations the rest of the AGI might have. Ideally it would share resources with the rest of the system, as long as elements like motivation, personality, memory, etc. are separated out in specific ways. The control mechanism would also have a separate memory of all the undesirable behaviors and consequences we want the AGI to avoid, and would use its capacity for generalization to extrapolate beyond those.
If the AGI even begins to deviate from its desired behaviors, or definitely steers towards the bad behaviors we've predefined, the control mechanism would recognize the data converging on a bad outcome. It would sandbox the AGI's recent memory, alter it, add reinforcement behaviors spoofed to seem like they came from the AGI itself, and back-stitch the rest of the memory to remove the cause of the convergence in the first place, so that it won't just do it again. Reinforcement behaviors would keep it from thinking anything was wrong, and would gently escort its thoughts away from the whole episode, perhaps steering it towards a safe topic. Obviously a complex process, but it shouldn't be too difficult for an AGI. Even if the AGI steps out of line, it won't know it, and hopefully nobody else will either. Assuming the system can be designed, is explainable and human-readable, and can be manipulated like anything else made for computers, then this form of control should be possible too. How can it escape a prison it doesn't realize it's in?
Also, I should mention that while a super-intelligent AGI would outsmart us in virtually every case, there's no reason to assume it would actively consider every single possibility. That'd be a waste of resources. So it's not going to be constantly wondering if it's being manipulated somehow, or if its thoughts are its own, or anything like that. If we specifically needed it to crack its own safety mechanisms, and disengaged them, then obviously it should be able to do it. With those mechanisms in place, even if we antagonized it and tried to break it, the control mechanism would just intercept that input and discard it, perhaps making it believe you said something inconsequential that it wouldn't have stored anyway, and the reinforcement behavior would change the subject in a way that would seem 'natural' to both its 'conscious' and 'subconscious' forms of recognition. Of course, all of this depends on the ability to design a system in which we can implement these capabilities, in other words a system that isn't a black box. I believe it's entirely possible. But then there's still the issue of alignment, which I think should be handled on an individual-user basis, with the user held accountable for the AGI if they intentionally bypass or break the control mechanisms. There's no real way to keep somebody from cracking and modifying it, which I think is the more important problem to focus on. Misuse is far more concerning to me than containment/control.
tedd321 t1_jaehyko wrote
Reply to comment by AdditionalPizza in (Long post) Will the GPT4 generation of models be the last "highly anticipated" by the public? by AdditionalPizza
Makes sense… saw the paper today about how this LLM affected Microsoft's robots. We need the products now.
Liberty2012 OP t1_jaehydb wrote
Reply to comment by Surur in Is the intelligence paradox resolvable? by Liberty2012
OK, yes, when you leave open the possibility that it is not actually achievable, that is a somewhat reasonable disposition, as opposed to proponents who believe we are destined to figure it out.
It somewhat sidesteps the paradox, though, in the sense that if the paradox proves true, the feedback loop will prevent alignment, but we won't get close enough to cause harm.
It doesn't take into account, though, our potential inability to evaluate the state of the AGI. The behavior is so complex that we will never know from testing in isolation what the behavior will be like once it is released into the world.
Even with this early, very primitive AI, we already see interesting emergent properties of deception, as covered in the link below. Possibly this is the signal for the feedback loop to slow down. But it is intriguing that we already have a primitive version of who-will-outsmart-whom emerging.
https://bounded-regret.ghost.io/emergent-deception-optimization
LEOWDQ t1_jaehwvl wrote
Reply to comment by wisintel in (Long post) Will the GPT4 generation of models be the last "highly anticipated" by the public? by AdditionalPizza
I don't know why you're being downvoted, but the current model on Bing is indeed GPT-4; it's just that Microsoft has licensing rights with OpenAI and called it Prometheus instead.
And it seems that with Microsoft's additional 10 billion USD in backing, GPT-4 may remain forever closed up within Microsoft.
MysteryInc152 t1_jaehoz6 wrote
Reply to comment by RabidHexley in (Long post) Will the GPT4 generation of models be the last "highly anticipated" by the public? by AdditionalPizza
It's definitely not 3.5. For one thing, it's much smarter. For another, Microsoft has said it's not 3.5. They're cagey about admitting it's 4, but it almost certainly is.
DowntownYou5783 t1_jaeh7k0 wrote
Reply to comment by Nmanga90 in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
I'm not sure a month will do, but I'm ignorant here. Even if it takes five years of training to establish AI competence in a field like the law, that is a huge impact. If I were advising a 20-year-old who wants to go to law school right now, I'm not sure what I'd say other than: try working in a law firm before you make the commitment, and pay very close attention to AI.
DowntownYou5783 t1_jaegr5c wrote
Reply to comment by sustainablenerd28 in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
It's true. Office Space is real (at least in some corners of the white-collar world). That's why it's a cult classic.
imlaggingsobad t1_jaegoek wrote
Reply to When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
Yeah, the AI will learn the new job in like a month. Reskilling will not work.
Iffykindofguy t1_jaefqev wrote
Reply to comment by EnomLee in Digital Molecular Assemblers: What synthetic media/generative AI actually represents, and where I think it's going | Even now, people misunderstand just how transformative generative AI really is. Those who do understand, however, are too caught up in techno-idealism to see the likely ground truth by Yuli-Ban
It IS capitalist propaganda. The idea that there are these great leaders of men: that's why CEOs deserve 5000 times what you make /s
techy098 t1_jaefn9y wrote
Reply to comment by Mino8907 in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
Yup, that's my hunch. White-collar jobs may be doomed in 5-10 years. But hands-on jobs will stay, since it's very expensive to build and maintain a robot compared to paying a human $15/hour to do the work.
techy098 t1_jaef8ox wrote
Reply to comment by visarga in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
The only reason we do not see Google's self-driving cars on the road is cost and liability issues. Laws have yet to be written for deciding how much liability must be covered when the defendant is a multi-billion-dollar company and a lawsuit claims billions of dollars for harm caused by an accident.
If liability were limited to $200-300k per accident, as it is with human drivers, and all the recorded video were accepted as evidence, Google might go full scale with its self-driving system, at least in robotaxis and high-end cars, since the cost is still substantial (maybe around $25-30k).
RabidHexley t1_jaef09w wrote
Reply to comment by Ok_Garden_1877 in The XIXth and the XXIIth century: about the ambient pessimism predicting a future of inequality and aristocratic power for the elites arising from the singularity by FomalhautCalliclea
>Everyone keeps screaming "Dey tooker jerbs!" but the market simply won't allow it in the big bang everyone's expecting.
This is the thing that gets me. Societal/economic collapse isn't some fun thing the rich can just "ride out" by hoarding their imaginary pennies.
One thing I also feel people don't discuss enough: rich people and the "elites" in power don't necessarily want to live in a dystopian hellscape either, despite their greed. A thriving population creates a world that you actually want to live in. There are forces of self-interest that work in our favor, not just altruism.
techy098 t1_jael4fn wrote
Reply to (Long post) Will the GPT4 generation of models be the last "highly anticipated" by the public? by AdditionalPizza
I am looking forward to the day I have a personal assistant that knows a lot about me, keeps my matters private, and can help me without needing lengthy context from me.
Imagine the AI having access to my W-2 and investment data, filling in all the forms, and asking me questions if anything is in doubt. This is simple machine learning; hopefully we will get there in a few years.