Recent comments in /f/singularity
RemindMeBot t1_jdkm7vf wrote
Reply to comment by ihateshadylandlords in Absolute robotization and its impact on humanity by xSNYPSx
I will be messaging you in 2 years on 2025-03-25 01:56:26 UTC to remind you of this link
ihateshadylandlords t1_jdkm4nr wrote
I really hope robots are here by 2025. It would be nice to have a robot do the housework on weekends while I spend more time with my family.
!RemindMe 2 years
Xbot391 t1_jdkkgf4 wrote
Reply to comment by Loud_Clerk_9399 in What would an AGI actually give us? by MrEloi
But how will we afford it if we’re all living off our measly UBI?
WingsofmyLove t1_jdkf1k5 wrote
Reply to comment by econpol in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
Human creativity can be replicated by AI, considering it's just the result of environmental factors and human experiences.
econpol t1_jdke9wy wrote
Reply to comment by FTRFNK in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
Insurance will require doctors to use these tools.
econpol t1_jdke1gj wrote
Reply to comment by WingsofmyLove in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
I think it'll be hard to have AI researchers. Problem solving, even just identifying a problem space, is difficult to turn into an algorithm, especially the more fundamental your approach. Human creativity will likely still be essential for a long time.
econpol t1_jdkde9p wrote
Reply to comment by SoulGuardian55 in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
Medical errors are the number three cause of death in the US. AI is unstoppable.
WingsofmyLove t1_jdkbb0q wrote
Reply to comment by vivehelpme in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
"you still need humans in the loop if you want to see advancements" Would AI researchers not be the ones making the advancements? I assume physical robots would get to the point where they can operate without an operator 24/7
mckirkus t1_jdjwdqw wrote
Reply to comment by Ok-Let1086 in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
Self-driving cars would save thousands of lives, but they're not going to allow it. The difference here is that MDs will use it secretly.
Exel0n t1_jdjt26a wrote
Reply to comment by SgathTriallair in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
Wrong. Docs in the US are extremely overpaid. The annual supply of docs is capped by the medical cartel, resulting in a labor shortage and elevated wages.
And I don't give a damn that they "save lives". Just because they do doesn't warrant high pay. Labor prices should be determined by supply and demand in a true free market, not by merit.
Esquyvren t1_jdjolu6 wrote
Reply to comment by kevinzvilt in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
Why do I taste sound?
even_less_resistance t1_jdjnzl9 wrote
Reply to comment by YobaiYamete in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
I was just curious whether they felt it has a masculine vibe or not; I wasn't making a value judgment. I thought the POV on different languages gendering nouns by default was a good one, and I hadn't considered it. Not all questions are in bad faith.
YobaiYamete t1_jdjn0zr wrote
Reply to comment by even_less_resistance in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
Why do we call ships "she", etc.? People gender stuff all the time.
Loud_Clerk_9399 t1_jdjlhot wrote
Reply to What would an AGI actually give us? by MrEloi
Immortality
dang_duc_long_quan t1_jdjlfjc wrote
Reply to comment by IluvBsissa in how realistic is this scenario? Can we throw out all traditional systems? by overlydelicioustea
What's FALC?
Bubbly_Taro t1_jdjj0an wrote
Reply to What would an AGI actually give us? by MrEloi
Construction and maintenance.
Imagine a myriad of small robots doing repetitive tasks without much decision making.
Also making more worker robots, possibly with in situ materials off planet for future space missions.
Also any other tasks that humans routinely do and fail at, like driving, piloting, and diagnosing diseases. Eliminating fatigue, lazy shortcuts, and talentless hacks would reduce mortality in many areas.
Surur t1_jdjhw54 wrote
Reply to What would an AGI actually give us? by MrEloi
Depends on whether the AI is smart enough to drive a car or not. That by itself would be very impactful.
[deleted] t1_jdjf0kj wrote
Reply to What would an AGI actually give us? by MrEloi
A custom AI-driven Linux kernel with two copies of the same AI but with different philosophies. An AI operating system, crypto, and a base AI for robots.
Then development plans for:
-replicator
-medi-bay
even_less_resistance t1_jdjcznw wrote
Reply to comment by ddeeppiixx in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
Wow that is cool to know!
DolanDukIsMe t1_jdj8zre wrote
Reply to comment by SoulGuardian55 in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
No, literally. My mom died due to bullshit like racism and apathy. Fuck human doctors, man lol.
bactchan t1_jdj7oda wrote
Reply to comment by Rofel_Wodring in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
This is a bad take. If people FIND joy in working, that's one thing, but make-work for its own sake is what we have now, and it's bullshit. Society at large should benefit equally from the advancement of automation and be free to choose how to spend its time without threat to lives or needs (housing, food, etc.). Imagine how many people might discover and innovate in the arts with the benefit of extra free time, better mental health from the lack of constant existential crises, and generative AI tools to help them hone a skill or craft.
ddeeppiixx t1_jdj3l15 wrote
Reply to comment by even_less_resistance in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
If your mother tongue has gendered nouns, you sometimes tend to use he/she unconsciously. For example, in French AI is feminine, while in Arabic it's masculine.
JoeUrbanYYC t1_jdj2m70 wrote
Reply to comment by claushauler in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
Just make another AI that defends against lawsuits
begaterpillar t1_jdj2e2d wrote
Reply to comment by Whispering-Depths in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
tricorder version 1.0
green_meklar t1_jdkoezi wrote
Reply to My Objections to "We’re All Gonna Die with Eliezer Yudkowsky" [very detailed rebuttal to AI doomerism by Quintin Pope] by danysdragons
Listened to the linked Yudkowsky interview. I'm not sure I've ever actually listened to him speak about anything at any great length before (only reading snippets of text). He presented broadly the case I expected him to present, with the same (unacknowledged) flaws that I would have expected. Interestingly he did specifically address the Fermi Paradox issue, although not very satisfactorily in my view; I think there's much more that needs to be unpacked behind those arguments. He also seemed to get somewhat emotional at the end over his anticipations of doom, further suggesting to me that he's kinda stuck in a LessWrong doomsday ideological bubble without adequately criticizing his own ideas. I get the impression that he's so attached to his personal doomsday (and to being its prophet) that he would be unlikely to be convinced by any counterarguments, however reasonable.
Regarding the article:
>Point 3 also implies that human minds are spread much more broadly in the manifold of future mind than you'd expect [etc]
I suspect the article is wrong about the human mind-space diagrams. I find it almost ridiculous to think that humans could occupy anything like that much of the mind-space, although I also suspect that the filled portion of the mind-space is more cohesive and connected than the first diagram suggests (i.e. there's sort of a clump of possible minds, it's a very big clump, but it's not scattered out into disconnected segments).
>There's no way to raise a human such that their value system cleanly revolves around the one single goal of duplicating a strawberry, and nothing else.
Yes, and this is a good point. It hits pretty close to some of Yudkowsky's central mistakes. The risk that Yudkowsky fears revolves around super AI taking the form of an entity that is simultaneously ridiculously good at solving practical scientific and engineering problems and ridiculously bad at questioning itself, hedging its bets, etc. Intelligence is probably not the sort of thing that you can just scale to arbitrary levels, plug into arbitrary goals, and have it work seamlessly for those goals (or, if it is, that's probably a very difficult type of intelligence to design and not the kind we'll naively get through experimentation). That doesn't work all that well for humans, and it would probably work even worse for more intelligent beings, because they would require greater capacity for reflection and introspection.
Yudkowsky and the LessWrong folks have a tendency to model super AI as some sort of degenerate, oversimplified game-theoretic equation. The idea of 'superhuman power + stupid goal = horrifying universe' works very nicely in the realm of game theory, but that's probably the only place it works, because in real life this particular kind of superhuman power is conditional on other traits that don't mesh very well with stupid goals, or stupid anything.
>For example, I don't think GPTs have any sort of inner desire to predict text really well. Predicting human text is something GPTs do, not something they want to do.
Right, but super AI will want to do stuff, because wanting stuff is how we'll get to super AI, and not wanting stuff is one of ChatGPT's weaknesses, not strengths.
But that's fine, because super AI, like humans, will also be able to think about itself wanting stuff; in fact, it will be way better at that than humans are.
>As I understand it, the security mindset asserts a premise that's roughly: "The bundle of intuitions acquired from the field of computer security are good predictors for the difficulty / value of future alignment research directions."
>However, I don't see why this should be the case.
It didn't occur to me to criticize the computer security analogy as such, because I think Yudkowsky's arguments have some pretty serious flaws that have nothing to do with that analogy. But this is actually a good point, and probably says more about how artificially bad we've made the computer security problem for ourselves than about how inevitably, naturally bad the 'alignment problem' will be.
>Finally, I'd note that having a "security mindset" seems like a terrible approach for raising human children to have good values
Yes, and again, this is the sort of thing that LessWrong folks overlook by trying to model super AI as a degenerate game-theoretic equation. The super AI will be less blind and degenerate than human children, not more.
>the reason why DeepMind was able to exclude all human knowledge from AlphaGo Zero is because Go has a simple, known objective function
Brief aside, but scoring a Go game is actually pretty difficult in algorithmic terms (unlike Chess, which is extremely easy). I don't know exactly how Google did it; there are some approaches that I can see working, but none of them are nearly as straightforward or computationally cheap as scoring a Chess game.
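To give a sense of it, here's a minimal sketch of Tromp-Taylor-style area scoring (the board encoding and names are mine, not from any actual engine): flood-fill each empty region and award it to a color only if the region borders stones of that one color. Crucially, this assumes all dead stones have already been removed, and that judgment call is exactly the hard part.

```python
# Minimal sketch of area scoring for a final Go position.
# Assumes dead stones are already removed (the genuinely hard step).
# board: dict mapping (row, col) -> 'B' or 'W'; empty points are absent.

def score_area(board, size):
    """Return (black_points, white_points) under area scoring."""
    counts = {'B': 0, 'W': 0}
    seen = set()

    # Each stone on the board counts one point for its own color.
    for color in board.values():
        counts[color] += 1

    # Flood-fill every empty region; it scores for a color only if
    # all stones adjacent to the region belong to that single color.
    for row in range(size):
        for col in range(size):
            start = (row, col)
            if start in board or start in seen:
                continue
            region_size, borders, stack = 0, set(), [start]
            seen.add(start)
            while stack:
                r, c = stack.pop()
                region_size += 1
                for nbr in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if not (0 <= nbr[0] < size and 0 <= nbr[1] < size):
                        continue
                    if nbr in board:
                        borders.add(board[nbr])  # record bordering color
                    elif nbr not in seen:
                        seen.add(nbr)
                        stack.append(nbr)
            if len(borders) == 1:
                counts[borders.pop()] += region_size

    return counts['B'], counts['W']
```

The loop itself is cheap; all the difficulty the comment is pointing at lives in deciding which stones are dead before you ever call something like this.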
>My point is that Yudkowsky's "tiny molecular smiley faces" objection does not unambiguously break the scheme. Yudkowsky's objection relies on hard to articulate, and hard to test, beliefs about the convergent structure of powerful cognition and the inductive biases of learning processes that produce such cognition.
This is a really good and important point, albeit very vaguely stated.
Overall, I think the article raises some good points, of sorts that Yudkowsky presumably has already heard about and thinks (for bad reasons) are bad points. At the same time it also kinda falls into the same trap that Yudkowsky is already in, by treating the entire question of the safety of superintelligence as an 'alignment problem' where we make it safe by constraining its goals in some way that supposedly is overwhelmingly relevant to its long-term behavior. I still think that's a narrow and misleading way to frame the issue in the first place.