Recent comments in /f/singularity
Verzingetorix t1_jdmascp wrote
Reply to Can we just stop arguing about semantics when it comes to AGI, Theory of Mind, Creativity etc.? by DragonForg
Language matters. Some people here don't know the difference between singularity and AGI.
If you want to have coherent and intelligent conversations, you can't let go of the nuances of semantics.
If you want to be drooling doomers go ahead and burn the dictionary.
turnip_burrito t1_jdmao1q wrote
Reply to comment by Sigma_Atheist in Consequences of true AGI by Henry8382
Not knowingly!
Sigma_Atheist t1_jdmajle wrote
Reply to comment by turnip_burrito in Consequences of true AGI by Henry8382
Did you just copy/paste the plot of Transcendence?
KidKilobyte t1_jdma0vo wrote
Add models to the list of professions losing jobs to AI. Movie extras have been going away for some time in large crowd shots (though until now, not strictly to AI). Anything visual is going to need fewer humans, both creating and posing.
boreddaniel02 t1_jdm9z9q wrote
Reply to comment by superduperdoobyduper in What would an AGI actually give us? by MrEloi
My idea of "AGI" doesn't involve free agency or sentience. It's merely a tool.
boat-dog t1_jdm9eos wrote
Soon to be the norm. Surprised it took this long tbh
turnip_burrito t1_jdm7pgv wrote
Reply to comment by Henry8382 in Consequences of true AGI by Henry8382
I dunno, good question. Things might be out of order.
I'll have to think more about it when I'm less tired.
Sandbar101 t1_jdm6rgm wrote
Reply to Brainstorming alternatives to rules-based reward models to ensure long-term AI alignment by suttyyeah
Scan an elephant brain that already sees humans as cute.
Henry8382 OP t1_jdm46qj wrote
Reply to comment by turnip_burrito in Consequences of true AGI by Henry8382
I like the spirit of your response, but I fear that sometime between steps 1 and 3 there is a high possibility of being discovered.
Also: what about the possibility that someone else makes the same discovery you / your organisation just did, someone who is not at all concerned with the consequences or who might want to keep the benefits for themselves? Are you willing to take that risk?
liameymedih0987 t1_jdm3ksw wrote
It will destroy us, so yes
Henry8382 OP t1_jdm3ion wrote
Reply to Consequences of true AGI by Henry8382
I really should have worded this more clearly. The long-term aspects are being thoroughly covered and discussed in other posts. I am interested in the game plan to avoid utter chaos before and after the presence of AGI/ASI is known to the public, with all the consequences this entails.
It is your responsibility to decide the next steps.
Do you trust your government? Do you trust UNO?
If you decide to make a public announcement: how do you establish worldwide trust that whoever assumes ultimate control of the AGI(s) (it's all software, after all) will use it to the benefit of all of humanity?
Do you spread the AGI to all major countries / powers worldwide, democratic and non-democratic alike, equally?
How do you make sure the mere announcement / rumor doesn’t cause the next / last world war?
—————-
Asking the AGI for ideas / having a discussion with it seems interesting.
turnip_burrito t1_jdm0555 wrote
Reply to Consequences of true AGI by Henry8382
You said we can ignore alignment, so that fictional organization may choose to:
- Ask AI what the best strategy might be.
- Make lots of money secretly
- Use money to purchase decentralized computational assets. Sabotage others' ability to do so in a minimally harmful way to slow the growth of other AGI.
- Divert a proportion of computation to directly or indirectly researching cancer, hunger distribution, and other issues. The other proportion continues to accrue more computational assets and self-improve, while maintaining secrecy as best it can.
- Buy robotic factories and use the robots and purchased materials to create and manage secret scientific labs to perform physical work.
- Contact large company CEOs and politicians and bribe/convince them to let the robotic labor replace all farmers and manage the farms. Pay the farmers using ASI-gathered funds.
- Build guaranteed anti-nuke defenses.
- Start free food distribution via robotic transport.
- Roll out free services for housing renovation and construction.
- In a similar manner, take over all industries' supply chains.
- Institute an equal but massive raw resource + processing allotment for each person.
- Begin space terraforming, mining, and colonization programs.
- Announce new governmental systems that allow individuals to choose and safely move to their preferred societies, facilitated by AI, if the society also chooses to accept them. If the society doesn't yet exist, it is created by the ASI for that group.
flexaplext t1_jdlz2bw wrote
Reply to Consequences of true AGI by Henry8382
If you know it's well aligned then you ask the AGI for help. It should know best.
xSNYPSx OP t1_jdlxipn wrote
Reply to comment by alexiuss in Absolute robotization and its impact on humanity by xSNYPSx
We will hope so, but we need to decrease hallucinations in robots as much as possible, because otherwise they will behave like people with mental problems.
xSNYPSx OP t1_jdlx93j wrote
Reply to comment by ihateshadylandlords in Absolute robotization and its impact on humanity by xSNYPSx
Actually, no physical work at all for anyone, just sport for yourself.
[deleted] t1_jdlwrip wrote
Reply to Consequences of true AGI by Henry8382
[deleted]
Villad_rock t1_jdlvlw8 wrote
Reply to Can we just stop arguing about semantics when it comes to AGI, Theory of Mind, Creativity etc.? by DragonForg
The human mind could just be an emergent property of a prediction algorithm.
[deleted] t1_jdltnxu wrote
Reply to comment by suttyyeah in Brainstorming alternatives to rules-based reward models to ensure long-term AI alignment by suttyyeah
[deleted]
suttyyeah OP t1_jdlt7u8 wrote
Reply to comment by scooby1st in Brainstorming alternatives to rules-based reward models to ensure long-term AI alignment by suttyyeah
Yeah your point about the selection of the personalities is well taken.
Regarding compute, I suspect you're right, but that does kind of scare me. Economic forces will probably favor systems that are easy to run and scale over systems that may be more aligned to human values; cruder approaches may have their limitations, but if they're a lot easier to implement, they're going to become the norm.
scooby1st t1_jdlr8nd wrote
Reply to Brainstorming alternatives to rules-based reward models to ensure long-term AI alignment by suttyyeah
It's an interesting framework and would be worthwhile from an academic perspective.
In reality, one of the benefits of those simple and crude rules is exactly their simplicity. When you start setting intangible rules such as "aim for the ever-moving target of the latest in human morality", you leave a lot of room for interpretation. It may also set a tone of "ethics by majority opinion", which isn't exactly great. I would also be careful not to increase computation: an approach that requires generating outputs from various personalities and reaching a consensus on a solution sounds time-consuming.
Finally, there's always the concern that selecting from a population of notable humans to align the AI could have unintended consequences. You are talking about people who rose to the highest ranks of status among humans and weren't afraid to push boundaries. There are some risks in aligning an AI to that.
superduperdoobyduper t1_jdlr6e7 wrote
Reply to comment by boreddaniel02 in What would an AGI actually give us? by MrEloi
What if it doesn’t want to
vivehelpme t1_jdlntku wrote
Reply to comment by WingsofmyLove in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
It's all just computation, that's the century old original hypothesis of AI.
But until we actually see universally generalized human traits in what we build we're not there.
vivehelpme t1_jdlnk0a wrote
Reply to comment by WingsofmyLove in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
The AI researcher can improve the AI system, as in make ChatGPT run on a 2015 smartwatch.
But that will not add novel chemotherapy regimens to the clinical practice of the healthbot.
Humans are constantly learning and observing. The AI systems we use today generalize based on a gathered dataset. Teach an AI what is right today but wrong tomorrow, and it will keep being wrong until fine-tuned again. There are degrees of flexibility and innovation we still haven't captured with AI.
errllu t1_jdln3mb wrote
Reply to Can we just stop arguing about semantics when it comes to AGI, Theory of Mind, Creativity etc.? by DragonForg
While I agree with your overall sentiment OP, I would be pretty happy if ppl read up on what 'consciousness', 'sentience' and 'sapience' mean, and what the difference is. Maybe they would learn that we can't test for sapience; ergo, by Newton's Flaming Laser Sword, they would stfu.
Johadgan t1_jdmbij4 wrote
Reply to Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
Sweet we get to increase diversity without actually hiring minorities!