Recent comments in /f/singularity

Henry8382 OP t1_jdm46qj wrote

I like the spirit of your response, but I fear that sometime between steps 1 and 3 there would be a high chance of being discovered and found out.

Also: what about the possibility of someone else making the same discovery you / your organisation just did — someone who is not at all concerned with the consequences, or who might want to keep the benefits for themselves? Are you willing to take that risk?

0

Henry8382 OP t1_jdm3ion wrote

I really should have worded this more clearly. The long-term aspects are being thoroughly covered and discussed in other posts. I am interested in the game plan to avoid utter chaos before and after the presence of AGI/ASI is known to the public, with all the consequences this entails.

It is your responsibility to decide the next steps.

Do you trust your government? Do you trust the UN?

In case you decide to make a public announcement: how do you establish worldwide trust that whoever assumes ultimate control of the AGI(s) — it's all software, after all — will use it for the benefit of all of humanity?

Do you spread the AGI to all major countries / powers worldwide — democratic and non-democratic — equally?

How do you make sure the mere announcement / rumor doesn't trigger the next / last world war?

—————-

Asking the AGI for ideas / having a discussion with it seems interesting.

0

turnip_burrito t1_jdm0555 wrote

You said we can ignore alignment, so that fictional organization may choose to:

  1. Ask AI what the best strategy might be.
  2. Make lots of money secretly.
  3. Use money to purchase decentralized computational assets. Sabotage others' ability to do so in a minimally harmful way to slow the growth of other AGI.
  4. Divert a proportion of computation to directly or indirectly researching cancer, hunger distribution, and other issues. The other proportion continues to accrue more computational assets and self-improve, while maintaining secrecy as best it can.
  5. Buy robotic factories and use the robots and purchased materials to create and manage secret scientific labs to perform physical work.
  6. Contact large company CEOs and politicians and bribe/convince them into letting the robotic labor replace all farmers and manage the farms. Pay the farmers using ASI-gathered funds.
  7. Build guaranteed anti-nuke defenses.
  8. Start free food distribution via robotic transport.
  9. Roll out free services for housing renovation and construction.
  10. In a similar manner, take over all industries' supply chains.
  11. Institute an equal but massive raw resource + processing allotment for each person.
  12. Begin space terraforming, mining, and colonization programs.
  13. Announce new governmental systems that allow individuals to choose and safely move to their preferred societies, facilitated by AI, if the society also chooses to accept them. If the society doesn't yet exist, it is created by the ASI for that group.
4

suttyyeah OP t1_jdlt7u8 wrote

Yeah your point about the selection of the personalities is well taken.

Regarding compute, I suspect you're right, but that does kind of scare me. Economic forces will probably favor systems that are easy to run and scale over systems that are better aligned to human values, so cruder approaches may have their limitations, but if they're a lot easier to implement they're going to become the norm.

0

scooby1st t1_jdlr8nd wrote

It's an interesting framework and would be worthwhile from an academic perspective.

In reality, one of the benefits of those simple and crude rules is exactly that simplicity. When you start setting intangible rules such as "aim for the ever-moving target of the latest in human morality", you are leaving a lot of room for interpretation. It may also set a tone of "ethics by majority opinion", which isn't exactly great. I would also be careful about the added computation: an approach that requires generating outputs from various personalities and reaching a consensus on a solution sounds time-consuming.

Finally, there's always the concern that selecting from a population of notable humans to align the AI could have unintended consequences. You are talking about people who rose to the highest ranks of status among humans and weren't afraid to push boundaries. There are some risks in aligning an AI to that.

5

vivehelpme t1_jdlnk0a wrote

The AI researcher can improve the AI system — as in, make ChatGPT run on a 2015-era smartwatch.

But that will not add novel chemotherapy regimens to the clinical practice of the healthbot.

Humans are constantly learning and observing. The AI systems we use today generalize from a gathered dataset: teach an AI what is right today but wrong tomorrow, and it will keep being wrong until it is fine-tuned again. There are degrees of flexibility and innovation we still haven't captured with AI.

1