Recent comments in /f/singularity

AsheyDS t1_ja0k2zd wrote

You're basically asking 'hey what if we designed a weapon of mass destruction that kills everybody?'. I mean, yeah.. what if? You're just assuming it will be trained on "all human historical data", you're assuming our fiction matters to it, you're assuming it has goals of its own, and you're assuming it will be manipulative. Yet you've offered no explanation as to why it would choose to manipulate or kill, or why it would have its own motives and why they would be to harm us.

4

nocturnalcombustion t1_ja0jdj2 wrote

Maybe hate speech is okay if it’s the people I don’t like. Heh jk, sort of.

To me, there are some meaningful, if not crisp, distinctions:

  • groups that are born that way vs groups where members control their membership.
  • groups where members can vs. can’t conceal their membership in the group.

Beyond that, I don’t like the idea of asymmetrical value judgments about when hate speech is okay. I could be missing some important distinctions though.

3

folk_glaciologist t1_ja0hnaq wrote

I went through a period of getting annoyed at people being unimpressed by ChatGPT but I've decided to just let it go. A few observations and theories of mine about why they are like this:

  • A lot of people are just phoning it in at work and pretty much hate their jobs. If you start hyping up how some AI chatbot is going to help them complete their TPS reports 10 times as fast you are going to come off as a weirdo corporate shill. Even if that happened, it would probably just mean their bosses start expecting 10 times as many TPS reports from them.
  • They tried it out but were really unimaginative with their prompts. One guy I showed it to was told he could use it to write newsletters. His attempt at a prompt: "newsletter". Not "write a newsletter", not "write a newsletter for the hiking club reminding members their fees are due 15/2/2023 and asking for suggestions for the next trip" or anything like that. They somehow think the AI is going to telepathically know what they want, and if it doesn't then it's a dud.
  • They like to think they are too clever to fall for hype and hysteria and like to put on a cynical "too cool to be impressed by the latest shiny thing" front. One older guy at my office is convinced "it's just Eliza with a few extra bells and whistles".
  • They are low decouplers - people who can't separate the question of whether AI works from the ethical questions around it. So they hear about Stable Diffusion using artists' work in its training set without permission, hear that it's going to put people out of work, about OpenAI paying people in Kenya measly wages to train the bots etc etc and think that's all bad, so their natural response is to badmouth AI technology by saying it doesn't work or is underwhelming. It's the equivalent of "eugenics is immoral, therefore eugenics doesn't work and is a pseudoscience".
  • People whose jobs are based around compliance concerns like privacy/security/plagiarism/copyright etc. They realise AI opens a massive can of worms for them and instead of working through the issues they are pretty keen to clamp down on it.
  • Cryptocurrency hype has made a lot of people wary about the "next big thing" in tech, especially when there is a cult-like vibe emanating from some of its evangelists, which is unfortunately how talk about the singularity comes off to a lot of people.

2

darkness3322 t1_ja0es7p wrote

You really can't see the potential of this technology? We're literally talking about becoming something more, something that will raise questions about whether we can still call ourselves Homo sapiens or whether we should already apply a new name to our new evolutionary state...

2

BassoeG t1_ja0bdv4 wrote

>It's interesting that openai has somehow become the deciders of what is hateful or even moral.

It's even more 'interesting' how their decisions have no correlation with actual hate or morality but simply match the status quo. In what possible universe is 'we can win and should therefore fight WW3' not the most hateful and amoral statement possible? Yet it isn't censored, and it has a status-quo propagandist megaphone.

2

LightVelox t1_ja0a5ew wrote

Well, the intrinsic motivations of most right-wing people I've met were mostly related to taxes, freedom, or being anti-state.

You mention fairness as one of the motivations for the left wing, but most right-wingers (that aren't far-right conservatives) are also searching for fairness; the thing is that THEIR fairness is not the same as the left wing's fairness.

Though you specifically mentioned "conservative" instead of "right-winger", so I can understand your point of view.

1

Frumpagumpus t1_ja07k0y wrote

> Maybe making fun of gay people has a history that includes discrimination and abuse, even jail and murder? Maybe making fun of white people does not have the same history

Depends on where you live... there are some African countries where discrimination and abuse of white people is definitely part of modern-day history, though it may not be politically correct to say so in the United States. An eye for an eye makes the whole world blind (which is kind of the implication of your humor ethics).

Also, while we're on the topic, a fun fact: most capital investment goes into capital turnover, i.e. replacing existing stuff. So most wealth that exists today was created in the recent past, not as the result of slave labor or the like (your ethics might not make as much sense as you think, because entropy is a thing).

7