Recent comments in /f/singularity

gardenina t1_jd5enxr wrote

The Shrike according to Sydney

"A tall, four-armed, spiky sci-fi monster made of silver metal with glowing red eyes"

It rejected several of my prompts, I think because of the word "blade". Or maybe because I tried to stay as true to the author's description as possible, so maybe it thought it was plagiarism? idk

It threatened to ban me though, so I had to use the word "spiky" and not "covered with blades". I reported the mistakenly rejected prompts so they can fine-tune their filters.

1

bobbib14 t1_jd5bps4 wrote

My favorite part of this article:

“Although we shouldn’t wait for this to happen, it’s interesting to think about whether artificial intelligence would ever identify inequity and try to reduce it. Do you need to have a sense of morality in order to see inequity, or would a purely rational AI also see it? If it did recognize inequity, what would it suggest that we do about it?”

hey bill, maybe it will seize assets of all billionaires and redistribute? (lol)

129

Drunken_F00l t1_jd59s6x wrote

>It will see your latest emails, know about the meetings you attend

>help you with scheduling, communications, and e-commerce, and it will work across all your devices

how is it that even the smartest people are so unimaginative?

>Company-wide agents will empower employees in new ways

like omg, just kill me now

46

KerfuffleV2 t1_jd57jq9 wrote

Be sure you're looking at the number of tokens when you're considering conciseness, since that's what actually matters. For instance, an emoji may have a compact representation on the screen, but that doesn't necessarily mean it'll be tokenized efficiently.

Just for example, "🧑🏾‍🚀" from one of the other comments actually is 11 tokens. The word "person" is just one token.

You can experiment here: https://platform.openai.com/tokenizer (non-OpenAI models will likely use a different tokenizer or tokenize text differently, but that'll help you get an idea at least.)
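Part of the reason is visible even without a tokenizer library: byte-level BPE tokenizers (like OpenAI's) start from UTF-8 bytes, and that astronaut emoji is actually a ZWJ sequence of several codepoints. A rough stdlib-only sketch (byte counts only; the exact token counts depend on the model's tokenizer):

```python
# Rough illustration of why emoji tokenize poorly: byte-level BPE
# operates on UTF-8 bytes, and ZWJ emoji sequences span many bytes
# across several codepoints, while common English words usually
# merge down to a single token.

def utf8_bytes(s: str) -> int:
    """Number of UTF-8 bytes the tokenizer's byte-level stage sees."""
    return len(s.encode("utf-8"))

# 🧑🏾‍🚀 = person + skin-tone modifier + zero-width joiner + rocket
astronaut = "\U0001F9D1\U0001F3FE\u200D\U0001F680"

print(utf8_bytes("person"))   # 6 bytes  -> typically one BPE token
print(utf8_bytes(astronaut))  # 15 bytes -> rarely merged, so many tokens
```

So the tokenizer is working from 15 raw bytes for the emoji versus 6 for the word, and emoji byte pairs are rare enough in training data that few merges exist for them.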

Also relevant: these models are trained to autocomplete text based on probabilities learned from their training data. If you start using, or ask them to generate, text in an unusual format, it may well cause them to produce much lower-quality answers (or to understand less of what the user said).

3