Recent comments in /f/technology

TechnoMagician t1_jcb0zpq wrote

It's just bullshit; you can trick the models into getting around their filters. Maybe GPT-4 will be better against that, but it clearly means the model CAN make jokes about women, it has just been taught not to.

I guess there is a possible future where it is smart enough to solve large, society-wide problems but just refuses to engage with them because it doesn't want to acknowledge the socioeconomic disparities between groups or something.

3

mrpenchant t1_jcb0z2h wrote

>If AI is artificially limited from considering women in comedic situations, it will end up having unpredictable results when the model has to consider women in comedic situations as part of some other task given to the AI.

So one thing I will note: just because the AI is blocked from giving you a sexist joke doesn't mean it couldn't have trained on such jokes and be able to understand them.

>An example would be if you were to have an AI solve a crime, but said situation had an aspect to it that humans would find comedic.

This feels like a very flimsy example. The AI is now employed as a detective rather than a chatbot, which is very much not the purpose of ChatGPT, but sure. Now, setting aside (as I said) that the AI could be trained on sexist jokes and simply refuse to make them, I still find it unlikely that understanding a sexist joke would be critical to solving a crime.

4

Strazdas1 t1_jcayi8q wrote

If AI is artificially limited from considering women in comedic situations, it will end up having unpredictable results when the model has to consider women in comedic situations as part of some other task given to the AI.

An example would be if you were to have an AI solve a crime, but said situation had an aspect to it that humans would find comedic.

1

CountingDownTheDays- t1_jcay3vk wrote

I'm not saying it's bad, just calling it out for what it is. For years I've heard people talk about eugenics, and now I see them calling it "CRISPR babies". Almost like they're trying to rebrand it lol. If used for the right reasons, eugenics can be a good thing. But I think we both know that rich people will end up getting the best of this technology and using it for themselves and their offspring, making the class divide even bigger. They will be able to pick and choose the traits they want, like being more intelligent, physically stronger, etc. It's a very slippery slope.

2

neuronexmachina t1_jcax06f wrote

2017 Transformer paper for reference: "Attention Is All You Need" (cited 68K times)

>The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
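For anyone curious what "based solely on attention mechanisms" means concretely, the paper's core operation (scaled dot-product attention, its Eq. 1) can be sketched in a few lines of plain Python. This is a toy illustration, not the paper's actual implementation:

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = len(Q[0])
    out = []
    for q in Q:
        # Score each key against this query, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        w = softmax(scores)  # one attention weight per key position
        # Output is the attention-weighted sum of the value vectors.
        out.append([sum(wj * v[i] for wj, v in zip(w, V)) for i in range(len(V[0]))])
    return out

# One query attending over two key/value pairs (d_k = 2).
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
out = attention(Q, K, V)
print(out)  # a convex combination of the rows of V, weighted toward the first
```

The query matches the first key more strongly, so the output leans toward the first value row; "multi-head" attention in the paper just runs several of these in parallel on learned projections.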

4

almightygarlicdoggo t1_jcawsvl wrote

Because the entirety of Google doesn't work on LaMDA. It's likely that both companies assign a similar number of employees and similar funding to their respective AI efforts. Also, don't forget that OpenAI receives a huge amount of money from Microsoft. And on top of that, Google announced LaMDA in 2021, when OpenAI already had years of language-model development behind it.

13

ACCount82 t1_jcav54f wrote

It's a tough spot. GPT-4 is clearly no Skynet - but it's advanced enough to be an incredibly powerful tool, in the right hands. An incredibly dangerous tool, in the wrong hands.

Being able to generate incredibly realistic text that takes image and text context into account is trust-destroying tech if used wrong. Reviews, comments, messages? All those things we expect to be written by humans? They may no longer be. A single organization with an agenda to push can generate thousands of "convincing" users and manufacture whatever consensus it wants.

2

Rohit901 t1_jcarred wrote

Google has mostly been transparent and has published a lot of groundbreaking AI research, advancing the field. OpenAI, on the other hand, seems to be closed source and is trying to compete directly with Google. If things go on like this, Google might not be willing to make its research public in the future, and we don't want power concentrated in a single company or person. Thus I hope we are able to get better open source models.

19

Edrikss t1_jcaqyt6 wrote

The AI still makes the joke; it just never reaches your eyes. That's how a filter works. But it doesn't matter either way, as the version you have access to is a final product; it doesn't learn based on what you ask it. The next version is trained in-house by OpenAI, and they choose what to teach it themselves.
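As a toy illustration of the point (purely hypothetical — OpenAI's actual moderation pipeline is more sophisticated and not public), an output filter is just a check that sits between the model and the user:

```python
# Sketch of a post-generation output filter: the model still produces text,
# and a separate check decides whether it ever reaches the user.
# All names here are made up for the demo.

BLOCKLIST = {"slur1", "slur2"}  # placeholder terms standing in for a real policy

def fake_model(prompt: str) -> str:
    # Stand-in for the language model: it happily "generates" disallowed text.
    return f"Here is a response containing slur1 about: {prompt}"

def is_allowed(text: str) -> bool:
    return not any(term in text.lower() for term in BLOCKLIST)

def respond(prompt: str) -> str:
    draft = fake_model(prompt)            # the joke is still generated...
    if not is_allowed(draft):
        return "I can't help with that."  # ...it just never reaches your eyes
    return draft

print(respond("tell me a joke"))  # prints "I can't help with that."
```

This is also why filters are trickable: the model underneath can produce the content, so any prompt that slips the result past the check gets through.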

6