Recent comments in /f/MachineLearning

VirtualHat t1_j9vkpgd wrote

That's a good question. To be clear, I believe there is a risk of an extinction-level event, just that it's unlikely. My thinking goes like this.

  1. Extinction-level events must be rare, as one has not occurred in a very long time.
  2. Therefore the 'base' risk is very low, and I need evidence to convince me otherwise.
  3. I'm yet to see strong evidence that AI will lead to an extinction-level event.

I think the most likely outcome is that there will be serious negative implications of AI (along with some great ones) but that they will be recoverable.

I also think some people overestimate how 'super' a superintelligence can be and how unstoppable an advanced AI would be. In a game like chess or Go, a superior player can win 100% of the time. But in a game with chance and imperfect information, a relatively weak player can occasionally beat a much stronger player. The world we live in is one of chance and imperfect information, which limits any agent's control over the outcomes. This makes EY's 'AI didn't stop at human-level for Go' analogy less relevant.
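The chance-vs-determinism point can be illustrated with a toy simulation (the win probabilities here are made up purely for illustration):

```python
import random

def weak_player_win_rate(p_strong_wins, n_games=10000, seed=0):
    """Simulate n_games between a strong and a weak player, where chance
    gives the weak player a fixed per-game chance of an upset."""
    rng = random.Random(seed)
    weak_wins = sum(rng.random() > p_strong_wins for _ in range(n_games))
    return weak_wins / n_games

# Deterministic perfect-information game (chess/Go-like):
# the superior player wins every single game.
print(weak_player_win_rate(1.0))  # -> 0.0

# Game with chance and hidden information (poker-like):
# even a much stronger player loses a meaningful fraction of games.
print(weak_player_win_rate(0.8))  # roughly 0.2
```

However large the skill gap, randomness puts a floor under how completely the stronger player can control outcomes.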

1

filipposML t1_j9vc6dw wrote

Indeed, the generative model produces data points, and the discriminative one classifies them alongside the real data. For your purposes, I think it is easier to describe your algorithm as "adversarial in nature": you are using games where the algorithms are expected to reach a Nash equilibrium, but (presumably) no gradient flows from one agent to the other.

1

mosquitoLad OP t1_j9vazbs wrote

My loose understanding of GANs is that one agent creates assets, i.e. images and audio, while another agent attempts to classify assets based on whether or not they were created by an agent. This produces automatically labeled data that can be used in subsequent training cycles, optimally leading to higher-quality asset output.
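That loop can be sketched with a deliberately tiny toy (everything here is made up for illustration: the "generator" is a single parameter shifting a Gaussian, the "discriminator" a one-feature logistic regression, with hand-derived gradients):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
real_mean = 4.0          # "real data" is drawn from N(4, 1)
theta = 0.0              # generator parameter: it outputs N(theta, 1)
w, b = 0.0, 0.0          # discriminator: D(x) = sigmoid(w*x + b)
lr, batch = 0.1, 64

for step in range(2000):
    real = rng.normal(real_mean, 1.0, batch)
    fake = theta + rng.normal(0.0, 1.0, batch)

    # Discriminator step: push D(real) -> 1 and D(fake) -> 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: push D(fake) -> 1 (non-saturating generator loss).
    d_fake = sigmoid(w * fake + b)
    theta += lr * np.mean((1 - d_fake) * w)

print(theta)  # drifts toward the real mean of 4
```

The discriminator's classifications are exactly the "automatic labels" driving the generator's improvement; neither agent ever sees hand-labeled data.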

I'm mixed about the IPM label. Predictability Minimization seems okay by itself; Inverse seems tacked on. Maybe something like Counter Predictability Exploitation?

1

memberjan6 t1_j9v3ay9 wrote

There was a time I was meeting with a new dev, and "she" was the focus of his explanations, which were pretty long-winded. I didn't get a chance to interrupt his monologue. I was spending too many cycles trying to rewind his words, while he was speaking, to work out who "she" was.

Years later it occurred to me he was being FANCY by calling his code "she" the whole time. Consequently, I didn't pick up anything meaningful from what he said.

It pays to speak plainly.

5

Lyscanthrope t1_j9uz4bb wrote

Simple answer: yes, of course! Middle ground: if you have any hyperparameters to choose, you need a validation set! More detailed answer: it very much depends on the assumptions you have about your data. How you choose to do model selection determines how you estimate model performance (i.e. the way you estimate the generalisation error)... A lot of work can go in here! Edit: this is my humble opinion, but one should always think about how to validate performance before modeling... It saves a lot of time. And please, always know your basics (statistics-wise).

9

bohreffect t1_j9uy9ko wrote

>What about when ChatGPT

I mean, we're facing even more important dilemmas right now, with ChatGPT's safety rails. What is it allowed to talk about, or not? What truths are verboten?

If the plurality of Internet content is written by these sorts of algorithms, with their hardcoded "safety" layers, then the dream of truly open access to information that was the Internet will be that much closer to death.

0

Imnimo t1_j9ux0jn wrote

Well, I don't really think this is a semantic disagreement. I'm using their definition of the term.

If the issue is the danger of an AI arms race, what does a poorly-trained model have to do with it? Isn't the danger supposed to be that the model will be too strong, not too weak?

1

icedrift t1_j9uwkrx wrote

I agree with all of this but it's already been done. Social media platforms already use engagement driven algorithms that instrumentally arrive at recommending reactive content.

Cambridge Analytica also famously preyed on user demographics to feed carefully tailored propaganda to swing states in the 2016 election.

3