Recent comments in /f/MachineLearning

Ronny_Jotten t1_j6z9axn wrote

> The models don’t come with buttons that do anything. They are tools capable only of what the software developers permit to enter the models and what users request.

If you prompt an AI with "Mickey Mouse" - no more effort than clicking a button - you'll get an image of Mickey Mouse that violates intellectual property laws. The image, or the instructions for producing it, is contained inside the model, because many copyrighted images were digitally copied into the training system by the organization that created the model. It's just not remotely the same thing as someone using the paintbrush tool in Photoshop to draw a picture of Mickey Mouse themselves.

> If we go down the road of regulating training and capacity to do x, you’ll have to file lawsuits against every artist on behalf of every copyright holder over the IP inside the artist’s head.

I don't think you have a grasp of copyright law. That is a tired and debunked argument. Humans are allowed to look at things, and remember them. Humans are not allowed to make copies of things using a machine - including loading digital copies into a computer to train an AI model - unless it's covered by a fair use exemption. Humans are not the same as machines, in the law, or in reality.

> These cases are going to fall apart

I don't think they will. Especially for the image-generating AIs, it's going to be difficult to prove fair use in the training if the output competes economically with the artists and image owners, like Getty, whose works were scanned in, and affects the market for those works. Effect on the market is one of the four factors in a fair-use analysis.

Competitive-Rub-1958 t1_j6z8a7t wrote

For someone who simply wants to use the ANE (haven't bought one, just considering) to test bare-bones models locally for research purposes before finally training them in the cloud (I find remote debugging quite frustrating), how good is the support in containerization solutions like Singularity? Does it even leverage the ANE?

I know the speedup won't be anything drastic, but if it helps (i.e., is faster and more resource-efficient than the CPU/GPU), then that just translates to a lower time-to-iterate anyway...

So for someone using plain PyTorch (with a few bells and whistles), how much of a pain would it be?

Mefaso t1_j6z6zgt wrote

>DALL-E 2 also applies diffusion in latent space

Not in the part that matters. DALL-E 2 applies diffusion in CLIP "latent" space (the prior) and then conditions a pixel-space diffusion model on the result.

However, they still do a full diffusion pass in pixel space, which is computationally more expensive than operating in a compressed latent space, as LDMs do.
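To make the cost difference concrete, here's a back-of-the-envelope sketch. The resolutions below are illustrative assumptions, not the exact configurations from either paper; the point is only that a pixel-space U-Net denoises the full image tensor each step, while an LDM's U-Net denoises a much smaller VAE-compressed latent:

```python
# Rough per-step tensor sizes: pixel-space diffusion vs. latent diffusion.
# Resolutions are illustrative assumptions, not exact model configs.

def tensor_elements(channels: int, height: int, width: int) -> int:
    """Number of elements the U-Net must denoise per diffusion step."""
    return channels * height * width

# Pixel-space diffusion on a 256x256 RGB image.
pixel = tensor_elements(3, 256, 256)

# Latent diffusion: a VAE compresses the image 8x spatially into,
# say, a 4x32x32 latent, and diffusion runs on that instead.
latent = tensor_elements(4, 32, 32)

print(pixel, latent, pixel // latent)  # 196608 4096 48
```

So under these assumed shapes, each pixel-space step pushes roughly 48x more activations through the denoiser, which is why LDMs are so much cheaper per step.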

ProSmokerPlayer t1_j6yz9wl wrote

If you want to create a winning poker bot you need these few things.

OCR software to recognise stack sizes, position at the table, cards, antes, blinds, etc.: all the game-state variables.

Then it needs to translate this into a canonical representation so that the spot can be looked up in a GTO (game-theory-optimal) database.

The DB gives the answer, and voilà, you have solved poker.
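The steps above can be sketched as a pipeline. Everything here is a hypothetical placeholder: the OCR stage is stubbed out as an already-recognized `Spot` record, and the key format, `GTO_DB` table, and function names are made up for illustration:

```python
from dataclasses import dataclass

# Hypothetical game-state record; in a real bot this would be
# populated by the OCR stage reading the table screenshot.
@dataclass(frozen=True)
class Spot:
    position: str   # e.g. "BTN"
    hand: str       # e.g. "AKs"
    stack_bb: int   # effective stack in big blinds
    action: str     # action facing the hero, e.g. "open"

def spot_key(spot: Spot) -> str:
    """Canonicalize the recognized state into a database lookup key."""
    return f"{spot.position}|{spot.hand}|{spot.stack_bb}|{spot.action}"

# Toy stand-in for a precomputed GTO solution database.
GTO_DB = {
    "BTN|AKs|100|open": "raise 2.5bb",
    "BB|72o|100|open": "fold",
}

def query(spot: Spot) -> str:
    # Fold by default when the spot isn't in the table.
    return GTO_DB.get(spot_key(spot), "fold")

print(query(Spot("BTN", "AKs", 100, "open")))  # raise 2.5bb
```

In practice the hard parts are the first two stages: robust OCR across client layouts, and abstracting the raw state (exact stacks, bet sizes) down to the coarser buckets a precomputed GTO database actually covers.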
