Recent comments in /f/MachineLearning

lukasz_lew t1_ja9jsrf wrote

Exactly.
Requiring a licence for "chatting with GPT-3" is silly.

It would be like requiring a licence to talk to a child (albeit a very knowledgeable child with a tendency to make stuff up). You wouldn't let such a kid write your homework or thesis, would you?

Maybe requiring a warning akin to "watch out, the cup is hot" would make more sense for this use case.

1

WarmSignificance1 t1_ja9jnft wrote

You don’t have to understand the physics behind nuclear weapons to argue that they’re dangerous. Indeed, the people in the weeds are not always the best at taking a step back and surveying the big picture.

Of course making AI development closed source is ridiculous, though.

−1

Jean-Porte t1_ja9ejvo wrote

You can increase some timeout parameter; it helps.

But I agree, I don't even understand why they don't log things locally on failure instead of KILLING A ONE-WEEK JOB ON A HIGH-END GPU SERVER (MORE THAN $100 WORTH OF COMPUTE TIME)
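The "log locally instead of killing the job" idea is easy to sketch in plain Python: wrap the training loop so state is checkpointed periodically and, crucially, dumped to disk when an exception fires. This is an illustrative stdlib-only sketch (the function names and JSON state format are made up for the example; a real job would serialize model/optimizer state instead):

```python
import json
import os
import tempfile

def _save(state, path):
    # Write atomically (temp file + rename) so a crash mid-write
    # can't corrupt the checkpoint.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)

def train_with_checkpoints(steps, step_fn, state, ckpt_path, every=100):
    """Run a loop, checkpointing periodically and on any failure."""
    start = state.get("step", 0)
    try:
        for step in range(start, steps):
            step_fn(state)            # one training step; mutates state
            state["step"] = step + 1
            if (step + 1) % every == 0:
                _save(state, ckpt_path)
    except Exception:
        _save(state, ckpt_path)       # dump progress instead of losing it
        raise
    _save(state, ckpt_path)
    return state
```

With a wrapper like this, a timeout or node failure costs you at most `every` steps of work rather than the whole week.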

10

bitemenow999 t1_ja9dl6k wrote

The problem is that the AI ethics debate is dominated by people who don't directly develop or work with ML models (like Gary Marcus) and who have a very broad view of the subject, often steering the debate into science fiction.

Anyone who says ChatGPT or DallE models are dangerous needs to take ML101 class.

AI ethics at this point is nothing but a balloon of hot gas... The only AI ethics that has any substance is data bias.

Making laws to limit AI/ML use or keeping it closed-source is going to kill the field. Not to mention that the amount of resources required to train a decent model is already prohibitive for many academic labs.

EDIT: The idea of a "license" for AI models is stupid unless they plan to enforce license requirements on people buying graphics cards too.

31

leondz t1_ja9dk7x wrote

Depends who & what you're using it on, doesn't it, just like a driver's license. Do what you like on your own private property. If you want it to be critical in decision-making that affects others, some rudimentary training makes a ton of sense.

0

vhu9644 t1_ja9cw0v wrote

Laws have to be pragmatic.

It's like making encryption illegal. Anyone with the know-how can do it, and you can't detect an air-gapped model being trained.
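The encryption analogy holds at the code level too: an information-theoretically secure cipher (a one-time pad) is a few lines of standard-library Python, which is exactly why banning the know-how is impractical. Illustrative sketch (function names are mine):

```python
import secrets

def otp_encrypt(plaintext: bytes):
    """One-time pad: XOR the message with a random key of equal length.
    Unbreakable if the key is truly random and never reused."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so decryption is the same operation.
    return bytes(c ^ k for c, k in zip(ciphertext, key))
```

If a dozen lines nobody can outlaw in practice suffice for encryption, legislating away the ability to train a model locally faces the same enforcement problem.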

We, as a society, shed data more than we shed skin cells. Restricting dataset access wouldn't really be that much of a deterrent either.

2

darth_sid_95 t1_ja9av9l wrote

Had 2 Borderlines and 1 Weak Accept. After the rebuttal, one of the borderline reviewers failed to update their review, while the other two doubled down on their respective stances. Luckily, the decision was an Accept.

2

currentscurrents t1_ja99uud wrote

I'm not talking about philosophers debating the nature of moral actions. Ethics "experts" and ethics boards make a stronger claim: that they can actually determine what is moral and ethical. That is truly subjective.

At best they're a way for people making tricky decisions to cover their legal liability. Hospitals don't consult ethics boards before unplugging patients because they think the ethicists will have some useful insight; they just want their approval because it will help their defense if they get sued.

3

Big_Reserve7529 t1_ja98huf wrote

Idk if a license is the way to go. I do agree that there need to be certain regulations put in place for safety. We were really late when it came to data safety and digital identity, and a lot of countries still don't have tight data laws. Sadly, I think that if people don't advocate about the possible dangers of fast-growing technology now, we will feel the consequences of it later on.

−1