Recent comments in /f/MachineLearning
WarmSignificance1 t1_ja9jnft wrote
Reply to comment by bitemenow999 in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
You don’t have to understand the physics behind nuclear weapons to argue that they’re dangerous. Indeed, the people in the weeds are not always the best at taking a step back and surveying the big picture.
Of course making AI development closed source is ridiculous, though.
quisatz_haderah t1_ja9j8xr wrote
Reply to comment by currentscurrents in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
I think you should add this to your original response. Because this should be heard more.
quisatz_haderah t1_ja9j2ib wrote
Reply to comment by vhu9644 in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
>It's like making encryption illegal.
Yet they are pushing this agenda. They have no clue how the Internet works.
Jean-Porte t1_ja9iik5 wrote
Reply to comment by not_particulary in [D] More stable alternative to wandb? by not_particulary
>Yeah but it's super iffy. My exact script works most of the time, so idk even what to fix. That's why I just want to use something else, the software is obviously not stable.
Do `export WANDB__SERVICE_WAIT=300`
I don't have that problem anymore
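In a training script, the same setting can be applied from Python, as long as it happens before `wandb` is imported — a minimal sketch (the project name is made up):

```python
import os

# Raise the wandb service startup timeout to 300 seconds; the default
# can be too short on busy clusters and cause spurious crashes.
# Must be set before wandb is imported.
os.environ["WANDB__SERVICE_WAIT"] = "300"

# import wandb
# wandb.init(project="my-project")  # hypothetical project name
```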
not_particulary OP t1_ja9g6o1 wrote
Reply to comment by Jean-Porte in [D] More stable alternative to wandb? by not_particulary
Yeah but it's super iffy. My exact script works most of the time, so idk even what to fix. That's why I just want to use something else, the software is obviously not stable.
MW1369 t1_ja9f29c wrote
Reply to comment by bitemenow999 in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
Preach my man preach
Jean-Porte t1_ja9ejvo wrote
Reply to [D] More stable alternative to wandb? by not_particulary
You can increase one of the timeout parameters; it helps.
But I agree, I don't even understand why they don't log things locally on failure instead of KILLING A ONE-WEEK JOB ON A HIGH-END GPU SERVER (MORE THAN $100 WORTH OF COMPUTE TIME)
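One defensive workaround (my own sketch, not a wandb feature) is to wrap the remote logging call so a failure falls back to local logging instead of crashing the run:

```python
import logging

def safe_log(metrics, log_fn):
    """Send metrics via log_fn (e.g. wandb.log); on failure, record them
    locally with the stdlib logger instead of letting the exception
    kill a long-running training job. Returns True on success."""
    try:
        log_fn(metrics)
        return True
    except Exception:
        logging.exception("remote logging failed; metrics kept locally: %s", metrics)
        return False
```

Calling `safe_log({"loss": 0.5}, wandb.log)` in the training loop then degrades gracefully when the wandb service is down.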
bitemenow999 t1_ja9dl6k wrote
Reply to [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
The problem is that the AI ethics debate is done by people who don't directly develop/work with ML models (like Gary Marcus) and have a very broad view of the subject often taking the debate to science fiction.
Anyone who says ChatGPT or DALL-E models are dangerous needs to take an ML 101 class.
AI ethics at this point is nothing but a balloon of hot gas... The only AI ethics that has any substance is data bias.
Making laws to limit AI/ML use or keeping it closed-source is going to kill the field. Not to mention the amount of resources required to train a decent model is prohibitive enough for many academic labs.
EDIT: The idea of a "license" for AI models is stupid unless they plan to enforce license requirements on people buying graphics cards too.
leondz t1_ja9dk7x wrote
Reply to [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
Depends who & what you're using it on, doesn't it, just like a driver's license. Do what you like on your own private property. If you want it to be critical in decision-making that affects others, some rudimentary training makes a ton of sense.
[deleted] OP t1_ja9dfun wrote
vhu9644 t1_ja9cw0v wrote
Reply to [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
Laws have to be pragmatic.
It's like making encryption illegal. Anyone with the know-how can do it, and you can't detect an air-gapped model being trained.
We, as a society, shed data more than we shed skin cells. Restricting dataset access wouldn't really be that much of a deterrent either.
[deleted] OP t1_ja9b5bp wrote
badjezus t1_ja9b1qs wrote
Reply to comment by bballerkt7 in [D] CVPR Rebuttal scores are out! by ElPelana
No
darth_sid_95 t1_ja9av9l wrote
Reply to [D] CVPR Rebuttal scores are out! by ElPelana
Had 2 Borderlines and 1 Weak Accept. After the rebuttal, one of the borderline reviewers failed to update their review, while the other two doubled down on their respective stances. Luckily, the decision was an Accept.
ton4eg t1_ja9aokt wrote
Reply to [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
After spending some time exploring AI ethics, I found it rather useless. Ethics is a real problem, but the discipline has failed to provide any meaningful answers.
canbooo t1_ja9a5et wrote
Reply to [D] CVPR Rebuttal scores are out! by ElPelana
I voted did not change because I wanted to see the results without biasing them too much. Do what you want with this info.
currentscurrents t1_ja99uud wrote
Reply to comment by redflexer in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
I'm not talking about philosophers debating the nature of moral actions. Ethics "experts" and ethics boards make a stronger claim: that they can actually determine what is moral and ethical. That truly is subjective.
At best they're a way for people making tricky decisions to cover their legal liability. Hospitals don't consult ethics boards before unplugging patients because they think the ethicists will have some useful insight; they just want their approval because it will help their defense if they get sued.
[deleted] OP t1_ja999zr wrote
[deleted] OP t1_ja99723 wrote
not_particulary OP t1_ja9962l wrote
Reply to comment by [deleted] in [D] More stable alternative to wandb? by not_particulary
lol
Big_Reserve7529 t1_ja98huf wrote
Reply to [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
Idk if a license is the way to go. I do agree that there need to be certain regulations put in place for safety. We were really late when it came to data safety and digital identity, and a lot of countries still don't have tight data laws about this. Sadly, I think that if people don't advocate about the possible dangers of fast-growing technology, we will feel the consequences of it later on.
redflexer t1_ja97eug wrote
Reply to comment by currentscurrents in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
This specific take is naive, but ethics is a very rigorous discipline and is also different from moral codes, which are subjective.
Ramdogger t1_ja97dxi wrote
Reply to comment by JiraSuxx2 in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
Use AI-powered software, of course, to determine the legitimacy of the ID. /s
[deleted] t1_ja977ee wrote
Reply to [D] More stable alternative to wandb? by not_particulary
[deleted]
lukasz_lew t1_ja9jsrf wrote
Reply to comment by bitemenow999 in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
Exactly.
Requiring a licence for "chatting with GPT-3" is silly.
It would be like requiring a licence to talk to a child (albeit a very knowledgeable child with a tendency to make stuff up). You would not let such a kid write your homework or thesis, would you?
Maybe requiring reading a warning akin to "watch out, the cup is hot" would make more sense for this use case.