Recent comments in /f/MachineLearning
PacmanIncarnate t1_jaafjl5 wrote
Reply to comment by VirtualHat in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
Don’t regulate tools; regulate their product and their oversight in decision making. Don’t let any person, institution, or corporation use AI as an excuse for why they committed a crime or unethical behavior. The law should take it as a priori that a human was responsible for decisions, regardless of whether the organization actually functioned that way, because the danger of AI is that it’s left to make decisions and those decisions cause harm.
[deleted] OP t1_jaaaot2 wrote
OpeningVariable t1_jaa8zp8 wrote
Reply to comment by bitemenow999 in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
BingChat is generating information, not retrieving it, and I'm quite sure that we will see lawsuits as soon as this feature becomes public and some teenager commits suicide over BS that it spat out or something like that.
Re the tool part - yes, exactly, and we should understand what that tool is good for, or more specifically, what it is NOT good for. No one writes an airplane's mission-critical software in Python; they use formally verifiable languages and algorithms because that is the right tool for the amount of risk involved. AI is being thrown at everything, but it isn't a good tool for everything. Depending on the amount of risk and exposure in each application, there should be different regulations and requirements.
>Most of the startups are off shoots of academic labs.
This was a really bad joke. First of all, why would anyone care about offshoots of academic labs? They are no longer academics; they are in business and can fend for themselves. Second, there is no way most startups are offshoots of academic labs; most startups are looking for easy money and throw in AI just to sound cooler and attract more investors.
Wmichael t1_jaa6wuf wrote
Reply to comment by VirtualHat in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
I mean it probably has
bitemenow999 t1_jaa5b9n wrote
Reply to comment by OpeningVariable in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
what are you saying mate, you can't sue Google or Microsoft because they gave you wrong information... all software services come with limited/no warranty...
As for Tesla, there are FMVSS and other regulatory authorities that already take care of it... AI ethics is BS, a buzzword for people to make themselves feel important...
AI/ML is a software tool, just like Python or C++... do you want to regulate Python too, on the off chance someone might hack you or commit a crime with it?
>This is not about academic labs, but about industry, governments, and startups.
Most of the startups are off shoots of academic labs.
VirtualHat t1_jaa4ueu wrote
Reply to comment by po-handz in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
A better analogy would be: This professor thinks the implementation of driver's licences has reduced traffic accidents.
VirtualHat t1_jaa4jwx wrote
Reply to comment by bitemenow999 in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
An increasing number of academics are identifying significant potential risks associated with future developments in AI. Because regulatory frameworks take time to develop, it is prudent to start considering them now.
While it is currently evident that AI systems do not pose an existential threat, this does not necessarily apply to future systems. It is important to remember that regulations are commonly put in place and rarely result in the suppression of an entire field. For instance, despite the existence of traffic regulations, we continue to use cars.
andreichiffa t1_jaa3v5s wrote
Reply to [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
Based on some of the comments over on /r/ChatGPT asking to remove the disclaimers while they teach themselves plumbing, HVAC, and electrical work with ChatGPT, we are a couple of lawsuits away from OpenAI and MS actually creating a GPT certification, with workplaces requiring it to interact with LLMs and insurers refusing claims resulting from uncertified ChatGPT interaction.
OpeningVariable t1_jaa3ldd wrote
Reply to comment by bitemenow999 in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
This is not about academic labs, but about industry, governments, and startups. It is one thing that Microsoft doesn't mind rolling out a half-assed BingChat that can end up telling you ANYTHING at all - but should they be allowed to? What about Tesla? Should they be allowed to launch, and call "autopilot", an unreliable piece of software that they know cannot be trusted and that they do not fully understand? I think not.
enryu42 t1_jaa1lru wrote
Reply to comment by bitemenow999 in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
> The only AI ethics that has any substance is data bias
While the take in the tweet is ridiculous (but alas common among the "AI Ethics" people), I'd disagree with your statement.
There are many other concerns besides the bias in the static data. E.g. feedback loops induced by ML models when they're deployed in real-life systems. One can argue that causality for decision-making models also falls into this category. But ironically, the field itself is too biased to do productive research in these directions...
ElPelana OP t1_jaa1g5i wrote
Reply to comment by Numerous_Tune_8320 in [D] CVPR Rebuttal scores are out! by ElPelana
Wow that's a nice rebuttal!!
[deleted] OP t1_ja9z6ye wrote
YodaML t1_ja9ykvh wrote
Reply to comment by Impressive-Smile5659 in [N] New 1.0 release of Deep Graph Library (DGL) by jermainewang
Interesting, as I have not had much trouble reproducing the results from papers I use as baselines. I find that weight initialisation can sometimes make a difference, so read the paper carefully for how they initialised the convolutional layer weights and check that DGL is using the same method. If not, do a custom initialisation based on the paper.
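For instance, if the paper specifies Glorot/Xavier uniform initialisation, you can reproduce it by hand and copy the values into the layer. A minimal sketch in plain NumPy (the 64→32 layer shape here is made up for illustration):

```python
import numpy as np

def xavier_uniform(fan_in, fan_out, rng=np.random.default_rng(0)):
    # Glorot & Bengio (2010): sample U(-limit, limit),
    # limit = sqrt(6 / (fan_in + fan_out))
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

# Initialise a hypothetical graph-conv weight matrix (64 -> 32 features)
W = xavier_uniform(64, 32)
print(W.shape)  # (64, 32)
```

You'd then assign `W` to the layer's weight tensor so both implementations start from the same distribution.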
OpeningVariable t1_ja9xo2h wrote
Reply to [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
I think requiring an audit of models and data before the model can be used commercially is not such a bad thing. E.g. audit of ChatGPT and granting permission for specific kinds of commercial use - once we figure out what those are, and what tools we can use for auditing the models.
_poisonedrationality t1_ja9xdek wrote
Reply to [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
I hardly ever see AI ethicists say anything useful. I feel like they're motivated more by making hot takes than by contributing a helpful perspective.
daidoji70 t1_ja9xchz wrote
Reply to [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
If the Internet has taught me anything, it's that for whatever ridiculous, 100% dumbest take you can imagine, you can definitely find a credentialed professional who holds that opinion. It's often unclear whether they hold it for attention, for notoriety, or just out of character defects.
admirelurk t1_ja9wy95 wrote
Reply to comment by bitemenow999 in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
I counter that many ML developers have too narrow a definition of what constitutes danger. Sure, ChatGPT will not go rogue and start killing people, but the technology affects society in much subtler ways that are hard to predict.
Numerous_Tune_8320 t1_ja9wt5p wrote
Reply to [D] CVPR Rebuttal scores are out! by ElPelana
First 3 3 2 (B B WR) -> after rebuttal 4 4 3 (WA WA B)
Finally my paper got ACCEPTED!
yaosio t1_ja9rvvw wrote
Reply to [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
It's only considered dangerous because individuals can do what companies and governments have done for a long time. What took teams of people to create plausible lies can now be done by one person. When somebody says AI is dangerous all I hear is they want to keep the power to lie in the hands of the powerful.
bitemenow999 t1_ja9p3n9 wrote
Reply to comment by WarmSignificance1 in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
that is a very bad argument... I would suggest you read up on Oppenheimer's quote after the first nuclear test, while the people surveying the "big picture" decided to bomb Hiroshima...
[deleted] OP t1_ja9mpny wrote
bluebolt789 t1_ja9mbyc wrote
Reply to comment by External_Juice_8140 in [Discussion] Can you use a model trained on tweets/product reviews to do sentiment analysis on IT support tickets? by [deleted]
Yeah I am not looking for a definitive answer, because as you said the only way to know for sure is to try and evaluate the performance.
I’m just trying to gauge whether it’s a “yeah, very unlikely to work” or a “seems promising, try it”. I have read an interesting paper that suggests filtering the sentences with a domain dictionary created from the training set before passing them to a pre-trained model. These kinds of ideas are what I am looking for!
Unfortunately, manually labeling the ticket data to get a benchmark is not something I can do; otherwise that would of course be my first test.
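The dictionary-filtering idea is cheap to prototype: drop sentences that contain no domain vocabulary before scoring sentiment. A rough sketch (the IT-support terms and sentence splitting are simplified placeholders, not from the paper):

```python
# Hypothetical domain dictionary; in practice it would be built
# from frequent terms in the ticket training set.
DOMAIN_TERMS = {"vpn", "login", "password", "outage", "server", "ticket"}

def filter_sentences(text):
    """Keep only sentences mentioning at least one domain term."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [s for s in sentences if DOMAIN_TERMS & set(s.lower().split())]

ticket = "The VPN is down again. I had a nice lunch. Please reset my password."
print(filter_sentences(ticket))
# ['The VPN is down again', 'Please reset my password']
```

The surviving sentences would then go to the pre-trained sentiment model, so off-topic chatter doesn't dilute the prediction.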
walk-the-rock t1_ja9m7dr wrote
Reply to [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
> requirement of a license to use AI like chatGPT since it's "potentially dangerous"
guess we need a license to use sophisticated technology like Python, C++, Java, shell scripts, Excel... anything that executes code and makes machines do stuff.
You could implement the math for a ResNet in an Excel spreadsheet (I'm not recommending this).
redflexer t1_ja9locb wrote
Reply to comment by currentscurrents in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
This is not at all how ethics boards operate. They very rarely make decisions themselves; instead they define the parameters within which an ethical decision can be made (e.g. what aspects need to be considered and weighed against each other, who needs to be heard, etc.). If you have had other experiences, they are not representative of the majority of boards.
PacmanIncarnate t1_jaaghjo wrote
Reply to comment by yaosio in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
Exactly. Any ethicist worried about how Joe will use AI is missing the big picture: the real ethical violations are going to come from governments and corporations.