Recent comments in /f/MachineLearning

PacmanIncarnate t1_jaafjl5 wrote

Don't regulate tools; regulate their products and the oversight of them in decision making. Don't let any person, institution, or corporation use AI as an excuse for why they committed a crime or behaved unethically. The law should take it as given, a priori, that a human was responsible for decisions, regardless of whether or not an organization actually functioned that way, because the danger of AI is that it's left to make decisions and those decisions cause harm.

1

OpeningVariable t1_jaa8zp8 wrote

BingChat is generating information, not retrieving it, and I'm quite sure that we will see lawsuits as soon as this feature becomes public and some teenager commits suicide over BS that it spat out or something like that.

Re the tool part - yes, exactly, and we should understand what that tool is good for, or more specifically, what it is NOT good for. No one writes an airplane's mission-critical software in Python; they use formally verifiable languages and algorithms because that is the right tool for the amount of risk involved. AI is being thrown at everything, but it isn't a good tool for everything. Depending on the amount of risk and exposure in each application, there should be different regulations and requirements.


>Most of the startups are offshoots of academic labs.

This was a really bad joke. First of all, why would anyone care about offshoots of academic labs? They are no longer academics, they are in business and can fend for themselves. Second of all, there is no way most startups are offshoots of academic labs; most startups are looking for easy money and throw in AI just to sound cooler and attract more investors.

0

bitemenow999 t1_jaa5b9n wrote

what are you saying mate, you can't sue Google or Microsoft because they gave you the wrong information... all software services come with limited or no warranty...

As for Tesla, there are the FMVSS and other regulatory mechanisms that already take care of it... AI ethics is BS, a buzzword for people to make themselves feel important...

AI/ML is a software tool, just like Python or C++... do you want to regulate Python too, on the off chance someone might hack you or commit some crime with it?


>This is not about academic labs, but about industry, governments, and startups.

Most of the startups are offshoots of academic labs.

0

VirtualHat t1_jaa4jwx wrote

An increasing number of academics are identifying significant potential risks associated with future developments in AI. Because regulatory frameworks take time to develop, it is prudent to start considering them now.

While it is currently evident that AI systems do not pose an existential threat, this does not necessarily apply to future systems. It is important to remember that regulations are commonly put in place and rarely result in the suppression of an entire field. For instance, despite the existence of traffic regulations, we continue to use cars.

3

andreichiffa t1_jaa3v5s wrote

Based on some of the comments over on /r/ChatGPT asking to remove the disclaimers while people teach themselves plumbing, HVAC, and electrical work with ChatGPT, we are a couple of lawsuits away from OpenAI and MS actually creating a GPT certification, with workplaces requiring it to interact with LLMs and insurers refusing claims that result from uncertified ChatGPT use.

1

OpeningVariable t1_jaa3ldd wrote

This is not about academic labs, but about industry, governments, and startups. It is one thing that Microsoft doesn't mind rolling out a half-assed BingChat that can end up telling you ANYTHING at all - but should they be allowed to? What about Tesla? Should they be allowed to launch an unreliable piece of software that they know cannot be trusted and do not fully understand, and call it "autopilot"? I think not.

3

enryu42 t1_jaa1lru wrote

> The only AI ethics that has any substance is data bias

While the take in the tweet is ridiculous (but alas common among the "AI Ethics" people), I'd disagree with your statement.

There are many other concerns besides the bias in the static data. E.g. feedback loops induced by ML models when they're deployed in real-life systems. One can argue that causality for decision-making models also falls into this category. But ironically, the field itself is too biased to do productive research in these directions...
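To make the feedback-loop point concrete, here is a toy simulation (the two-region setup, the numbers, and the update rule are all invented for illustration): a model that allocates attention according to its own belief never collects the data that would correct that belief.

```python
# Toy feedback loop: two regions have identical true incident rates, but the
# model's biased belief decides where it looks, and retraining on what it
# observes preserves the bias instead of correcting it. Illustrative numbers.
import numpy as np

rng = np.random.default_rng(0)
true_rate = np.array([0.3, 0.3])   # reality: both regions are identical
belief = np.array([0.6, 0.4])      # model starts biased toward region 0

for step in range(10):
    patrols = (100 * belief).astype(int)          # attention follows belief
    observed = rng.binomial(patrols, true_rate)   # incidents seen per region
    # naive retraining: mix old belief with the observed share per region
    belief = 0.5 * belief + 0.5 * observed / max(observed.sum(), 1)
    print(f"step {step}: belief = {belief.round(2)}")

# In expectation the belief stays where it started: the data the model
# collects about the regions can never pull it back toward the true 50/50.
```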

1

YodaML t1_ja9ykvh wrote

Interesting, as I have not had much trouble reproducing the results from papers I use as baselines. I find that sometimes weight initialisation can make a difference, so read the paper carefully for how they initialised the convolutional layer weights and check that DGL is using the same method. If not, do a custom initialisation based on the paper.
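For example, something along these lines in plain PyTorch (Xavier uniform is only a placeholder for whatever scheme the paper actually specifies, and which parameters DGL exposes should be checked against its source):

```python
# A minimal sketch of overriding a library's default initialisation with a
# paper's scheme after the model is built. The Xavier choice and the dim-based
# selection of "weight-like" parameters are illustrative assumptions.
import torch.nn as nn

def apply_paper_init(model: nn.Module) -> None:
    for param in model.parameters():
        if param.dim() >= 2:                # weight matrices / conv kernels
            nn.init.xavier_uniform_(param)  # replace with the paper's scheme
        else:                               # biases and other 1-D parameters
            nn.init.zeros_(param)

# model = build_baseline_model()  # hypothetical constructor for your model
# apply_paper_init(model)         # override the defaults, then train as usual
```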

1

OpeningVariable t1_ja9xo2h wrote

I think requiring an audit of models and data before a model can be used commercially is not such a bad thing. E.g., auditing ChatGPT and granting permission for specific kinds of commercial use - once we figure out what those are, and what tools we can use for auditing the models.

0

daidoji70 t1_ja9xchz wrote

If the Internet has taught me anything, it's that for whatever ridiculous, 100% dumbest take you can imagine, you can def find a credentialed professional who holds that opinion. It's often unclear whether they hold that opinion for attention, for notoriety, or just from character defects.

2

admirelurk t1_ja9wy95 wrote

I counter that many ML developers have too narrow a definition of what constitutes danger. Sure, ChatGPT will not go rogue and start killing people, but the technology affects society in much subtler ways that are hard to predict.

6

yaosio t1_ja9rvvw wrote

It's only considered dangerous because individuals can now do what companies and governments have done for a long time. What took teams of people to create plausible lies can now be done by one person. When somebody says AI is dangerous, all I hear is that they want to keep the power to lie in the hands of the powerful.

5

bluebolt789 t1_ja9mbyc wrote

Yeah, I am not looking for a definitive answer because, as you said, the only way to know for sure is to try it and evaluate the performance.

I'm just trying to gauge whether it's a "yeah, very unlikely to work" or a "seems promising, try it". I have read an interesting paper that suggests filtering the sentences with a domain dictionary created from the training set before passing them to a pre-trained model (roughly the idea sketched below). These kinds of ideas are what I am looking for!

Unfortunately, manually labeling the ticket data to get a benchmark is not something I can do, or that of course would be my first test.
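For concreteness, here is a rough sketch of the filtering idea (the frequency threshold, naive whitespace tokenisation, and sentence splitting are all illustrative assumptions; it does not reproduce the cited paper's actual method):

```python
# Build a domain vocabulary from the training texts, then keep only sentences
# containing at least one domain term before the text reaches a pre-trained
# model. All thresholds and the tokenisation here are placeholder choices.
from collections import Counter

def build_domain_vocab(train_texts, min_count=5, stopwords=frozenset()):
    counts = Counter(
        w for text in train_texts for w in text.lower().split()
        if w not in stopwords
    )
    return {w for w, c in counts.items() if c >= min_count}

def keep_domain_sentences(text, vocab):
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    kept = [s for s in sentences if any(w in vocab for w in s.lower().split())]
    return ". ".join(kept)

# vocab = build_domain_vocab(train_tickets, stopwords={"the", "a", "is"})
# clean = keep_domain_sentences(raw_ticket, vocab)  # then feed to the model
```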

1

walk-the-rock t1_ja9m7dr wrote

> requirement of a license to use AI like chatGPT since it's "potentially dangerous"

guess we need a license to use sophisticated technology like Python, C++, Java, shell scripts, Excel... anything that executes code and makes machines do stuff.

You could implement the math for a ResNet in an Excel spreadsheet (I'm not recommending this).
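For what it's worth, the underlying arithmetic really is that simple; here is a toy residual block in numpy (the shapes, the ReLU placement, and the random inputs are arbitrary illustrations, not a faithful ResNet):

```python
# Toy residual block: output = relu(x + F(x)), where F(x) is two small matrix
# multiplies plus a ReLU. Just enough to show it's plain arithmetic.
import numpy as np

def residual_block(x, w1, w2):
    h = np.maximum(0.0, x @ w1)         # F(x): linear + ReLU
    return np.maximum(0.0, x + h @ w2)  # skip connection, then ReLU

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))
y = residual_block(x, rng.normal(size=(8, 8)), rng.normal(size=(8, 8)))
print(y.shape)  # (1, 8)
```

Every operation there is a sum, a product, or a max - exactly the kind of thing a spreadsheet can do cell by cell.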

2

redflexer t1_ja9locb wrote

This is not at all how ethics boards operate. They very rarely make decisions themselves; rather, they define the parameters within which an ethical decision can be made (e.g., what aspects need to be considered and weighed against each other, who needs to be heard, etc.). If you have had other experiences, they are not representative of the majority of boards.

2