Recent comments in /f/MachineLearning

mr_birrd t1_j6xo5qk wrote

No, it doesn't raise ethical concerns. You literally have to agree to the usage of your data, and at least in Europe you should be able to opt out of everything if you want. You should 100% know this; those are the rules of the game. Just because you don't read the terms of agreement doesn't make it unethical for companies to use your data. Sure, if it's then used by insurers who won't cover you because you'll become sick with high probability, that's another thing. But don't act surprised.

1

Monoranos t1_j6xny7i wrote

Also, your "too young and inexperienced" remark was not necessary for this debate. It gives the impression that you just want to insult me, which shows a lack of maturity.

And maybe you should keep up to date with the legality of this matter (GDPR: explicit consent). But hey, maybe you're too old or ignorant in this domain, or both? :)

1

iqisoverrated t1_j6xnak9 wrote

> your bot doesn't need to always play perfectly to not be detected

I'm pretty sure that current detection methods use a closeness metric. (You can't use a "perfect GTO" metric, because that would mean your observation horizon would have to be infinitely long.)

> What tools would a poker TO employ?

Well, the simplest tool to start with would be preflop charts. And then solver charts for the usual betting sizes. At least that's where I would start if I were to implement such a system.
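For concreteness, a minimal sketch of what such a closeness check could look like. Everything here is a made-up assumption (the chart frequencies, the threshold, the function names), not any real site's detection system: tabulate a player's action frequencies in a given spot and flag accounts whose mix sits implausibly close to the chart.

```python
import math

# Hypothetical solver chart for one preflop spot: the action mix the
# solver recommends. All numbers here are illustrative assumptions.
CHART = {"fold": 0.10, "call": 0.25, "raise": 0.65}

def kl_divergence(observed, reference, eps=1e-9):
    """KL(observed || reference) over the shared action set."""
    return sum(
        p * math.log((p + eps) / (reference[a] + eps))
        for a, p in observed.items()
        if p > 0
    )

def looks_bot_like(action_counts, chart=CHART, threshold=0.01):
    """Flag a player whose empirical action mix sits suspiciously
    close to the chart in this spot. Humans drift; a bot tracking
    the solver output converges, driving the divergence toward 0."""
    total = sum(action_counts.values())
    observed = {action: count / total for action, count in action_counts.items()}
    return kl_divergence(observed, chart) < threshold

# Example: 1000 observed decisions in this spot, almost exactly on-chart
print(looks_bot_like({"fold": 102, "call": 251, "raise": 647}))  # True
```

You'd then repeat the same comparison against solver charts for the usual bet sizes; picking thresholds that don't false-flag strong regulars is where the real work would be.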

1

Hyper1on t1_j6xn5do wrote

> AI21's Jurassic 178B seems to be comparable to GPT3 davinci 001.

This is actually a compliment to AI21, since davinci001 is fine-tuned from the original 175B davinci on human feedback over its generations:

https://platform.openai.com/docs/model-index-for-researchers

The better comparison is with plain davinci, and you would expect 001 to be better and 003 to be significantly better (the latter is trained with RLHF).

There are currently no open source RLHF models to compete with davinci 003, but this will change in 2023.

1

iqisoverrated t1_j6xm9ja wrote

Sure. They will get smarter with time. And the algorithms to detect them will take longer to catch up. That's the nature of evolution: pruning the stupid bots by banning them leaves the smarter bots.

So maybe they will eventually have to deviate so much that they become beatable. In which case they don't fulfill their purpose anymore.

Sorta reminds me of this xkcd comic:

https://xkcd.com/810/

1

Hyper1on t1_j6xm8m9 wrote

This is a fine approach, but it's not necessarily chain of thought if you move the actual problem solving outside of the LM. The entire point of Chain of Thought as originally conceived is that it's a better way of doing within-model problem solving. I would be interested to see the result if you were to fine-tune the LM on a dataset of reasoning traces from this approach, however.
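To make the distinction concrete, a rough sketch. `call_lm` and `run_tool` are placeholders for whatever completion API and external executor you'd use; none of this is a specific library's API.

```python
# The prompt follows the standard few-shot chain-of-thought recipe,
# where the model writes the intermediate reasoning itself.
COT_PROMPT = """Q: A cafeteria had 23 apples. They used 20 and bought 6 more. How many now?
A: They started with 23 and used 20, so 23 - 20 = 3. They bought 6 more, so 3 + 6 = 9. The answer is 9.

Q: {question}
A:"""

def solve_with_cot(call_lm, question: str) -> str:
    # Within-model: the LM generates the intermediate steps AND the answer.
    return call_lm(COT_PROMPT.format(question=question))

def solve_with_tool(call_lm, run_tool, question: str) -> str:
    # Tool-augmented: the LM only translates the problem; the actual
    # computation happens outside the model. Useful, but not chain of
    # thought in the original sense.
    expression = call_lm(f"Rewrite as a single Python expression: {question}")
    return str(run_tool(expression))
```

Fine-tuning on (question, reasoning) pairs collected from the second pipeline would move the solving back inside the model, which is the experiment I'd be curious about.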

2

Monoranos t1_j6xlrzq wrote

While it is true that much of the data used to train these models is sourced from publicly available sources, it's also true that much of this data was generated by individuals who may not have been fully aware of the implications or intended uses of their contributions. The question of who owns this data and how it can be used is an important one, and it's understandable that some people might feel uncomfortable about the potential for profit to be made from it. It's important to have a conversation about ethical considerations in the development and deployment of large language models.

0

Monoranos t1_j6xlf7p wrote

I understand your point, but it's important to consider the ethics of using data that was gathered without explicit consent or understanding of how it would be used. Just because it's technically allowed under the terms and conditions doesn't mean it's morally right. Companies have a responsibility to ensure that they use data in a responsible and ethical manner, rather than relying solely on the legality of the terms and conditions.

−1