Recent comments in /f/MachineLearning
lucellent t1_j5feo8g wrote
Reply to [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
I think you're a bit late on the news. OpenAI have already said they will add a watermark to their ChatGPT responses.
link0007 t1_j5fdzd6 wrote
Reply to [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
I wonder if advancements in text watermarking will actually help create watermarks for e.g. sensitive or classified governmental/corporate documents. It would allow instant identification of who leaked a certain document if you could trace the watermark back to user accounts.
[deleted] t1_j5fdv9z wrote
Reply to comment by BitterAd9531 in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
[removed]
andreichiffa t1_j5fd581 wrote
Reply to [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
They kinda exist (e.g. the GPT-2 detector from Hugging Face), based on the data they were trained on (which is the limiting factor). However, ultimately every model can be modified (fine-tuned) to evade them. Even for large models (>7B parameters), that can be done reasonably fast on commodity hardware these days.
serverrack3349b t1_j5fce73 wrote
Reply to comment by SpoonBender900 in [D] Simple Questions Thread by AutoModerator
National and governmental websites, university websites, Kaggle, r/datasets, the YouTube and Twitter APIs, and the Papers with Code website. These are some of my favorite places to find stuff.
BitterAd9531 t1_j5fcby3 wrote
Reply to comment by [deleted] in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
>If you think, you can take two watermarked LLMs and 'trivially" combine their output as you stated, explain in detail how you do that in an automated way.
No thank you, I'm not going to write an LLM from scratch for a Reddit argument. And FWIW, I suspect that even if I did, you'd find some way to convince yourself that you're not wrong. You not understanding how this works doesn't impact me nearly enough to care that much. Have a good one.
serverrack3349b t1_j5fc250 wrote
Reply to comment by morecoffeemore in [D] Simple Questions Thread by AutoModerator
In a sense it is just copying and pasting from the web in a different order, but I get that that is not your question. Something I would try is to run the text through online plagiarism-checking sites to see if there is an exact copy of it online. If there is, you should be able to either attribute it to the right person or rewrite it a bit so it is not plagiarism.
Historical-Coat5318 t1_j5fbhj5 wrote
Reply to comment by BitterAd9531 in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
If by fighting technological progress you mean controlling it to make sure it serves humanity in the safest, most optimal way, then yes, we've been doing this forever; when cars were first introduced, traffic police didn't exist. There is nothing retrograde or luddite in thinking this way, it's what we've always done.
Obviously watermarking is futile but there are other methods that need to be considered which no one even entertains, for example the ones I mentioned in my first comment.
Also it should be trivially obvious that AI should never be open-source. That's the worst possible idea.
BitterAd9531 t1_j5fal5s wrote
Reply to comment by Historical-Coat5318 in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
>no one seems to be even considering dealing with it in a serious way
Everyone has considered dealing with it, but everyone who understands the technology behind these models also knows that it's futile in the long term. The whole point of these LLMs is to mimic human writing as closely as possible, and the more they succeed, the more difficult they become to detect. They can be used to output both more precise and more varied text.
Countermeasures like watermarks will be trivial to circumvent while at the same time restricting the capabilities and performance of these models. And that's ignoring the elephant in the room, which is that once open-source models come out, it won't matter at all.
>this is the most pressing ethical issue in AI safety today
Why? It's long been known that the difference between AI and human capabilities will diminish over time. This is simply the direction we're going. Maybe it's time to adapt instead of trying to fight something inevitable. Fighting technological progress has never worked before.
People banking on being able to distinguish AI from humans will be in for a bad time in the coming few years.
morebikesthanbrains t1_j5fa9uq wrote
Reply to [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
This is like adding a watermark to early calculators. You can't stop evolution; the cat's out of the bag. People are going to move on from critical thinking, for better or worse, and humans will evolve to a higher level of masturbation.
Don't check this post for a watermark.
twiztidsoulz t1_j5fa57q wrote
Reply to comment by Historical-Coat5318 in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
Ever used DocuSign or SSL?
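(For readers unfamiliar with the idea, here's a minimal sketch of what those tools rely on, assuming the Python `cryptography` package: the author signs the text with a private key, and anyone holding the matching public key can verify the text wasn't altered. This proves a specific human signed it; it doesn't detect AI text on its own.)

```python
# Minimal signing/verification sketch, assuming the `cryptography` package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # kept secret by the author
public_key = private_key.public_key()        # published so anyone can verify

document = b"This text was written by a human author."
signature = private_key.sign(document)

try:
    public_key.verify(signature, document)   # raises if the text was modified
    print("signature valid")
except InvalidSignature:
    print("document was altered or not signed by this key")
```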
londons_explorer t1_j5fa3eq wrote
Reply to [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
OpenAI keep a big database of all their output.
That in itself serves the same purpose as a watermark.
OpenAI can take any bit of text and search their database to see if it came from their service.
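(A hypothetical sketch of how such a lookup could work: hash overlapping word n-grams of every response, then check what fraction of a suspect text's n-grams are already in the index. The function names and threshold here are made up for illustration, not OpenAI's actual system.)

```python
import hashlib

def shingles(text, n=8):
    """Hash every overlapping n-word window of a text."""
    words = text.lower().split()
    return {hashlib.sha1(" ".join(words[i:i + n]).encode()).hexdigest()
            for i in range(max(len(words) - n + 1, 1))}

# Toy stand-in for a database indexing everything the service ever returned.
generated_index = set()

def record_output(text):
    generated_index.update(shingles(text))

def likely_generated(text, threshold=0.5):
    """Return (verdict, fraction of the text's n-gram hashes already in the index)."""
    s = shingles(text)
    overlap = len(s & generated_index) / len(s)
    return overlap >= threshold, overlap

record_output("The quick brown fox jumps over the lazy dog near the river bank today")
print(likely_generated("The quick brown fox jumps over the lazy dog near the river bank today"))
```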
[deleted] t1_j5f9weh wrote
Reply to comment by BitterAd9531 in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
[removed]
Historical-Coat5318 t1_j5f8ruz wrote
Reply to comment by dineNshine in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
> prove authenticity by digital signatures
Could you expand on this?
Historical-Coat5318 t1_j5f88m7 wrote
Reply to comment by BitterAd9531 in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
It seems to me ethically imperative to be able to discern human text from AI text, so it's really concerning when people just hand-wave it away as obviously futile, like Altman did in a recent interview. Obviously these detection methods would have to be more robust than a cryptographic key that can be circumvented just by changing a few words, but this is the most pressing ethical issue in AI safety today and no one seems to be even considering dealing with it in a serious way.
One idea: couldn't you just train the AI to identify minor changes to the text, to the point where rewriting it would be too much of a hassle? Also, open the server history as an anonymized (for privacy concerns) database so that everyone has access to all GPT (and other LLM) output, and couple that with the cryptographic key Scott Aaronson introduced plus adversarial solutions for reworded text. This, with other additional safety features, would make it too much of a hassle for anyone to try to bypass it; maybe add an infinitesimal cost to every GPT output to counteract spam, etc. A lot of regulation is needed for something so potentially disruptive.
BitterAd9531 t1_j5f5olr wrote
Reply to comment by [deleted] in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
I think you are misunderstanding how these watermarks work. The watermark is encoded in the tokens used, so combining or rewriting will weaken it to the point where it can no longer be used for accurate detection. Robust means a few tokens can be changed without breaking it, but changing enough tokens will eventually have an impact.
The semantics don't change because, in language, there are multiple ways to describe the same thing without using the same (order of) words. That's literally what "rewriting" means.
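(For illustration, a toy sketch of the token-level watermarking scheme described in recent papers such as Kirchenbauer et al., not OpenAI's actual implementation: a secret key pseudorandomly "green-lists" part of the vocabulary at each step, generation is biased toward green tokens, and detection just counts how many tokens land in their green list. Rewriting replaces tokens and drags that count back toward chance.)

```python
import hashlib, math

SECRET_KEY = "secret"
GREEN_FRACTION = 0.5  # fraction of the vocabulary that is "green" at each step

def is_green(prev_token, token):
    """Pseudorandomly assign `token` to the green list, seeded by the previous token."""
    h = hashlib.sha256(f"{SECRET_KEY}|{prev_token}|{token}".encode()).digest()
    return h[0] / 255.0 < GREEN_FRACTION

def detect(tokens):
    """z-score for 'more green tokens than chance'; a high value suggests watermarked text."""
    greens = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / std

# During generation the sampler would prefer green continuations; after heavy
# rewriting roughly half the tokens land in the green list and the z-score
# falls back toward 0, which is the point being made above.
print(detect("a b c d e f g h i j".split()))  # near 0 for unwatermarked text
```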
[deleted] t1_j5f3k9f wrote
Reply to comment by BitterAd9531 in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
[removed]
BitterAd9531 t1_j5f2nk5 wrote
Reply to comment by [deleted] in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
I know about OpenAI's research into watermarking. It doesn't contradict anything I said. It's only a matter of time before more models appear, and the researchers themselves acknowledge that it can be defeated by both humans and other models through combining and rewriting.
[deleted] t1_j5f1x9y wrote
Reply to comment by BitterAd9531 in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
[removed]
stanteal t1_j5f0jaw wrote
Reply to comment by arararagi_vamp in [D] Simple Questions Thread by AutoModerator
As you said, you would need a variable number of outputs, which is not feasible with a plain CNN head. However, you could divide the image into a grid and, for each grid cell, predict the probability that the center of a circle lies within it, along with its x and y offsets. Not sure if there are better resources available, but it might be worth looking at how YOLO or YOLOv2 implemented their outputs.
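(A rough sketch of that kind of output head in PyTorch; the shapes and names are illustrative, not from any particular paper.)

```python
import torch
import torch.nn as nn

class CircleCenterHead(nn.Module):
    """For each cell of an S x S grid, predict (p_center, dx, dy)."""
    def __init__(self, in_channels=256, grid=7):
        super().__init__()
        self.grid = grid
        self.conv = nn.Conv2d(in_channels, 3, kernel_size=1)  # 3 channels = p, dx, dy

    def forward(self, features):            # features: (B, C, S, S) from the CNN backbone
        out = self.conv(features)           # (B, 3, S, S)
        p = torch.sigmoid(out[:, 0])        # probability a circle center lies in the cell
        dx = torch.sigmoid(out[:, 1])       # offset of the center within the cell, in [0, 1]
        dy = torch.sigmoid(out[:, 2])
        return p, dx, dy

# Usage: keep cells where p exceeds a threshold; the variable number of circles
# comes from the thresholding step, not from the network's output shape.
head = CircleCenterHead()
p, dx, dy = head(torch.randn(1, 256, 7, 7))
```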
conchoso t1_j5ez9w0 wrote
Reply to [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
For comparison, the Stable Diffusion-based AI models that generate prompted images DO have an invisible but detectable watermark embedded by default in those hundreds of dreamed-up images that get posted to Reddit every day... but they included an option to turn it off. Steganography is far further along in digital images than in plain text, though...
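(To illustrate why images are an easier target than plain text, here is a toy least-significant-bit watermark in NumPy. This is not the scheme Stable Diffusion actually ships, just the simplest possible example; real image watermarks are built to survive resizing and compression, which this one would not.)

```python
import numpy as np

def embed_bit_watermark(image, bits):
    """Hide a list of 0/1 bits in the least significant bit of the first pixels."""
    flat = image.reshape(-1).copy()
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b      # overwrite the LSB, invisible to the eye
    return flat.reshape(image.shape)

def read_bit_watermark(image, n_bits):
    flat = image.reshape(-1)
    return [int(v & 1) for v in flat[:n_bits]]

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in for a generated image
marked = embed_bit_watermark(img, [1, 0, 1, 1, 0, 0, 1, 0])
print(read_bit_watermark(marked, 8))  # -> [1, 0, 1, 1, 0, 0, 1, 0]
```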
MysteryInc152 t1_j5eyfnm wrote
Reply to [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
Any watermark that couldn't easily be bypassed (paraphrasing, switching out every nth word, etc.) would cripple the output of the model. In fact, even the simple watermarks could have weird effects on the output.
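(A hypothetical sketch of the second attack: swap every nth word for a synonym. The synonym table here is a hand-written stand-in for a real thesaurus or a paraphrasing model; each swapped token is one more token pulled away from the watermark's preferred choices.)

```python
# Toy every-nth-word substitution attack; SYNONYMS is a placeholder lookup table.
SYNONYMS = {"quick": "fast", "big": "large", "said": "stated", "help": "assist"}

def perturb(text, n=4):
    words = text.split()
    for i in range(n - 1, len(words), n):          # every nth word
        words[i] = SYNONYMS.get(words[i].lower(), words[i])
    return " ".join(words)

print(perturb("the quick brown fox said it would help the big dog"))
```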
gunshoes t1_j5ey2s1 wrote
Reply to comment by damc4 in [D]Can a bachelor get a job in ML? by alphapussycat
No, many hiring managers will use arbitrary criteria to reduce the number of applicants they need to evaluate for a job. Degree requirements are one of those. While yes, there probably are a few people who are in ML jobs without meeting degree requirements, in general, you're going to struggle without them.
EmmyNoetherRing t1_j5ffiqv wrote
Reply to comment by Advanced-Hedgehog-95 in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
The company wants to be able to identify their own output when they see it in the wild, so they can filter it out when they’re grabbing training data. You don’t want the thing talking to itself.