Recent comments in /f/MachineLearning

FastestLearner t1_j5iklgu wrote

If you don't engage the second GPU, it will remain dormant and should not interfere with anything. For example, if you are training a network in PyTorch without using DP or DDP, it will use the first GPU by default. You can always change which GPU it uses with the environment variable CUDA_VISIBLE_DEVICES. Also, make sure the primary GPU occupies the first PCIe slot; you can verify this with nvidia-smi. When the display is hooked up to the primary GPU, it will show slightly higher memory usage (~100 MB) than the other GPUs because of display server processes like Xorg.
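A minimal sketch of pinning a run to one GPU (the "0" index is just an example; check nvidia-smi for how your cards are actually ordered):

```python
import os

# Make only the first GPU visible, before torch is imported.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch

print(torch.cuda.device_count())      # should now report 1
print(torch.cuda.get_device_name(0))  # confirm it's the card you expect
```

Equivalently, `CUDA_VISIBLE_DEVICES=0 python train.py` on the command line, without touching the script at all.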

2

FastestLearner t1_j5if4nz wrote

Tim Dettmers wrote about this in one of his articles. AFAIK, SLI is not required for DL (it’s a gaming thing where sync between GPUs becomes important for smooth gameplay). In DL tasks, any GPU can just wait for others to finish. So you can use any combination of any number of Nvidia GPUs as long as you can interface with them (PCIe or Ethernet). The catch is that the speed of training/inference will be limited by the weakest link in the chain, i.e. the weakest GPU will bottleneck all other GPUs. But on the flip side, you should be able to fit more data owing to the increased VRAM.
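As a rough illustration (not from Dettmers' article, just the standard PyTorch pattern), torch.nn.DataParallel will split each batch across whatever GPUs are visible, mismatched cards included:

```python
import torch
import torch.nn as nn

# Toy model purely for illustration; any nn.Module works the same way.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

if torch.cuda.device_count() > 1:
    # Splits each input batch across all visible GPUs; every step finishes
    # only when the slowest card is done, hence the weakest-link bottleneck.
    model = nn.DataParallel(model)

model = model.cuda()

x = torch.randn(64, 512).cuda()
out = model(x)  # forward runs on all GPUs, outputs are gathered on GPU 0
```

DDP is the faster option for serious training, but the point is the same: no SLI involved, and each step waits for the slowest card.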

The other thing that you can do is run two different experiments on each GPU simultaneously. In that way, you can maximize the usage of your GPUs.
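Something like this (the script names are hypothetical) is enough to give each experiment its own GPU:

```python
import os
import subprocess

# Hypothetical training scripts; each process only ever sees its assigned GPU.
jobs = [("experiment_a.py", "0"), ("experiment_b.py", "1")]

procs = []
for script, gpu in jobs:
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=gpu)
    procs.append(subprocess.Popen(["python", script], env=env))

for p in procs:
    p.wait()
```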

If you do want to fit more data on the 3080, look into PyTorch add-ons such as DeepSpeed, switch to FP16 (mixed precision) training, or simply accumulate gradients over two forward/backward passes per optimizer step, which effectively doubles your batch size.
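A minimal sketch of that last option (the model, data, and accumulation factor are placeholders, just to show the pattern):

```python
import torch
import torch.nn as nn

# Toy setup purely for illustration; swap in your own model, loss, and data.
model = nn.Linear(128, 10).cuda()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
dataloader = [(torch.randn(32, 128), torch.randint(0, 10, (32,))) for _ in range(8)]

accum_steps = 2  # two forward/backward passes per optimizer step
optimizer.zero_grad()

for i, (inputs, targets) in enumerate(dataloader):
    outputs = model(inputs.cuda())
    loss = criterion(outputs, targets.cuda())
    # Scale the loss so the accumulated gradient matches one larger batch.
    (loss / accum_steps).backward()

    if (i + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```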

1

BitterAd9531 t1_j5idapl wrote

>trivially obvious that AI should never be open-source

Wow. Trivially obvious? I'd very much like to know how that statement is trivially obvious, because it goes against what pretty much every single expert in this field advocates.

Obviously open-source AI brings problems, but what is the alternative? A single entity controlling one of the most disruptive technologies ever? And ignoring for a second the obvious problems with that, how would you enforce it? Criminalize open-sourcing of software? Can't say I'm a fan of this line of thinking.

5

TonyTalksBackPodcast t1_j5iblmx wrote

I think the worst possible idea is allowing a single person or handful of people to have near-total control over the future of AI, which will be the future of humanity. The process should be democratized as much as possible. Open source is one way to accomplish that, though it brings its own dangers as well

11

hey_look_its_shiny t1_j5htrp4 wrote

> Besides that, OP stated that he wants to use a llm for this, not me.

Actually, you introduced that concept first when you said:

> If u want some AI to alter the text for you, you again need a LLM.

OP had not mentioned applying an LLM to the case prior to that. It was explicit in their original comment, and implicit in all comments thereafter, that a watermark-free LLM was only one of the ways in which this problem could be tackled.

Meanwhile:

> Synonym engines wouldnt change an n-gram watermarks significantly enough as a synonym is the same type of word so there are token patterns persisting.

Right. That's why I said they "get halfway there". Halfway is clearly not "all the way", and thus not "significantly enough".

And finally:

> Rules for r/MachineLearning
> 1. Be nice: no offensive behavior, insults or attacks

In light of your recent description of an interlocutor's "limited capacity brain", you seem to be catastrophically failing at (1) understanding the problem space being discussed, (2) understanding the deficiencies in your own arguments, and (3) understanding basic norms and rules of interpersonal decency....

Just my two cents, but this forum probably isn't the right space for you until you level up a bit.

2