Recent comments in /f/MachineLearning
visarga t1_j6z9nie wrote
Reply to comment by frequenttimetraveler in [N] Microsoft integrates GPT 3.5 into Teams by bikeskata
No, you got it wrong. Today you want to sprinkle in a few mistakes to signal your authenticity. It's the new cool style. Only ChatGPT and copywriting professionals have perfect grammar.
AnotherEuroWanker t1_j6z9dlm wrote
Reply to comment by bumbo-pa in [N] Microsoft integrates GPT 3.5 into Teams by bikeskata
Wait till you try running it in Firefox. It's clearly crippled on that browser.
Ronny_Jotten t1_j6z9axn wrote
Reply to comment by JigglyWiener in [R] Extracting Training Data from Diffusion Models by pm_me_your_pay_slips
> The models don’t come with buttons that do anything. They are tools capable only of what the software developers permit to enter the models and what users request.
If you prompt an AI with "Mickey Mouse" - no more effort than clicking a button - you'll get an image of Mickey Mouse that violates intellectual property laws. The image, or the instructions for producing it, is contained inside the model, because many copyrighted images were digitally copied into the training system by the organization that created the model. It's just not remotely the same thing as someone using the paintbrush tool in Photoshop to draw a picture of Mickey Mouse themselves.
> If we go down the road of regulating training and capacity to do x, you’ll have to file lawsuits against every artist on behalf of every copyright holder over the IP inside the artist’s head.
I don't think you have a grasp of copyright law. That is a tired and debunked argument. Humans are allowed to look at things, and remember them. Humans are not allowed to make copies of things using a machine - including loading digital copies into a computer to train an AI model - unless it's covered by a fair use exemption. Humans are not the same as machines, in the law, or in reality.
> These cases are going to fall apart
I don't think they will. Especially for the image-generating AIs, it's going to be difficult to prove fair use in the training, if the output is used to compete economically with artists or image owners like Getty, whose works have been scanned in, and affect the market for that work. That's one of the four major requirements for fair use.
kineticjab t1_j6z8m2i wrote
Reply to comment by LeanderKu in [N] Microsoft integrates GPT 3.5 into Teams by bikeskata
WebEx can already produce meeting transcripts automatically. Seems easy enough to parse the transcript for action items and the like.
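A minimal sketch of what "parse the transcript for action items" could look like. The trigger phrases and the transcript are made up for illustration; a real system would likely use an LLM or a tuned classifier rather than keyword matching:

```python
import re

def extract_action_items(transcript: str) -> list[str]:
    """Pull out lines that look like action items from a plain-text transcript."""
    # Hypothetical trigger phrases, matched case-insensitively.
    triggers = re.compile(r"\b(action item|todo|follow up|will send)\b", re.IGNORECASE)
    items = []
    for line in transcript.splitlines():
        if triggers.search(line):
            items.append(line.strip())
    return items

meeting = """Alice: Thanks everyone for joining.
Bob: Action item: I will send the Q3 numbers to Carol.
Carol: Great, I'll follow up with the vendor tomorrow."""

print(extract_action_items(meeting))  # two lines flagged as action items
```

Keyword spotting like this is brittle, which is presumably why a GPT-based summarizer is the selling point here.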
Competitive-Rub-1958 t1_j6z8a7t wrote
Reply to comment by TheDeviousPanda in [D] Apple's ane-transformers - experiences? by alkibijad
For someone who simply wants to use the ANE (haven't bought one, just considering it) to test bare-bones models locally for research purposes before training them in the cloud (I find remote debugging quite frustrating), how good is the support in containerization solutions like Singularity? Does it even expose the ANE?
I know the speedup won't be drastic, but if it's faster and more resource-efficient than the CPU/GPU, that just translates to a shorter time-to-iterate anyway...
So for someone using plain PyTorch (with a few bells and whistles), how much of a pain would it be?
Anomia_Flame t1_j6z7gg6 wrote
Reply to comment by Senior1292 in [N] Microsoft integrates GPT 3.5 into Teams by bikeskata
Like most humans do anyway?
znihilist t1_j6z78wg wrote
Reply to comment by maxToTheJ in [R] Extracting Training Data from Diffusion Models by pm_me_your_pay_slips
That's beside the point. My point is that the MP3 compression comparison doesn't work, so that line of reasoning isn't applicable. Whether one use can excuse another isn't part of my argument.
Mefaso t1_j6z6zgt wrote
Reply to comment by uhules in [D] Why is stable diffusion much smaller than predecessors? by dahdarknite
>DALL-E 2 also applies diffusion in latent space
Not in the part that matters. DALL-E 2 applies diffusion in CLIP "latent" space and then conditions a pixel-space diffusion model on the result.
So it still does a full diffusion pass in pixel space, which is much more expensive than doing it in a compressed latent space, as LDMs do.
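To make the cost difference concrete, here is the back-of-the-envelope arithmetic, using the tensor shapes commonly cited for Stable Diffusion (512x512 RGB images encoded to 64x64x4 latents, i.e. downsampling factor f=8); the exact shapes vary by model:

```python
# Elements the denoising network must process per diffusion step.
pixel_elements = 512 * 512 * 3   # full pixel-space pass
latent_elements = 64 * 64 * 4    # latent-space pass (LDM-style)

# Latent diffusion touches ~48x fewer elements per step.
print(pixel_elements / latent_elements)  # 48.0
```

The actual compute ratio depends on the network architecture, but the per-step tensor size is the main reason LDMs can be so much smaller and cheaper.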
JQuilty t1_j6z6l0e wrote
Reply to comment by IWantAGrapeInMyMouth in [N] Microsoft integrates GPT 3.5 into Teams by bikeskata
Why must we wait for Q3? Our dynamic process allows us to skate the puck in real time.
bablador t1_j6z6k8m wrote
Reply to [D] I'm at a crossroads: Bayesian methods VS Reinforcement Learning, which to choose? by fuscarili
Why not both?
bring_dodo_back t1_j6z6bxm wrote
Reply to comment by fuscarili in [D] I'm at a crossroads: Bayesian methods VS Reinforcement Learning, which to choose? by fuscarili
Bayesian methods have far more applications in industry than reinforcement learning does.
keisukegoda3804 t1_j6z5ppy wrote
Reply to [N] Microsoft integrates GPT 3.5 into Teams by bikeskata
This is devastating to startups in the meeting transcription market. Solutions like Otter and Fireflies cost $15-20 per month and offer only a fraction of the feature set of Teams Premium. Really interested to see how this develops.
YOLOBOT666 t1_j6z5gjn wrote
Reply to [D] I'm at a crossroads: Bayesian methods VS Reinforcement Learning, which to choose? by fuscarili
Depends on the RL course content: if it's just following along with the RL bible (Sutton & Barto), you could do it yourself. Check out the syllabus/slides from previous years to get an idea. The assignments/projects are where you learn the most IMO, especially for RL.
bigabig t1_j6z3a6d wrote
Reply to [D] Why do LLMs like InstructGPT and LLM use RL to instead of supervised learning to learn from the user-ranked examples? by alpha-meta
I thought this was also because you don't need as much supervised training data, since you 'just' have to train the reward model in a supervised fashion?
_Arsenie_Boca_ t1_j6z24n6 wrote
Reply to [D] Why do LLMs like InstructGPT and LLM use RL to instead of supervised learning to learn from the user-ranked examples? by alpha-meta
Since it wasn't mentioned so far: RL does not require the loss/reward to be differentiable. This lets us learn from complete generated sentences (LM sampling is not differentiable) rather than only at the token level.
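A toy illustration of this point, as a minimal numpy sketch (a one-step REINFORCE "bandit" standing in for sentence generation; everything here is illustrative, not how RLHF is actually implemented): the reward is a black-box function of the sampled token, yet the policy still improves, because the gradient flows through log-probabilities rather than through the sampling step.

```python
import numpy as np

rng = np.random.default_rng(0)
K, target = 5, 2        # vocabulary of 5 "tokens"; token 2 is the one we want rewarded
logits = np.zeros(K)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for step in range(2000):
    p = softmax(logits)
    a = rng.choice(K, p=p)                  # sampling: not differentiable
    reward = 1.0 if a == target else 0.0    # black-box, non-differentiable reward
    # REINFORCE: grad of log p(a) w.r.t. logits is onehot(a) - p
    grad = (np.eye(K)[a] - p) * reward
    logits += 0.1 * grad

print(softmax(logits)[target])  # probability of the rewarded token, now close to 1
```

The same trick scales up to scoring whole generated sequences with a learned reward model, which is exactly what supervised token-level training cannot express.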
fraktall t1_j6z1m35 wrote
Reply to comment by nombinoms in [R] On the Expressive Power of Geometric Graph Neural Networks by chaitjo
Damn, I had no idea, thx, will now go read papers
nicholsz t1_j6z1jgm wrote
Reply to comment by netw0rkf10w in [D] ImageNet normalization vs [-1, 1] normalization by netw0rkf10w
Oh I meant fitting to the statistics of ImageNet / the training dataset. There's always got to be some kind of normalization
netw0rkf10w OP t1_j6z15t0 wrote
Reply to comment by nicholsz in [D] ImageNet normalization vs [-1, 1] normalization by netw0rkf10w
I think normalization will be here to stay (maybe not the ImageNet one though), as it usually speeds up training.
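For reference, the two normalization schemes under discussion side by side, as a minimal numpy sketch (the per-channel constants are the ones conventionally used with torchvision's ImageNet-pretrained models):

```python
import numpy as np

# A dummy image batch scaled to [0, 1], shape (N, H, W, C)
img = np.random.default_rng(0).integers(0, 256, size=(2, 8, 8, 3)).astype(np.float32) / 255.0

# Scheme 1: ImageNet per-channel statistics
imagenet_mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
imagenet_std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
x1 = (img - imagenet_mean) / imagenet_std

# Scheme 2: map [0, 1] to [-1, 1]
x2 = img * 2.0 - 1.0

print(x2.min() >= -1.0 and x2.max() <= 1.0)  # True
```

Both center the inputs roughly around zero, which is the part that matters for optimization; the dataset-specific statistics mostly matter when reusing pretrained weights.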
smt1 t1_j6z14sq wrote
Reply to comment by new_name_who_dis_ in [N] Microsoft integrates GPT 3.5 into Teams by bikeskata
lots of words, low information density per sentence
netw0rkf10w OP t1_j6z0oia wrote
Reply to comment by melgor89 in [D] ImageNet normalization vs [-1, 1] normalization by netw0rkf10w
So no noticeable difference in performance in your experiments?
ProSmokerPlayer t1_j6yz9wl wrote
If you want to create a winning poker bot, you need a few things.
OCR software to recognise stack sizes, position at the table, cards, antes, blinds, etc. - all the variables.
Then it needs to translate this into something legible so that the spot can be looked up in a GTO database.
The DB gives the answer, and voilà, you have solved poker.
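The pipeline described above could be sketched roughly like this. Every name here is hypothetical: the OCR step is stubbed out and the "GTO database" is a tiny dict standing in for a real precomputed solver output.

```python
def ocr_table_state(screenshot) -> dict:
    # A real bot would run OCR over the screenshot; here we return a fixed state.
    return {"position": "BTN", "stack_bb": 40, "action": "folded_to_us"}

def state_key(state: dict) -> str:
    # Translate the raw state into a canonical key the database understands.
    return f"{state['position']}|{state['stack_bb']}bb|{state['action']}"

# Stand-in for a precomputed GTO solution: spot -> action frequencies.
GTO_DB = {
    "BTN|40bb|folded_to_us": {"raise": 0.48, "fold": 0.52},
}

def solve_spot(screenshot) -> dict:
    state = ocr_table_state(screenshot)
    return GTO_DB.get(state_key(state), {"fold": 1.0})

print(solve_spot(None))  # the stored mixed strategy for this spot
```

The hard parts in practice are the OCR layer and building a lookup that covers enough spots, not the lookup itself.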
ProSmokerPlayer t1_j6yyvty wrote
Reply to comment by Much_Blacksmith_1857 in [P] AI Poker/Machine Learning/Game-Theory by Much_Blacksmith_1857
Where exactly does anyone play 8-max poker? I'll never understand why people solve for 8-max; it's so frustrating. Also, antes at over 10%? Non-poker players solving poker.
visarga t1_j6zaf56 wrote
Reply to comment by TheTerrasque in [N] Microsoft integrates GPT 3.5 into Teams by bikeskata
interference is all you need