Recent comments in /f/MachineLearning

milleniumsentry t1_j95v74v wrote

I disagree. They are completely related, and tied directly to the black box problem.

I wish I'd found this article a month ago, because it sums up a lot of the 'AIs are unknowable' nonsense.

Being a black box is not an inherent quality of an AI. It's an inherent quality of a badly designed AI. Eventually, we will have methods that allow us to query why a particular result was given.

They are unknowable because we have not designed them to be knowable. The tech is in its infancy. Give it time.
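Tooling in that direction already exists, even for opaque models. A minimal sketch of one such query method, permutation importance, using a toy linear "model" in plain Python (all names and numbers here are illustrative, not any specific library's API):

```python
import random

# Toy "model": a fixed linear scorer over three input features.
def model(features):
    weights = [0.8, 0.1, -0.5]
    return sum(w * f for w, f in zip(weights, features))

# Small synthetic dataset; targets come from the model itself,
# so the unshuffled error is exactly zero.
data = [[1.0, 2.0, 0.5], [0.2, 1.5, 1.0], [2.0, 0.1, 0.3]]
targets = [model(x) for x in data]

def error(dataset):
    return sum((model(x) - t) ** 2 for x, t in zip(dataset, targets))

# Permutation importance: shuffle one feature column at a time and see
# how much the error grows. Bigger growth = the model relied on it more.
def importance(feature_idx, trials=100, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        column = [x[feature_idx] for x in data]
        rng.shuffle(column)
        shuffled = [x[:feature_idx] + [v] + x[feature_idx + 1:]
                    for x, v in zip(data, column)]
        total += error(shuffled)
    return total / trials

scores = [importance(i) for i in range(3)]
print(scores)  # feature 0 (weight 0.8) should dominate feature 1 (0.1)
```

The same idea works on a genuinely black-box model: you never look inside it, you only probe which inputs its answers depend on.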

2

tdls_to t1_j95pqh9 wrote

According to OpenAI's TOS, you can send them an email to opt out of them using your prompts for model training, because the rights to the "content" you send to their API remain perpetually yours. They also license the outputs to you to use as you see fit (since you're paying them to use the service). So, on paper, you can use it for "serious" purposes without an issue. That said, the legal aspects of this whole thing are still a work in progress, and I strongly suggest you discuss the implications with your internal legal team before sending any sensitive company info.

5

LegendOfHiddnTempl OP t1_j95ok8m wrote

>We present a general framework for the garment animation problem through unsupervised deep learning inspired by physically based simulation. Existing trends in the literature already explore this possibility. Nonetheless, these approaches do not handle cloth dynamics. Here, we propose the first methodology able to learn realistic cloth dynamics unsupervisedly, and hence a general formulation for neural cloth simulation. The key to achieving this is to adapt an existing optimization scheme for motion from simulation-based methodologies to deep learning. Then, analyzing the nature of the problem, we devise an architecture able to automatically disentangle static and dynamic cloth subspaces by design. We will show how this improves model performance. Additionally, this opens the possibility of a novel motion augmentation technique that greatly improves generalization. Finally, we show it also allows control over the level of motion in the predictions. This is a useful, never-before-seen tool for artists. We provide a detailed analysis of the problem to establish the bases of neural cloth simulation and guide future research into the specifics of this domain. arxiv.org
> github.com/hbertiche/NeuralClothSim

31

overactor t1_j95oem0 wrote

Reply to comment by KPTN25 in [D] Please stop by [deleted]

Your ridiculous hyperbole is not helping your argument. It's entirely possible that sentience is an instrumental goal for achieving a certain level of text prediction. And I don't see why a sufficiently large LLM definitely couldn't achieve it. It could be that another few paradigm shifts will be needed, but it could also be that all we need to do is scale up. I think anyone who claims to know whether LLMs can achieve sentience is either ignorant or lying.

1

guaranteednotabot t1_j95m1yr wrote

How is the cost of queries to AI tools such as ChatGPT determined?

Sorry for the beginner question, but I keep seeing figures like 2 cents per query quoted for ChatGPT.
How much processing power is required to complete a query? Does it scale with the number of parameters, or does the parameter count only affect memory usage?
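Not an exact answer, but a rough back-of-envelope sketch of why compute (not just memory) scales with parameter count: a decoder-only transformer does roughly 2 * N floating-point operations per processed token, where N is the parameter count. All the concrete numbers below (model size, token counts, cost per FLOP) are illustrative assumptions, not real pricing:

```python
# Rough inference-cost model: ~2 * N FLOPs per token for an N-parameter
# decoder-only transformer, so compute scales linearly with parameters.
# Every concrete number here is an assumption for illustration.

def query_flops(n_params, prompt_tokens, output_tokens):
    return 2 * n_params * (prompt_tokens + output_tokens)

def query_cost_usd(n_params, prompt_tokens, output_tokens,
                   usd_per_flop=1e-18):  # assumed hardware + energy rate
    return query_flops(n_params, prompt_tokens, output_tokens) * usd_per_flop

# A hypothetical 175B-parameter model handling a 500-token query:
cost = query_cost_usd(175e9, prompt_tokens=100, output_tokens=400)
print(f"~${cost:.6f} per query")
```

Under these toy assumptions the compute cost doubles if you double either the parameter count or the number of tokens; real pricing also folds in batching, hardware utilization, and margins.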

1

nanashi500 t1_j95l3xx wrote

You can’t ask for this to stop because:

  1. Not everybody is knowledgeable
  2. Not everybody is smart

That being said, the questions can become a bother to answer over time, so I just pick and choose if and when I want to respond.

1

KPTN25 t1_j95kx5j wrote

Reply to comment by overactor in [D] Please stop by [deleted]

Reproducing language is a very different problem from true thought or self-awareness, that's why.

LLMs are no more likely to become sentient than a linear regression or random forest model. Frankly, they're no more likely than a peanut butter sandwich to achieve sentience.

Is it possible that we've bungled our study of peanut butter sandwiches so badly that we've missed some incredible sentience-granting mechanism? I guess, but the odds are so absurdly infinitesimal that they're not worth entertaining in practice.

The black box argument is intellectually lazy. We have a better understanding of what is happening in LLMs and other models than most clickbaity headlines imply.

1

overactor t1_j95hrop wrote

Reply to comment by KPTN25 in [D] Please stop by [deleted]

I really don't think you can say that with such confidence. If you were saying that no existing LLMs have achieved sentience and that they can't at the scale we're working at today, I'd agree. But I really don't see how you can be so sure that increasing the size and training data couldn't result in sentience somewhere down the line.

1

gopher9 t1_j95hafv wrote

Neural networks are black boxes by design: you get great performance in exchange for explainability. That doesn't mean, though, that you have no control over the result.

> Example Stable Diffusion. You don't like what the eyes look like, yet you don't know how to make them more realistic.

ControlNet lets you guide image generation: https://github.com/lllyasviel/ControlNet.

> Example NLP. The chatbot does not give you logical answers? Try another random model.

Or give it some examples and ask it to reason step by step. Alternatively, fine-tune it on examples. You can also teach an LLM to use external tools, which avoids using the LLM itself for the reasoning.
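A concrete sketch of the "examples plus step-by-step" idea, just constructing the prompt string (the actual model call is omitted, and the worked example content is made up for illustration):

```python
# Few-shot, step-by-step prompting: prepend worked examples whose answers
# spell out the reasoning, then ask the model to follow the same pattern.
EXAMPLES = [
    ("A shop has 3 boxes of 12 apples. How many apples?",
     "Each box has 12 apples. 3 boxes means 3 * 12 = 36. Answer: 36."),
]

def build_prompt(question):
    parts = []
    for q, a in EXAMPLES:
        parts.append(f"Q: {q}\nA: Let's think step by step. {a}")
    # End with the new question and an open-ended reasoning cue.
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(parts)

prompt = build_prompt("A train has 4 cars of 20 seats. How many seats?")
print(prompt)
```

The trailing "Let's think step by step." nudges the model to emit its reasoning before the answer, which tends to improve multi-step questions.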

18

DigThatData t1_j95gxlf wrote

Reply to comment by maxToTheJ in [D] Please stop by [deleted]

I think something changed in the past week, though. /r/MLQuestions has recently been getting a lot of "can you recommend a free AI app that does <generic thing>?" posts. I'm wondering if a news piece went viral and turned a new flood of people on to what's been happening in AI.

1