Recent comments in /f/MachineLearning

mjaltthrowaway t1_j7oqeo1 wrote

I suppose the first question that comes to mind for me is: what problems exist in the vaccine world (besides poor customer sentiment) that ML/AI could potentially solve or enhance? Personalization?

Maybe OP can use a similar method of analysis.

TaXxER t1_j7omop6 wrote

Generative models do redistribute though, often outputting near copies:

https://openaccess.thecvf.com/content/WACV2021/papers/Tinsley_This_Face_Does_Not_Exist..._But_It_Might_Be_Yours_WACV_2021_paper.pdf

https://arxiv.org/pdf/2203.07618.pdf

Copyright covers not only republishing but also derived works. I think it is a very reasonable position to consider any generative-model output o, for which some training-set image Xi had a particularly large influence on o, to be a derived work of Xi.

A similar story holds for code-generation models and software licensing: Copilot was trained on lots of software repos whose licenses require all derived work to be licensed under an at least equally permissive license. Copilot may very well output a specific code snippet based largely on what it has seen in one particular repo, thereby potentially subjecting the user to the licensing obligations that come with deriving work from that repo.

I’m an applied industry ML researcher myself, and am very enthusiastic about the technology and state of ML. But I also think that as a field we have unfortunately been careless about ethical and legal aspects.

mikljohansson t1_j7ojjjm wrote

I have been building a PyTorch > ONNX > TFLite > TFMicro toolchain for a project to get a vision model running on an ESP32-CAM with PlatformIO and the Arduino framework. Perhaps it can be of use as a reference:

https://github.com/mikljohansson/mbot-vision

Some caveats to consider when embarking on this kind of project:

  • PyTorch/ONNX uses a channels-first memory format, while TensorFlow is channels-last. Converting the model with onnx-tf inserts lots of Transpose ops into the graph, which decreases performance (by 3x for my model) and increases memory usage. I'm using the onnx2tf module instead, which also converts operators to channels-last
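To see the layout mismatch concretely, here's a small numpy sketch (variable names are mine) of the NCHW-to-NHWC permutation that a naive converter has to wrap around every op it can't rewrite natively:

```python
import numpy as np

# PyTorch/ONNX activations are NCHW (channels-first);
# TensorFlow/TFLite kernels expect NHWC (channels-last).
nchw = np.zeros((1, 3, 96, 96), dtype=np.float32)

# This permutation is what each inserted Transpose op performs --
# every one of them costs a real memory copy on an MCU.
nhwc = np.transpose(nchw, (0, 2, 3, 1))

print(nchw.shape, "->", nhwc.shape)  # (1, 3, 96, 96) -> (1, 96, 96, 3)
```

A converter that rewrites the weights themselves to channels-last (as onnx2tf does) avoids most of these runtime copies.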

  • You may want to fully quantize the model to int8, since fp16/fp32 is really slow on smaller MCUs, especially those lacking FPUs and vector instructions. Also watch out for Quantize/Dequantize ops in the converted graph: they mean some op didn't support quantization and had to be wrapped and executed (slowly) in fp16/fp32 mode
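For intuition, int8 quantization is an affine mapping, real ≈ scale × (q − zero_point). A conceptual numpy sketch (my own helpers, not a TFLite API) of the round trip that a stray Quantize/Dequantize pair performs at runtime:

```python
import numpy as np

def quantize(x, scale, zero_point):
    # Map float values onto int8 with an affine transform.
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

x = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
scale, zp = 1.0 / 127.0, 0  # symmetric range [-1, 1]

q = quantize(x, scale, zp)
x_hat = dequantize(q, scale, zp)
print(np.max(np.abs(x - x_hat)))  # worst-case error is about scale / 2
```

The error is bounded by half the scale, which is why a well-chosen per-tensor (or per-channel) scale matters so much for accuracy.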

  • There may be a lot of performance to gain from hardware-optimized kernels, but it depends on which MCU and which operators your model uses. E.g. for the ESP32 there's ESP-NN, which greatly sped up inference times for my project (2x)

https://github.com/espressif/esp-nn https://github.com/espressif/tflite-micro-esp-examples

And for really tiny MCUs there's this library, which could also be useful; it doesn't support many operators, but it did work in my testing for simple networks

https://github.com/sipeed/TinyMaix

  • Figuring out memory needs and performance is a bit trickier. I've simply been using the torchinfo module, plus the graph output and graph statistics that onnx2tf displays, to see how many multiplies the model uses and the approximate parameter and tensor memory usage. Then I've run an improvement cycle: "train" the model for 1 step, deploy it to the hardware to measure the FPS, then adjust the hyperparameters and model architecture until the FPS is acceptable. Then train it fully to see if that model config can do the job. And then iterate...
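The sizing numbers that torchinfo and onnx2tf report can also be sanity-checked by hand. A back-of-the-envelope helper (my own, not from the repo) for a single Conv2d layer:

```python
def conv2d_cost(c_in, c_out, k, h_out, w_out):
    """Parameter count and multiply count for one Conv2d layer (bias included)."""
    params = c_out * (c_in * k * k + 1)
    muls = c_out * c_in * k * k * h_out * w_out
    return params, muls

# e.g. a 3x3 conv, 3 -> 8 channels, producing a 96x96 feature map
params, muls = conv2d_cost(3, 8, 3, 96, 96)
print(params, muls)  # 224 parameters, 1990656 (~2M) multiplies
```

Summing this over the layers gives a rough floor on flash (parameters, times 1 byte when int8-quantized) and on per-inference compute, before any kernel-level speedups.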
MLRecipes OP t1_j7oec5y wrote

No — it does encompass GLMs, but the technique also works when there is no response (you then need to put a constraint on the parameters), with truly nonlinear models (there are time-series examples in the book), or for particular clustering cases. I like to call it unsupervised regression; a particular case, with an appropriate constraint on the parameters, corresponds to classic regression. More about it here. As for shape classification, see here.

astrange t1_j7oduw3 wrote

This is wishful thinking. ChatGPT, being a computer program, doesn't have features it's not designed to have, and it's not designed to have this one.

(By designed, I mean has engineering and regression testing so you can trust it'll work tomorrow when they redo the model.)

I agree a fine-tuned LLM can be a large part of it, but virtual assistants already have LMs and obviously don't always work that well.

Fit-Meet1359 t1_j7oa9yf wrote

You will be able to expand the sidebar, or go directly to the Chat tab, to talk to it full screen just like ChatGPT. The search-page sidebar is only there to make the new experience more visible. See https://medium.com/@owenyin/scoop-oh-the-things-youll-do-with-bing-s-chatgpt-62b42d8d7198

thiru_2718 t1_j7o82dn wrote

Nice work! There are some intriguing sections here that I definitely want to take a look at.

Quick question, with regards to this quote in the preface: "For instance, regression techniques ... are presented as a single method, without using advanced linear algebra."

Are you referring to Generalized Linear Models? I don't see any references to GLMs in my brief skim, but I can't think of how else regression could be presented as a single method.

Also, is there any place where we can get a preview of "Shape Classification and Synthetization via Explainable AI" section?

infinity t1_j7o28ve wrote

Is it just me who finds the clunky Bing UX underwhelming? Ditto for you.com, which fails to generate anything for me 50% of the time. I wish these companies spent some time thinking about the chat UX as they integrate it with search. ChatGPT has a really simple, great UX, and it works really well for some use cases I like.
