Recent comments in /f/MachineLearning

ggf31416 t1_j741sxn wrote

One possibility is GPU acceleration using the cuML framework, but if you must use a specific framework like sklearn, it won't be feasible. https://medium.com/rapids-ai/accelerating-random-forests-up-to-45x-using-cuml-dfb782a31bea
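A minimal sketch of what that looks like, assuming a RAPIDS install with a CUDA-capable GPU (the data here is synthetic, just for illustration). cuML's RandomForestClassifier mirrors the sklearn API, so moving training onto the GPU is mostly an import swap:

```python
import cupy as cp
from cuml.ensemble import RandomForestClassifier

# Synthetic data generated directly on the GPU; cuML also accepts NumPy arrays.
X = cp.random.rand(100_000, 20, dtype=cp.float32)
y = (X[:, 0] > 0.5).astype(cp.int32)

# Same constructor parameters you'd use with sklearn's random forest.
clf = RandomForestClassifier(n_estimators=100, max_depth=16)
clf.fit(X, y)
preds = clf.predict(X)
```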

There are some alternatives such as Google, AWS, and Gradient; you may be able to get student credits. Also, even if you don't need a GPU, you can rent an instance with many CPU cores at Vast.ai for cheap (even with the GPU, it's cheaper than a CPU-only AWS instance with the same number of cores); for example, the cheapest instance with 16 vCPUs is < $0.20/hour and only needs a credit card. The main issue with Vast.ai is that you should save your results before shutting down the instance, because storage is tied to the machine, which may become unavailable.
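For that last point, a minimal sketch of copying results off the instance before shutdown, assuming you have an S3 bucket and AWS credentials configured on the machine (the bucket and file names are placeholders):

```python
import boto3

# Upload trained artifacts to durable storage so they survive the instance.
s3 = boto3.client("s3")
s3.upload_file("results/model.pkl", "my-results-bucket", "runs/model.pkl")
```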

1

Jurph t1_j73ozbe wrote

Hey, I dove into "Progressive Growing of GANs" without knowing what weights were. And now here I am, four or five years later: I've trained my own classifiers based on ViTs and DNNs, written Python interfaces for them, and I'm working on tooling to make Automatic1111's GUI behave better with Stable Diffusion. We've all got to start somewhere.

3

asarig_ OP t1_j73g1ne wrote

Reply to comment by gdpoc in [R] Graph Mixer Networks by asarig_

Thanks for your interest. If you open an issue on GitHub about this, it will serve as a reminder, and I can share the pre-trained weights at the appropriate time.

2

seattleite849 OP t1_j737hp8 wrote

I got a bunch of credits from cloud hosting providers, haha. Also, since this is a beta, I wanted a generous free tier. To connect with banana.dev, you would need to sign up for your own account and pass your API key into the Python function that gets run on cakework.
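A hypothetical sketch of that pattern; the function and variable names here are my own illustrations, not cakework's or banana.dev's actual API:

```python
import os

def run_banana_task(api_key: str, inputs: dict) -> dict:
    # Placeholder for the real banana.dev call; with their SDK you would
    # authenticate with api_key and submit `inputs` to your deployed model.
    return {"echo": inputs, "authenticated": bool(api_key)}

# The key comes from the caller's own environment, never hard-coded in the task.
api_key = os.environ.get("BANANA_API_KEY", "")
print(run_banana_task(api_key, {"prompt": "hello"}))
```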

2

gdpoc t1_j7337zm wrote

Reply to comment by asarig_ in [R] Graph Mixer Networks by asarig_

That is fascinating work.

I'd like to read the paper and will, given the time; are the results promising?

It seems plausible that a graph with a small branching factor could replicate the logarithmic search complexity of the input space, at least to some extent; I'm very interested in exploring this space.
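A toy illustration of that intuition (my own example, not from the paper): in a balanced structure with branching factor b, reaching any of n nodes takes about log_b(n) hops, which is where the logarithmic-search picture comes from.

```python
import math

def hops_to_cover(n_nodes: int, branching: int) -> int:
    """Depth of a balanced tree with the given branching factor and node count."""
    return math.ceil(math.log(n_nodes, branching))

print(hops_to_cover(1_000_000, 4))   # 10 hops
print(hops_to_cover(1_000_000, 16))  # 5 hops
```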

2