Recent comments in /f/MachineLearning

londons_explorer t1_j9ft1x2 wrote

> That's not true, and has already been shown to be false by Sydney going off on users who seemed to be having harmless chats.

The screenshotted chats never include the start... I suspect that at the beginning of the conversation they said something to trigger this behaviour.

3

imaginethezmell t1_j9fsdtm wrote

Do you know how to check for these errors?

I added the keys and then tried to send a prompt, and it gave that error.

It seems to be sending too many requests at once to the OpenAI API and hitting a rate limit, even after just 1 request?

> Failed to load resource: the server responded with a status of 429 ()

" An error occurred. :(

An error occurred. :(

An error occurred. :(

An error occurred. :(

An error occurred. :(

An error occurred. :(

An error occurred. :(

An error occurred. :( "

1

Red-Portal t1_j9fqeq2 wrote

Depends on the area of focus. If you're a Bayesian machine learning, statistical learning, or optimization person, AISTATS is the way to go. It's not just about prestige; it's just a better experience. The reviews are less noisy, and the venue itself is more focused. It just feels like home. If you're more of an AI person than an ML person, then AAAI is probably better suited.

2

I_will_delete_myself OP t1_j9fp5fh wrote

Try looking into whether they have an API. Shutdowns are rare, but they happen, so I've only run into one once. Having the cloud on your mobile device is great: it lets you check in from anywhere and do simple things quickly.

1

I_will_delete_myself OP t1_j9fodao wrote

>Can you recommend a tutorial or something that explains the steps to move from (e.g. pytorch) training on your own machine to training that model in the Cloud (e.g. AWS)?

Same as running on your own machine.

>What type of instances to chose, how/where to store data, making sure Nvidia/CUDA stuff is working properly, etc.?

Just look up an EC2 instance or VM that has the GPU you want and there you go. `nvidia-smi` is the command that lists the GPUs the machine can see; if it outputs your GPU, the driver is working. I would also suggest checking in your code that CUDA is available.
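Both checks can be scripted. A hedged sketch of the two-level check (driver-level via `nvidia-smi`, framework-level via PyTorch); the helper names are my own, and the PyTorch import is guarded so the snippet also runs on a CPU-only box:

```python
import shutil
import subprocess


def gpu_visible_to_driver() -> bool:
    """True if nvidia-smi exists and reports at least one GPU."""
    if shutil.which("nvidia-smi") is None:
        return False
    result = subprocess.run(["nvidia-smi", "-L"],
                            capture_output=True, text=True)
    return result.returncode == 0 and "GPU" in result.stdout


def cuda_available_in_code() -> bool:
    """True if PyTorch can see CUDA; False if torch is missing or CPU-only."""
    try:
        import torch
        return torch.cuda.is_available()
    except ImportError:
        return False
```

If the first check passes but the second fails, the usual culprit is a mismatch between the installed PyTorch build and the CUDA driver version.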

I prefer to use an EC2 instance or VM because it's normally cheaper, but you have to do your own research on pricing. Cloud is a competitive market, so there is always someone ready to offer an A100 at a cheaper price. I heard Lambda Cloud was super cheap for on-demand.

1

pyepyepie t1_j9fn745 wrote

> Differential Equations

I have a (somewhat) strong math background (I took many courses with the math departments of the universities I attended) and a strong SW background (web, then MLE for a few years) - however, I have never used or studied Differential Equations (god knows why). I understand quite deeply how calculus and linear algebra relate to neural networks, and probability is everywhere in the field by definition - but could you explain when you actually need knowledge of Differential Equations? I ask out of ignorance, again - I have never studied them. Could you link them to ML concepts that I probably don't understand well because of that gap? Also, I would add optimization to the answer :)

Edit: also 2 - how deeply would you suggest learning it? https://www.youtube.com/watch?v=9fQkLQZe3u8 what do you think about this one?

1

Optimal-Asshole t1_j9fktzg wrote

> Are there actual NN methods that can solve PDEs without depending on the initial conditions?

The initial condition needs to be known (though it can be noisy, e.g. measurements corrupted by noise [1]), but NN-based models can solve some parametric PDEs faster than traditional solvers. [2]

There is also a lot of work in training NNs on data generated from traditional methods, and this can be combined jointly with the above method to solve a whole class of problems at once. [3]

Solving a whole parametric family of PDEs (i.e., a parameterized family of initial conditions) and handling complicated geometries will be the next avenue for this specific field, IMO. In fact, it is being actively worked on.

[1] https://arxiv.org/abs/2205.07331

[2] https://arxiv.org/abs/2110.13361

[3] https://arxiv.org/abs/2111.03794
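To make the "initial condition as a training constraint" idea concrete, here is a toy physics-informed sketch for the ODE u'(t) = -u(t), u(0) = 1 (exact solution exp(-t)). This assumes PyTorch and is my own illustration, not the method of the cited papers; architecture and hyperparameters are arbitrary:

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

# Collocation points where the physics residual is enforced.
t = torch.linspace(0.0, 2.0, 64).unsqueeze(1).requires_grad_(True)

def loss_fn():
    u = net(t)
    # du/dt via autograd -- the residual u' + u should vanish for the true solution.
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    residual = (du + u).pow(2).mean()
    # Initial-condition penalty: u(0) = 1. This is where the known IC enters.
    ic = (net(torch.zeros(1, 1)) - 1.0).pow(2)
    return residual + ic

loss0 = loss_fn().item()  # loss of the untrained network
for _ in range(500):
    opt.zero_grad()
    loss = loss_fn()
    loss.backward()
    opt.step()
final_loss = loss_fn().item()
```

The noisy-IC setting of [1] would replace the clean `1.0` target with corrupted measurements; the operator-learning approach of [2] instead trains a network that maps the whole initial condition to the solution, amortizing over the parametric family.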

1

abnormal_human t1_j9fhvd5 wrote

I went through this about five years ago.

For me, the main job was learning all of the terminology and getting a feel for which techniques are used to solve what kinds of problems. At the time when I went through this, I spent many hours listening to podcasts. Just listening to people talk about the stuff helped me get a map of the territory and decide where to dive deeper.

Then as soon as I had even the slightest grasp of a possible solution to a problem in my domain, I would go try to attack it. In this early era I made hundreds of Jupyter notebooks. Each one was me spending a few hours trying out a technique on some data from my business. Some worked, some didn't, but I got a lot of experience in a short time.

I had a strong math and SWE background to begin with. If you don't, you may have some extra catching up to do. As far as math goes, Linear Algebra is the most important. Probability Theory and Differential Equations are also very applicable. Most SWE work tied to Machine Learning is pretty basic. Lots of Python, but it helps to understand how computers work because you do get into data at scale pretty often.

At this point, I've deployed many ML systems to production, they are serving hundreds of thousands of users daily, and I can keep up with experts when conversing, designing stuff, etc.

2

pyepyepie t1_j9fgpej wrote

A week is not enough, dude - try a few more days and maybe you will beat the market! A tip on how to beat 99% of the people who do ML for the stock market: search for "extrapolation with machine learning", and then search for how well it works. You can try "how well does extrapolation work with machine learning". If you feel lazy, you can ask ChatGPT.

1