Recent comments in /f/MachineLearning

MysteryInc152 t1_j8wx6tx wrote

Not really necessary. An LLM's brain might be static itself, but the connections it makes between neurons are very much dynamic. That's why in-context learning is possible. LLMs already mimic meta-learning and fine-tuning when you few-shot.

https://arxiv.org/abs/2212.10559
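The linked paper's "dual form" argument can be sketched in a few lines of numpy. This is a toy illustration, not the paper's actual derivation: the key/value matrices, dimensions, and the fixed "pretrained" projection `W0` are all made up here. The point is that linear attention over demonstration tokens acts like an additive weight update applied to the query, which is the sense in which few-shot prompting mimics fine-tuning:

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_v, n_demo = 8, 4, 5

# Hypothetical demonstration (few-shot) tokens and one query token.
K_demo = rng.normal(size=(n_demo, d))    # keys from in-context examples
V_demo = rng.normal(size=(n_demo, d_v))  # values from in-context examples
q = rng.normal(size=d)                   # query vector

# Stand-in for a frozen, pretrained projection (the "static brain").
W0 = rng.normal(size=(d_v, d))

# Linear attention over the demonstrations collapses to an outer-product
# weight update dW, so the few-shot output is (W0 + dW) @ q -- the demos
# behave like an implicit update to frozen weights, not a weight change.
dW = V_demo.T @ K_demo     # (d_v, d) update accumulated from the demos
out_icl = (W0 + dW) @ q    # "few-shot" output
out_zero = W0 @ q          # "zero-shot" output

# The demos' entire contribution is the update term dW @ q.
print(np.allclose(out_icl - out_zero, dW @ q))
```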

1

bubudumbdumb t1_j8wtq42 wrote

https://www.investopedia.com/terms/b/backtesting.asp

https://en.m.wikipedia.org/wiki/Modern_portfolio_theory

In extreme synthesis:

Markets are not stationary environments, so you have to expect and mitigate drift. This has implications for the evaluation methodology and for the choice of time series models, which may need to be calibrated with fewer data points.

A strategy to make money in the markets allocates capital across multiple financial instruments using multiple signals, so the value of a signal is the predictive advantage it provides when stacked on top of other commonly used signals. If the predictive capability of news sentiment is easily replicated by a linear combination of cheaply available signals, then it's not worth much.
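That "worth nothing if replicated by a linear combination" test can be sketched with synthetic data. Everything here is made up for illustration: the "sentiment" signal is constructed to be a noisy mix of the baseline signals, so its incremental R² over them should come out near zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical cheap, commonly used signals (e.g. momentum, value).
baseline = rng.normal(size=(n, 3))
returns = baseline @ np.array([0.5, -0.3, 0.2]) + rng.normal(scale=1.0, size=n)

# A "news sentiment" signal that is secretly just a noisy linear
# combination of the cheap signals -- i.e. it carries no new information.
sentiment = baseline @ np.array([0.4, -0.2, 0.1]) + rng.normal(scale=0.1, size=n)

def r2(X, y):
    """In-sample R^2 of an OLS fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

base_r2 = r2(baseline, returns)
full_r2 = r2(np.column_stack([baseline, sentiment]), returns)

# The stacked model barely improves on the baseline, so this signal
# adds (almost) no predictive advantage and is worth little.
print(f"incremental R^2 from sentiment: {full_r2 - base_r2:.4f}")
```

A real evaluation would of course use out-of-sample (walk-forward) R² rather than in-sample fit, per the backtesting link above.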

1

Shinsekai21 t1_j8wrcvo wrote

>HuggingFace, FastAI and similar frameworks are designed to lower the barrier to ML, such that any person with programming skills can harness the power of SoTA ML progress.

I started out with FastAI and am now learning PyTorch. I agree.

I'm more of a top-down student (learn the practical stuff first, then the fundamentals). FastAI does a great job of showing me what is possible and interesting through their lectures.

I moved to PyTorch because I wanted to understand more about what's underneath FastAI. I'm currently doing ZeroToMastery PyTorch and found that the knowledge I gained from FastAI is helping a lot.

−1