Recent comments in /f/MachineLearning

C0hentheBarbarian t1_j9o5068 wrote

I work in NLP. My work mainly consists of fine-tuning NLP models. With the rise of LLMs I'm seeing a lot of my work becoming prompt engineering. I'm happy to pick up the new skill but I'd like to know what avenues I have to upskill beyond being a prompt engineer without a PhD. Feels like all the learning I did on model architectures etc. is going to waste. There are still a few projects that need me to fine-tune a model for text classification etc., but as LLMs get better I suspect I'll need better skills to go beyond being a prompt engineer. For anyone else in NLP who doesn't have a PhD and doesn't have any experience building model architectures/training from scratch, how are all of you trying to upskill in these times?

EDIT: I worded the question to ask only people who don't have a PhD, but I would actually like to know everyone's perspective on this.

1

formerstapes t1_j9nvvbi wrote

The only thing worse than those posts are these posts.

Obviously there are going to be noobs here who don't understand anything about ML. If you don't want to engage with them, then just don't.

If you're such a hardass you can't put up with being around some noobs, just sit in your basement and read ML papers all day.

1

wait_hope t1_j9ntd55 wrote

My goal is to deploy an ML model which can perform price prediction on an exchange-traded fund (ETF), which is essentially an aggregation of stocks. A very popular example is an ETF that tracks the S&P 500 (which is not actually the ETF I want to predict on; the one I want to predict on holds only about 30 stocks).

Is an ML model trained/tested on individual stocks which are *not* in the ETF a valid way of building a model that can accomplish price prediction on the ETF?
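One cheap way to sanity-check this before committing to real data is a transfer experiment on synthetic series: train on a universe of individual "stocks", then evaluate on a "basket" built from *different* stocks. Everything below is a minimal sketch with made-up random-walk data (all names, lag counts, and weights are assumptions, not anything from the question):

```python
# Hypothetical sketch: does a model trained on individual stocks transfer
# to an ETF (a weighted basket of *other* stocks)? All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

def make_prices(n_days=500):
    # Random-walk price series as a synthetic stand-in for one stock.
    returns = rng.normal(0.0005, 0.01, n_days)
    return 100 * np.cumprod(1 + returns)

def lagged_features(prices, n_lags=5):
    # Predict the next daily return from the previous n_lags daily returns.
    r = np.diff(prices) / prices[:-1]
    X = np.column_stack([r[i:len(r) - n_lags + i] for i in range(n_lags)])
    y = r[n_lags:]
    return X, y

# "Training universe": 10 individual stocks NOT in the ETF.
train_stocks = [make_prices() for _ in range(10)]
X_train = np.vstack([lagged_features(p)[0] for p in train_stocks])
y_train = np.concatenate([lagged_features(p)[1] for p in train_stocks])

# Ordinary least squares with an intercept, via lstsq (no sklearn needed).
X_aug = np.column_stack([X_train, np.ones(len(X_train))])
coef, *_ = np.linalg.lstsq(X_aug, y_train, rcond=None)

# "ETF": an equal-weighted basket of 30 *different* synthetic stocks.
etf_prices = np.mean([make_prices() for _ in range(30)], axis=0)
X_etf, y_etf = lagged_features(etf_prices)
pred = np.column_stack([X_etf, np.ones(len(X_etf))]) @ coef

mse = np.mean((pred - y_etf) ** 2)
print(f"out-of-universe MSE on the ETF: {mse:.6f}")
```

Comparing that MSE against a trivial baseline (e.g. always predicting the ETF's mean return) on the same held-out span gives a first read on whether anything transfers; with purely random walks you'd expect it not to, which is exactly why it's a useful negative control before trying real prices.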

1