Recent comments in /f/MachineLearning
hellbattt t1_j9vhtlg wrote
Reply to comment by [deleted] in [D] Got invited to an ML final interview - have zero statistics/math background by [deleted]
You say in the title that you have no background in math, but then you say you studied it at uni and even did a thesis in stats.
[deleted] OP t1_j9vhnj1 wrote
Reply to comment by Adventurous_Memory18 in [D] Got invited to an ML final interview - have zero statistics/math background by [deleted]
[deleted]
Adventurous_Memory18 t1_j9vgugu wrote
Reply to [D] Got invited to an ML final interview - have zero statistics/math background by [deleted]
You’re getting downvoted because you clearly do have a stats background even though you say you don’t. Go for the interview and be honest. You don’t have to know everything; you just have to show you can learn and problem-solve.
[deleted] OP t1_j9ve3k7 wrote
Reply to comment by AquaBadger in [D] Got invited to an ML final interview - have zero statistics/math background by [deleted]
[deleted]
AquaBadger t1_j9vdu75 wrote
Reply to [D] Got invited to an ML final interview - have zero statistics/math background by [deleted]
Congrats on making it this far! If you don't mind, what are the job's expectations for you?
filipposML t1_j9vc6dw wrote
Reply to comment by mosquitoLad in [D] What is the correct term for a non-GAN system where two or more networks compete as part of training? by mosquitoLad
Indeed, the generative model produces data points, and the discriminative one classifies them together with the real data. I think that for your purposes it is easier to refer to your algorithm as "adversarial in nature". You are using games in which the algorithms are expected to reach a Nash equilibrium, but (presumably) there is no gradient flowing from one agent to another.
bmunday131 t1_j9vc362 wrote
Reply to [P] What are the latest "out of the box solutions" for deploying the very large LLMs as API endpoints? by johnhopiler
A Chassis + Modzy solution could get these models up and running as endpoints in a couple days max.
Here are some docs links and if at all interested, feel free to message me separately. Happy to discuss in more detail.
https://chassis.ml/
https://docs.modzy.com/docs/hugging-face
MinaKovacs t1_j9vbv7v wrote
Reply to [D] Got invited to an ML final interview - have zero statistics/math background by [deleted]
It depends on the job. Not all ML jobs involve building new low level tools. You might find it is 80% dataset classification and 20% Python code customization, with little or no statistics background required.
mosquitoLad OP t1_j9vazbs wrote
Reply to comment by filipposML in [D] What is the correct term for a non-GAN system where two or more networks compete as part of training? by mosquitoLad
My loose understanding of GANs is that one agent creates assets (e.g. images and audio), while another agent attempts to differentiate assets based on whether or not they were created by an agent. The results create automatically labeled data that can be used in subsequent training cycles, optimally leading to higher-quality asset output.
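A toy 1-D sketch of that generator/discriminator loop (numpy only, all numbers made up, nowhere near a practical GAN) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    # Clip to avoid overflow in exp for saturated discriminators.
    return 1.0 / (1.0 + np.exp(-np.clip(t, -30.0, 30.0)))

# "Real" data: samples from N(4, 1). The generator has to learn to mimic them.
def real_batch(n):
    return rng.normal(4.0, 1.0, size=n)

# Generator: x = w*z + b with z ~ N(0, 1). Discriminator: D(x) = sigmoid(a*x + c).
w, b = 0.1, 0.0
a, c = 0.1, 0.0
lr, n = 0.05, 64

for step in range(2000):
    # Discriminator ascent step: maximize log D(real) + log(1 - D(fake)).
    xr, z = real_batch(n), rng.normal(size=n)
    xf = w * z + b
    dr, df = sigmoid(a * xr + c), sigmoid(a * xf + c)
    a += lr * (np.mean((1 - dr) * xr) - np.mean(df * xf))
    c += lr * (np.mean(1 - dr) - np.mean(df))

    # Generator ascent step (non-saturating loss): maximize log D(fake).
    z = rng.normal(size=n)
    df = sigmoid(a * (w * z + b) + c)
    w += lr * np.mean((1 - df) * a * z)
    b += lr * np.mean((1 - df) * a)

# Mean of generated samples should have drifted toward the real mean of 4.
print(np.mean(w * rng.normal(size=10000) + b))
```

Note there's no explicit labeling step here: the "labels" are implicit in which batch the samples came from, which is what makes the data automatically labeled.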
I'm mixed about the IPM label. Predictability Minimization seems okay by itself; Inverse seems tacked on. Maybe something like Counter Predictability Exploitation?
[deleted] OP t1_j9vawpw wrote
Reply to [D] Got invited to an ML final interview - have zero statistics/math background by [deleted]
[deleted]
[deleted] OP t1_j9vaa3u wrote
Reply to comment by starfries in [D] Got invited to an ML final interview - have zero statistics/math background by [deleted]
[deleted]
[deleted] OP t1_j9va661 wrote
Reply to comment by starfries in [D] Got invited to an ML final interview - have zero statistics/math background by [deleted]
[deleted]
starfries t1_j9v9x20 wrote
Reply to [D] Got invited to an ML final interview - have zero statistics/math background by [deleted]
Of course you should go. And depending on the company and the role they actually have in mind that amount of math background could be plenty.
osedao OP t1_j9v4ip5 wrote
Reply to comment by Maximum-Ruin-9590 in [D] Is validation set necessary for non-neural network models, too? by osedao
Yeah, that makes sense to test models on folds they've never seen. But I have a small dataset, so I'm trying to find the best practice
memberjan6 t1_j9v3ay9 wrote
Reply to [D] A funny story from my interview by nobody0014
There was a time I was meeting with a new dev and "she" was the focus of his explanations, which were pretty long-winded. I didn't get a chance to interrupt his monologue, and I was spending too many cycles going back over his words, while he was still speaking, trying to figure out who "she" was.
Years later it occurred to me that he was being FANCY by calling his code "she" the whole time. Consequently, I didn't pick up anything meaningful from what he said.
It pays to speak plainly.
memberjan6 t1_j9v2aob wrote
Reply to comment by johnsmithbonds8 in [D] A funny story from my interview by nobody0014
Your ask is pretty big TBF.
Maximum-Ruin-9590 t1_j9v03zg wrote
Reply to comment by Maximum-Ruin-9590 in [D] Is validation set necessary for non-neural network models, too? by osedao
As mentioned, you need validation sets, i.e. some kind of folds, for most things in ML: cross-validation and tuning, just to name a couple. It is also smart to have folds to compare different models with each other.
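For example, a bare-bones tuning loop with folds plus a held-out test set could look like this (numpy-only sketch; the data, ridge model, and lambda grid are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# Hold out a test set first; the CV folds only ever touch the remaining data.
train_idx, test_idx = np.arange(0, 80), np.arange(80, 100)

def ridge_fit(X, y, lam):
    # Closed-form ridge regression: (X'X + lam*I)^-1 X'y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def mse(X, y, w):
    return np.mean((X @ w - y) ** 2)

def cv_score(lam, k=5):
    # Average validation error of `lam` across k folds of the training data.
    folds = np.array_split(train_idx, k)
    scores = []
    for i in range(k):
        val = folds[i]
        trn = np.concatenate([folds[j] for j in range(k) if j != i])
        scores.append(mse(X[val], y[val], ridge_fit(X[trn], y[trn], lam)))
    return np.mean(scores)

# Pick the hyperparameter on the folds, then report error ONCE on the test set.
best_lam = min([0.01, 0.1, 1.0, 10.0], key=cv_score)
w = ridge_fit(X[train_idx], y[train_idx], best_lam)
print(best_lam, mse(X[test_idx], y[test_idx], w))
```

The point is the separation of roles: folds choose the hyperparameter, the untouched test split estimates how the chosen model generalizes.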
Maximum-Ruin-9590 t1_j9uzp6m wrote
My coworker has just one dataset and does cross validation, tuning and comparing on train. He gets pretty good metrics that way.
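A quick way to see why metrics computed on the same data you tuned on are optimistic (toy sketch, made-up synthetic data):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 60)
y = np.sin(3 * x) + rng.normal(scale=0.2, size=60)
xtr, ytr, xte, yte = x[:30], y[:30], x[30:], y[30:]

def fit_and_score(deg):
    # Fit a degree-`deg` polynomial on train, score on both splits.
    coef = np.polyfit(xtr, ytr, deg)
    tr = np.mean((np.polyval(coef, xtr) - ytr) ** 2)
    te = np.mean((np.polyval(coef, xte) - yte) ** 2)
    return tr, te

for deg in (1, 3, 9, 15):
    tr, te = fit_and_score(deg)
    print(f"degree {deg:2d}: train MSE {tr:.3f}  test MSE {te:.3f}")
```

Train MSE only goes down as the model gets more flexible, so picking the "best" model on the training data rewards the overfit one; the held-out split tells a different story.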
Lyscanthrope t1_j9uz4bb wrote
Simple answer: yes, of course!
Middle ground: if you have any hyperparameters to choose, you need a validation set!
More detailed answer: it most likely depends on the assumptions you make about your data. How you do the model selection determines how you estimate the model's performance (i.e. how you estimate the generalisation error)... A lot of work can go in here!
Edit: this is my humble opinion, but one should always think about how to validate performance before modeling... It saves a lot of time. And please, always know your basics (statistics-wise)
bohreffect t1_j9uy9ko wrote
Reply to comment by Additional-Escape498 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
>What about when ChatGPT
I mean, we're facing even more important dilemmas right now with ChatGPT's safety rails. What is it allowed to talk about, or not? Which truths are verboten?
If the plurality of Internet content is written by these sorts of algorithms, with hardcoded "safety" layers, then the dream of truly open access to information that was the Internet will be that much closer to death.
Imnimo t1_j9ux0jn wrote
Reply to comment by Jinoc in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Well, I don't really think this is a semantic disagreement. I'm using their definition of the term.
If the issue is the danger of an AI arms race, what does a poorly-trained model have to do with it? Isn't the danger supposed to be that the model will be too strong, not too weak?
icedrift t1_j9uwkrx wrote
Reply to comment by CactusOnFire in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I agree with all of this but it's already been done. Social media platforms already use engagement driven algorithms that instrumentally arrive at recommending reactive content.
Cambridge analytica also famously preyed on user demographics to feed carefully tailored propaganda to swing states in the 2016 election.
theLastNenUser t1_j9uwhcd wrote
Reply to comment by Desticheq in [P] What are the latest "out of the box solutions" for deploying the very large LLMs as API endpoints? by johnhopiler
You will have to message them if you want to use the larger GPU boxes, and the autoscaling isn’t great for larger models. The customizability of the “handler.py” file is nice though
VirtualHat t1_j9vkpgd wrote
Reply to comment by Jinoc in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
That's a good question. To be clear, I believe there is a risk of an extinction-level event, just that it's unlikely. My thinking goes like this.
I think the most likely outcome is that there will be serious negative implications of AI (along with some great ones) but that they will be recoverable.
I also think some people overestimate how 'super' a superintelligence can be and how unstoppable an advanced AI would be. In a game like chess or Go, a superior player can win 100% of the time. But in a game with chance and imperfect information, a relatively weak player can occasionally beat a much stronger player. The world we live in is one of chance and imperfect information, which limits any agent's control over the outcomes. This makes EY's 'AI didn't stop at human-level for Go' analogy less relevant.
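As a rough illustration (numbers made up): a player who wins only 40% of individual games still takes a best-of-5 match nearly a third of the time.

```python
from math import comb

def match_win_prob(p, games=5):
    """Probability a player with per-game win probability p wins a best-of-`games` match."""
    need = games // 2 + 1  # games required to clinch the match
    return sum(comb(games, k) * p**k * (1 - p)**(games - k)
               for k in range(need, games + 1))

print(match_win_prob(0.40))       # best-of-5: ≈ 0.317
print(match_win_prob(0.40, 101))  # longer series shrink the underdog's chances
```

The flip side is that repeated play washes the luck out, so the advantage only holds while the "match" stays short.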