Recent comments in /f/MachineLearning

klop2031 t1_j7qcak1 wrote

Take a gander here: https://youtu.be/G08hY8dSrUY (at 8 min 9 sec). It seems like no one knows how SCOTUS will deal with it, but a good argument is that an AI experiences art much like humans do and generates new work by mixing in its own skill.

Further, it seems like the law may only be able to differentiate the two by the intelligence's physical makeup.

And to be honest, it seems like the only people mad about generative networks producing art are the artists about to lose their jobs.

Who cares if an AI can create art? If one only cares about the creative aspect, then humans can still make art too; no one is stopping them. But really it's about money.

0

currentscurrents t1_j7q8q5v wrote

So far nobody's figured out a good way to train them.

You can't easily do backprop, but you wouldn't want to anyway - the goal of SNNs is to run on ultra-low-power analog computers. For this you need local learning, where neurons learn by communicating only with adjacent neurons. There are some ideas (forward-forward learning, predictive coding, etc.), but so far nothing is as good as backprop.

There's a bit of a chicken-and-egg problem too. Without a good way to train SNNs, there's little interest in the specialized hardware - and without the hardware, there's little interest in good ways to train them. You can emulate them on regular computers but that removes all their benefits.
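To make the local-learning idea concrete, here is a minimal, hypothetical sketch of a forward-forward-style rule on plain rate units (not actual spiking neurons): each layer raises the "goodness" (sum of squared activations) of positive samples and lowers it for negative ones, using only quantities available at that layer - no backprop through the network. The class and parameter names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

class FFLayer:
    """One layer trained with a local, forward-forward-style rule.

    'Goodness' = sum of squared activations. Positive samples are
    pushed above a threshold, negative samples below it, using only
    this layer's own input and output (no global backprop)."""

    def __init__(self, n_in, n_out, lr=0.03, threshold=2.0):
        self.W = rng.normal(0.0, 0.1, (n_in, n_out))
        self.lr = lr
        self.threshold = threshold

    def forward(self, x):
        return np.maximum(0.0, x @ self.W)  # ReLU activations

    def local_update(self, x, positive):
        h = self.forward(x)
        goodness = np.sum(h ** 2)
        # d(goodness)/dW depends only on the local input x and output h
        grad = 2.0 * np.outer(x, h)
        if positive and goodness < self.threshold:
            self.W += self.lr * grad   # gradient ascent on goodness
        elif not positive and goodness > self.threshold:
            self.W -= self.lr * grad   # gradient descent on goodness
        return goodness

# Repeated positive-phase updates should raise goodness for this sample
layer = FFLayer(4, 8)
x_pos = np.array([1.0, 0.5, -0.3, 0.8])
before = np.sum(layer.forward(x_pos) ** 2)
for _ in range(50):
    layer.local_update(x_pos, positive=True)
after = np.sum(layer.forward(x_pos) ** 2)
```

The point of the sketch is the locality: each layer could be trained on hardware that only sees its own inputs and outputs, which is the property the analog/neuromorphic crowd cares about.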

3

answersareallyouneed t1_j7q83qv wrote

I’d add lectures on MAP/MLE, the bias-variance trade-off, model interpretability, common pitfalls (e.g. concept drift), and (maybe) building ML systems.

I’d skip the lectures on reinforcement learning and GANs and maybe add a lecture on recommender systems. I’d say you need quite a bit of knowledge of both of those topics before you can actually solve real/practical problems.

Honestly, 16 weeks isn’t a lot of time to learn/digest all of this material in depth. I’d focus a lot more on the practical.
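On the MAP/MLE point: the contrast fits in a few lines for the textbook case of estimating a Gaussian mean with known variance under a Gaussian prior. The numbers below are made up for illustration; the closed-form MAP estimate is standard conjugate-prior math (a precision-weighted average of the sample mean and the prior mean).

```python
import numpy as np

# MLE vs. MAP for the mean of a N(mu, sigma^2) likelihood with known
# sigma, under a N(0, tau^2) prior on mu. Small sample on purpose,
# so the prior visibly shrinks the estimate.
rng = np.random.default_rng(42)
true_mu, sigma, tau = 3.0, 1.0, 0.5
x = rng.normal(true_mu, sigma, size=5)
n = len(x)

mu_mle = x.mean()  # MLE: just the sample mean
# MAP: the MLE shrunk toward the prior mean (0 here), weighted by precisions
mu_map = (n / sigma**2) * x.mean() / (n / sigma**2 + 1 / tau**2)
```

With only five points and a tight prior, the MAP estimate sits well below the MLE; as n grows, the data term dominates and the two converge - which is exactly the bias-variance story in miniature.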

1

z_fi t1_j7q0h1g wrote

A typical machine learning curriculum should cover the following topics:

Introduction to machine learning

Linear Regression

Logistic Regression

Decision Trees and Random Forests

Naive Bayes

k-Nearest Neighbors (k-NN)

Support Vector Machines (SVMs)

Neural Networks

Convolutional Neural Networks (CNNs)

Recurrent Neural Networks (RNNs)

Generative Adversarial Networks (GANs)

Clustering (K-means, Hierarchical)

Dimensionality Reduction (PCA, t-SNE)

Ensemble Methods

Model evaluation and selection

Hyperparameter tuning

Regularization

Bias-Variance Trade-off

Overfitting and Underfitting

Model interpretability and explainability
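For the earliest topic on the list, linear regression, the whole fitting step is a few lines of numpy via the normal equations. This is a generic toy example (data and coefficients are invented), not tied to any particular course.

```python
import numpy as np

# Fit y = w0 + w1 * x by least squares via the normal equations:
# solve (X^T X) w = X^T y rather than inverting X^T X explicitly.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.uniform(-1.0, 1.0, 100)])  # bias + feature
true_w = np.array([2.0, -3.0])
y = X @ true_w + rng.normal(0.0, 0.1, 100)  # noisy targets

w_hat = np.linalg.solve(X.T @ X, X.T @ y)   # recovers roughly [2, -3]
```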

1

sonofmath t1_j7q03db wrote

Reply to comment by mr_house7 in [D] List of RL Papers by C_l3b

Well... kind of. For courses I would recommend Silver's course, followed by Levine's course, both of which are available on YouTube (besides reading the Sutton-Barto book). But besides the reading list, it also provides a detailed explanation of the most important model-free algorithms, as well as code implementations that are meant to be as easy to understand as possible. If you want performant code for research/personal projects, I would not recommend Spinning Up, but it is a great way to learn how the algorithms are implemented.

4

iron_proxy t1_j7pyvou wrote

I would start with Kaggle's RL course. It's a good intro and has links to David Silver's lecture series and Sutton and Barto's textbook. Both are excellent introductions to RL theory.

5

VelveteenAmbush t1_j7pxngk wrote

Yes, 100% agree. This "can we coerce the model into saying something bad" is just a game that journalists play to catastrophize new technology and juice their engagement metrics. There's bad stuff on the internet, too, and you can find it with search engines. We still use search engines because they're incredibly useful.

The embarrassing part is that Google was so afraid of these BS stories that they kept LaMDA stuck in a warehouse for over two years while OpenAI and Microsoft lapped them.

7

vannak139 t1_j7puon5 wrote

How you would approach this really depends on a few things. The most important question is: do you have the target data you want to get out of the network? It is possible, in some cases, to highlight regions of interest using only sample-level classification data, but that is usually very context-specific. If you have target data where these regions are already specified, a normal supervised learning method for waveforms should be perfectly workable, and will likely use 1D CNNs.
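To illustrate the core operation a 1D CNN applies to a waveform, here is a plain-numpy sketch that slides a kernel over a signal and thresholds the response into a per-timestep region mask. The kernel is hand-set (a moving-average energy detector) as a stand-in for a learned filter, and the signal is synthetic; a real supervised setup would learn the filters from the labeled regions.

```python
import numpy as np

def conv1d(signal, kernel):
    """'Same'-padded 1D cross-correlation (what DL libraries call
    convolution): one score per timestep of the input signal."""
    pad = len(kernel) // 2
    padded = np.pad(signal, pad)
    return np.array([padded[i:i + len(kernel)] @ kernel
                     for i in range(len(signal))])

# A burst-like waveform: a region of interest embedded in silence
signal = np.zeros(100)
signal[40:60] = np.sin(np.linspace(0.0, 6 * np.pi, 20))

# Hand-set moving-average energy kernel stands in for a learned filter
scores = conv1d(signal ** 2, np.ones(9) / 9)
region = scores > 0.1  # per-timestep mask highlighting the burst
```

A trained 1D CNN stacks many such filters with nonlinearities between them, but the region-highlighting output has the same shape: one score (or class probability) per timestep.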

2

ElectroNight t1_j7poggj wrote

Meh, the size of a research team does not strongly correlate with outcome quality or innovation. Furthermore, bulky teams can reinforce momentum on an approach that turns out to be a dead end long term, while small teams elsewhere start from a completely orthogonal approach and sometimes truly innovate. I'm not convinced Google has the right approach for the long term, organizationally or technically. Not saying ChatGPT is a Google killer either, yet.

0