Recent comments in /f/MachineLearning

ComplexColor t1_j8kaasf wrote

You're not making a lot of sense. It's not clear you understand what software is, what an AI is, what encryption is. Why do you think encrypting an AI would affect it running? Do you have some specific encryption in mind?


Are you just high?

2

WikiSummarizerBot t1_j8jrghx wrote

Homomorphic encryption

>Homomorphic encryption is a form of encryption that allows computations to be performed on encrypted data without first having to decrypt it. The resulting computations are left in an encrypted form which, when decrypted, result in an output that is identical to that produced had the operations been performed on the unencrypted data. Homomorphic encryption can be used for privacy-preserving outsourced storage and computation. This allows data to be encrypted and out-sourced to commercial cloud environments for processing, all while encrypted.
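For a concrete feel for "computations on encrypted data," here is a toy sketch: textbook (unpadded) RSA happens to be multiplicatively homomorphic, so multiplying two ciphertexts decrypts to the product of the two plaintexts. This is illustrative only, with toy parameters; it is neither secure nor fully homomorphic, and real schemes (e.g. BFV, CKKS) are far more involved.

```python
# Toy demonstration of a homomorphic property: textbook (unpadded) RSA
# is multiplicatively homomorphic, i.e. Enc(a) * Enc(b) decrypts to a * b.
# Insecure and illustrative only -- real homomorphic encryption schemes
# are far more involved.

p, q = 61, 53                       # tiny primes, for illustration only
n = p * q                           # RSA modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = 7, 12
product_of_ciphertexts = (encrypt(a) * encrypt(b)) % n
# The product was computed entirely on ciphertexts:
assert decrypt(product_of_ciphertexts) == (a * b) % n  # 84
```

The catch, as the thread notes, is cost: doing this for every operation in a neural network multiplies the work by orders of magnitude.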


1

The-Last-Lion-Turtle t1_j8jrena wrote

It could work with a heavy efficiency penalty.

https://en.m.wikipedia.org/wiki/Homomorphic_encryption

Though I don't think gradient descent will select for something like this.

Far more likely is that it obfuscates how it works: even though nothing is encrypted, we learn little with the interpretability tools we have, and we would have an extremely difficult time verifying that something is absent.

−1

BrotherAmazing t1_j8jd23p wrote

Yes.

“SoTA” is also often ill-defined and while important, can sometimes be a bit overhyped IMO.

Most practitioners and engineers want something that is as good as it can be or is above some threshold in accuracy, given constraints that can often be severe. If a “SoTA” approach cannot meet these real-world constraints, I would argue it’s not “SoTA” for that particular problem of interest.

If you have something that performs very well under such real-world constraints and can demonstrate value to the practitioner, it should be considered for publication by the editors.

3

SleekEagle t1_j8ix4fz wrote

Authors publish papers on research, experiments, findings, etc. They do not always release the code for the models they are studying.

The lucidrains repos implement the models, creating open-source implementations of the research.

The next step would then be to train the model, which requires a lot more than just the code (most notably, money). I assume you're referring to these trained weights when you say "the needed AI model". Training even one of these models would require a huge amount of time and money for a team, never mind a single person, let alone a whole portfolio of them.

For this reason, it's not very reasonable to expect lucidrains or any other person to train these models - the open-source implementations are a great contribution on their own!

9

perta1234 t1_j8iqbah wrote

There is the claim that any system can be (approximately) reverse engineered if one has access to its outputs. Are those, too, hidden from the public?

What is "best" is subjective. Just last week I was reading that even a moderate fitness-related interest surfaces quite unhealthy content very quickly. But it has to be better than Amazon's system, anyway.

2

BossOfTheGame t1_j8ikmcj wrote

Because you have a small batch size, my feeling is that you probably want a very small dropout rate on the important items, if only to decrease the chance the network overfits to them. Maybe 1 in 100 batches excludes the important item and the rest include it. But perhaps it doesn't matter.
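A minimal sketch of that batching idea, with hypothetical names (`make_batch`, `regular_pool`, `important_item` are not from the original thread): the important sample is included in roughly 99 of every 100 batches and dropped in the rest.

```python
import random

def make_batch(regular_pool, important_item, batch_size=8, include_prob=0.99):
    """Build one batch; the important item appears with probability
    include_prob (~99 of every 100 batches), the rest is random sampling."""
    if random.random() < include_prob:
        batch = random.sample(regular_pool, batch_size - 1) + [important_item]
        random.shuffle(batch)  # avoid a fixed position for the important item
    else:
        batch = random.sample(regular_pool, batch_size)
    return batch

pool = list(range(100))
batches = [make_batch(pool, "IMPORTANT") for _ in range(1000)]
frac = sum("IMPORTANT" in b for b in batches) / len(batches)
# frac should land close to 0.99
```

Whether this beats simply including the item every time is an empirical question, as the comment says.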

3