Recent comments in /f/MachineLearning

melodyze t1_j7j6h6t wrote

The LaMDA paper has some interesting side notes at the end about training the model to dynamically query a knowledge graph for context at inference time and stitch the result back in, retrieving ground truth. That could also let the model's state change at runtime without requiring constant retraining.
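Conceptually it's a retrieve-then-generate loop. Here's a toy sketch of the idea in Python, where every name (FACTS, query_kg, answer) is a hypothetical stand-in rather than anything from the LaMDA toolset:

```python
# All names here are hypothetical stand-ins, not the LaMDA toolset API.
FACTS = {("France", "capital"): "Paris"}  # toy stand-in for a live knowledge graph

def query_kg(entity: str, relation: str):
    """Toy lookup; a real system would hit a maintained KG endpoint."""
    return FACTS.get((entity, relation))

def answer(question: str) -> str:
    # Toy "query planner"; in the paper this step is learned by the model.
    if question == "What is the capital of France?":
        fact = query_kg("France", "capital")
        if fact is not None:
            # Stitch the retrieved ground truth back into the generated reply.
            return f"The capital of France is {fact}."
    return "I'm not sure."

print(answer("What is the capital of France?"))  # -> The capital of France is Paris.
```

The point is that the fact lives outside the model's weights, so updating the knowledge graph updates the answers with no retraining.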

They are better positioned to deal with that problem than ChatGPT, as Google already maintains what is almost certainly the world's most complete and well-maintained knowledge graph.

But yeah, while I doubt they have the confidence they would really want there, I would be pretty shocked if their tool weren't considerably better at avoiding false factual claims.

1

drooobie t1_j7j5ubo wrote

The voice assistants Google Home / Alexa / Siri are certainly made obsolete by ChatGPT, but I'm not so sure about search. There is definitely a distinction between "find me an answer" and "tell me an answer", so it will be interesting to see the differences between ChatGPT and whatever Google spits out for search.

4

username-requirement t1_j7j5ihu wrote

The critical factor to consider is whether the computation spends its time in Python code or in C/C++.

Many Python language constructs are quite slow, which is why libraries like NumPy exist. The program spends relatively little time in the Python code, which merely acts as an interpreted, rapid-to-modify "glue" between the compiled C/C++ library functions.

In the case of TensorFlow and PyTorch, virtually all of the computation is done in C/C++, and Python basically acts as a highly flexible configuration language for the setup.
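A minimal sketch of that split, timing the same sum of squares once in the interpreter and once inside NumPy's compiled loops (exact timings will vary by machine, but the gap is typically one to two orders of magnitude):

```python
import time
import numpy as np

x = list(range(1_000_000))
a = np.arange(1_000_000)

t0 = time.perf_counter()
s_py = sum(v * v for v in x)   # every iteration runs in the Python interpreter
t1 = time.perf_counter()
s_np = int((a * a).sum())      # the loop runs in NumPy's compiled C code
t2 = time.perf_counter()

assert s_py == s_np
print(f"pure Python: {t1 - t0:.4f}s  |  NumPy: {t2 - t1:.4f}s")
```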

18

chiaboy t1_j7j2bwp wrote

I agree.

They weren’t shocked per se, but OAI is clearly on their radar.

Not entirely unlike during COVID, when Zoom taught most Americans about web conferencing. Arguably good for the entire space, but the company that captured the public imagination probably didn’t deserve all the accolades.

So the question for Google and other responsible AI companies is how to capitalize on the consumer awareness/adoption, but to do it in a way that acknowledges the real constraints (which OAI is less concerned with). MSFT is already running into some of those constraints vis-à-vis the partnership (interesting to see Satya get out over his skis a little; that’s not his usual MO).

4

jlaw54 t1_j7j1k33 wrote

I agree with parts of what you are saying here.

That said, I think they were “prepared” for this in a very theoretical and abstract sense. I don’t think they were running around aimlessly like fools at Google HQ.

But that doesn’t mean it didn’t create a real shock to their system. Both can be true at once. Humans trend towards black-and-white absolutes, when the ground truth is most often grey.

1

currentscurrents t1_j7iy068 wrote

The exception Google Images got is pretty narrow and only applies to their role as a search engine. Fair use is complex, depends on a lot of case law, and involves balancing several factors.

One of the factors is "whether your use deprives the copyright owner of income or undermines a new or potential market for the copyrighted work." Google Images thumbnails clearly don't compete with the original work, but generative AI arguably does: the fact that it could automate art production is one of the coolest things about it.

That said, this is only one of several factors, so it's not a slam dunk for Getty either. The most important factor is how much you borrow from the original work. AI image generators borrow only abstract concepts like style, while Google was reproducing thumbnails of entire works.

Anybody who thinks they know how the courts will rule on this is lying to themselves.

22

Red-Portal t1_j7ixd0h wrote

High dimensionality does not necessarily mean more complex. In fact, it has been known for quite a while that going to higher dimensions makes various problems easier; datasets that are not linearly separable can suddenly become separable in higher dimensions, for example. Turning this up to 11, you basically get kernel machines: kernels implicitly embed the data into potentially infinite-dimensional spaces, and that approach was very successful before deep learning took over.
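A minimal sketch of the separability point on a toy dataset (plain NumPy; the explicit feature map here is a hypothetical illustration, not a full kernel machine): points inside versus outside a circle can't be split by a line in 2D, but adding one squared-radius feature makes a trivial threshold separate them.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 2))              # 2D points
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)  # label: outside the unit circle?

# Explicit feature map phi(x) = (x1, x2, x1^2 + x2^2): lift the 2D data into 3D.
Z = np.column_stack([X, (X ** 2).sum(axis=1)])

# In the lifted space the classes are separated by the plane z = 1,
# so a simple threshold on the new coordinate classifies perfectly.
pred = (Z[:, 2] > 1).astype(int)
print("accuracy in lifted space:", (pred == y).mean())  # 1.0
```

A kernel does the same lifting implicitly, without ever materializing the high-dimensional (possibly infinite-dimensional) coordinates.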

3