Recent comments in /f/MachineLearning
melodyze t1_j7j6h6t wrote
Reply to comment by st8ic in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
The LaMDA paper has some interesting sidelines at the end about training the model to dynamically query a knowledge graph for context at inference time and stitch the result back in, retrieving ground truth. That may also allow state to change at runtime without requiring constant retraining.
They are better positioned to deal with that problem than ChatGPT, since Google already maintains what is almost certainly the world's most complete and well-maintained knowledge graph.
But yeah, while I doubt they have the confidence they would really want there, I would be pretty shocked if their tool wasn't considerably better at not being wrong on factual claims.
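The retrieve-and-stitch idea above can be sketched in a few lines. This is a toy illustration, not the LaMDA mechanism: the dictionary "knowledge graph", the `augment_prompt` helper, and the entity/relation names are all hypothetical stand-ins for a real knowledge-graph lookup.

```python
# Toy "knowledge graph": (entity, relation) -> ground-truth value.
knowledge_graph = {
    ("Paris", "capital_of"): "France",
    ("Mount Everest", "height_m"): "8849",
}

def augment_prompt(question: str, entity: str, relation: str) -> str:
    """Look up a fact and splice it into the model's context before generation."""
    fact = knowledge_graph.get((entity, relation), "unknown")
    return f"Context: {entity} {relation} {fact}.\nQuestion: {question}"

prompt = augment_prompt("What country is Paris the capital of?", "Paris", "capital_of")
assert "France" in prompt
```

Because the fact is fetched at inference time, updating the graph updates the model's answers without any retraining.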
drooobie t1_j7j5ubo wrote
The voice assistants Google Home / Alexa / Siri are certainly made obsolete by ChatGPT, but I'm not so sure about search. There is definitely a distinction between "find me an answer" and "tell me an answer", so it will be interesting to see the differences between ChatGPT and whatever Google spits out for search.
username-requirement t1_j7j5ihu wrote
Reply to comment by thedarklord176 in Wouldn’t it be a good idea to bring a more energy efficient language into the ML world to reduce the insane costs a bit?[D] by thedarklord176
The critical factor to consider is whether the computation spends its time in Python code or in C/C++.
Many Python language constructs are quite slow, which is why libraries like NumPy exist. The program spends relatively little time in the Python code, which merely acts as an interpreted, rapid-to-modify "glue" between the compiled C/C++ library functions.
In the case of TensorFlow and PyTorch, virtually all the computation is done in C/C++, and Python basically acts as a highly flexible configuration language for setup.
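A minimal sketch of the "glue" idea, assuming only NumPy is installed: both functions compute the same result, but the loop in the first runs in interpreted Python bytecode, while the loop in the second happens inside NumPy's compiled C code.

```python
import numpy as np

def add_python(a, b):
    # Every iteration of this comprehension executes interpreted bytecode.
    return [x + y for x, y in zip(a, b)]

def add_numpy(a, b):
    # One Python call; the elementwise loop runs in compiled C inside NumPy.
    return np.asarray(a) + np.asarray(b)

a = list(range(1000))
b = list(range(1000, 2000))
assert add_python(a, b) == add_numpy(a, b).tolist()
```

For small lists the difference is negligible, but as arrays grow the vectorized version pulls far ahead, because the per-element interpreter overhead is gone.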
blablanonymous t1_j7j50ur wrote
Reply to comment by po-handz in [N] GitHub CEO on why open source developers should be exempt from the EU’s AI Act by EmbarrassedHelp
Of course they can. Ok you’re just trolling at this point. Good luck
netkcid t1_j7j4bhc wrote
Reply to comment by mugbrushteeth in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
They're oh so bad at connecting tech to the users too...
Google is about to become HotBot or Ask Jeeves or ...
supersoldierboy94 OP t1_j7j471i wrote
Reply to comment by luckymethod in [D] Yann Lecun seems to be very petty against ChatGPT by supersoldierboy94
Thanks for contributing to this discussion the way you contribute to the field. Salute.
luckymethod t1_j7j41yx wrote
He's right and you're full of it.
[deleted] t1_j7j3os3 wrote
[deleted] t1_j7j3nkn wrote
[deleted] t1_j7j3cto wrote
Reply to comment by bortlip in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
[deleted]
[deleted] t1_j7j37nu wrote
Reply to comment by yeluapyeroc in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
[deleted]
thedarklord176 OP t1_j7j35wu wrote
Reply to comment by The-Last-Lion-Turtle in Wouldn’t it be a good idea to bring a more energy efficient language into the ML world to reduce the insane costs a bit?[D] by thedarklord176
But isn’t everything in Python built on C? By that logic I’d think it would make no difference, because it’s still Python. Not saying you’re wrong; I don’t work in AI, I’m just curious.
[deleted] t1_j7j2gb9 wrote
Reply to comment by currentscurrents in [N] Getty Images sues AI art generator Stable Diffusion in the US for copyright infringement by Wiskkey
[deleted]
chiaboy t1_j7j2bwp wrote
Reply to comment by jlaw54 in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
I agree.
They weren’t shocked per se, however clearly OAI is on their radar.
Not entirely unlike during COVID, when Zoom taught most Americans about web conferencing. Arguably good for the entire space, but the company in the public imagination probably didn’t deserve all the accolades.
So the question for Google and other responsible AI companies is how to capitalize on the consumer awareness/adoption, but do it in a way that acknowledges the real constraints (that OAI is less concerned with). MSFT is already running into some of those constraints via the partnership (interesting to see Satya get over his skis a little; that’s not his usual MO).
The-Last-Lion-Turtle t1_j7j1qxl wrote
Reply to Wouldn’t it be a good idea to bring a more energy efficient language into the ML world to reduce the insane costs a bit?[D] by thedarklord176
Pytorch is written in C++ and CUDA.
Python is really just an interface, with a minimal contribution to the execution time.
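A small sketch of the point above, assuming a standard PyTorch install: the Python lines are just dispatch, and the matrix multiply itself executes in PyTorch's compiled C++ (or CUDA) kernels.

```python
import torch  # assumes PyTorch is installed

x = torch.ones(3, 3)
# The `@` below is one Python-level call; the actual multiply runs in
# compiled C++/CUDA, so Python contributes almost nothing to the runtime.
y = x @ x
assert float(y[0, 0]) == 3.0  # dot product of two all-ones length-3 rows
```

Swapping the Python front end for another language would leave these kernels, and hence almost all of the energy cost, unchanged.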
jlaw54 t1_j7j1k33 wrote
Reply to comment by chiaboy in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
I agree with threads of what you are saying here.
That said, I think they were “prepared” for this in a very theoretical and abstract sense. I don’t think they were running around like fools at Google HQ aimlessly.
But that doesn’t mean it didn’t inherently create a shock to their system in real terms. Both can have some truth. Humans trend towards black and white absolutes, when the ground truth is most often grey.
joexner t1_j7j17v2 wrote
Reply to comment by ginger_beer_m in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
Like this one
CKtalon t1_j7j14ab wrote
Reply to Wouldn’t it be a good idea to bring a more energy efficient language into the ML world to reduce the insane costs a bit?[D] by thedarklord176
Most inference/MLOps solutions don’t really use Python, despite it being used to develop the model.
Stuff like Nvidia’s Triton Inference Server is used to speed things up.
[deleted] t1_j7j0wfo wrote
Reply to comment by currentscurrents in [N] Getty Images sues AI art generator Stable Diffusion in the US for copyright infringement by Wiskkey
[removed]
ThrillHouseofMirth t1_j7j0s9c wrote
Reply to [N] Getty Images sues AI art generator Stable Diffusion in the US for copyright infringement by Wiskkey
The assertion that they're a "competing business" is going to be very hard to convince a judge of.
PredictorX1 t1_j7izzeg wrote
Reply to Wouldn’t it be a good idea to bring a more energy efficient language into the ML world to reduce the insane costs a bit?[D] by thedarklord176
Just to be clear, deep learning is the energy consumer. "Shallow" machine learning (logistic regression, multilayer perceptron, tree induction, etc.) and related technologies cost pennies to fit.
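A quick sketch of how cheap a "shallow" fit is, assuming scikit-learn and NumPy are installed (the synthetic data and model settings here are illustrative, not from the original comment): fitting a logistic regression on a small dataset takes a fraction of a second on a laptop CPU.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression  # assumes scikit-learn is installed

# Tiny synthetic binary problem: the label depends linearly on two features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Fitting this is nearly instantaneous, in sharp contrast to training a deep net.
clf = LogisticRegression().fit(X, y)
assert clf.score(X, y) > 0.9
```

The contrast with deep learning is not the language; it is the sheer number of floating-point operations the model class requires.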
clueless1245 t1_j7iz62v wrote
Reply to comment by currentscurrents in [N] Getty Images sues AI art generator Stable Diffusion in the US for copyright infringement by Wiskkey
Got it, thanks! Yeah, I guess it's more complicated than I thought.
currentscurrents t1_j7iy068 wrote
Reply to comment by [deleted] in [N] Getty Images sues AI art generator Stable Diffusion in the US for copyright infringement by Wiskkey
The exception Google Images got is pretty narrow and only applies to their role as a search engine. Fair use is complex, depends on a lot of case law, and involves balancing several factors.
One of the factors is "whether your use deprives the copyright owner of income or undermines a new or potential market for the copyrighted work." Google Image thumbnails clearly don't compete with the original work, but generative AI arguably does - the fact that it could automate art production is one of the coolest things about it.
That said, this is only one of several factors, so it's not a slam dunk for Getty either. The most important factor is how much you borrow from the original work. AI image generators borrow only abstract concepts like style, while Google was reproducing thumbnails of entire works.
Anybody who thinks they know how the courts will rule on this is lying to themselves.
Red-Portal t1_j7ixd0h wrote
Reply to Does the high dimensionality of AI systems that model the real world tell us something about the abstract space of ideas? [D] by Frumpagumpus
High dimensionality does not necessarily mean more complex. In fact, it has been known for quite a while that going to higher dimensions makes various problems easier; non-linearly separable datasets suddenly become separable in higher dimensions for example. Turning this to 11, you basically get kernel machines. Kernels embed the data into potentially infinite dimensional spaces, and that has been very successful before deep learning took over.
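The separability point above can be shown with a textbook one-dimensional example, using only NumPy: no single threshold separates the classes on the line, but after the explicit feature map phi(x) = (x, x²) a horizontal line does.

```python
import numpy as np

# 1-D data: class 1 inside [-1, 1], class 0 outside.
# No single threshold on x separates the two classes.
x = np.array([-3.0, -2.0, -0.5, 0.0, 0.5, 2.0, 3.0])
y = (np.abs(x) <= 1).astype(int)

# Lift to 2-D with phi(x) = (x, x^2). In this space the classes are
# separated by the linear boundary x2 = 1.
phi = np.column_stack([x, x**2])
pred = (phi[:, 1] <= 1).astype(int)
assert (pred == y).all()
```

A kernel machine does the same thing implicitly: the kernel evaluates inner products in the lifted (possibly infinite-dimensional) space without ever constructing phi explicitly.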
WokeAssBaller t1_j7j6u6f wrote
Reply to comment by mugbrushteeth in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
I think Google wins this race in the end, seeing ChatGPT be plugged into crappy Microsoft products tells me where it is heading