Recent comments in /f/MachineLearning

morecoffeemore t1_j595omy wrote

Dumb question, but how do I know ChatGPT is not just copy/pasting from the web?

Tried ChatGPT for the first time. Seems cool, but how do I know it's not just copying something a person wrote on the web?

I asked it for a speaker recommendation and it gave a good reply. It seems to me it could've just done a web search and copied someone's answer from the web as its reply.

Is there a way to test/use ChatGPT to prove to myself that it's not just copying and pasting from the web?

1

deviantkindle t1_j58zhpx wrote

Thanks for that link! Above all else, it was a great read.

Their description is actually what I was thinking of doing with my graph-connecting project, but I had never heard of "graph neural networks" before. Looks like a cool rabbit hole.

So much for doing taxes this weekend...

1

xquizitdecorum t1_j58xf15 wrote

Graph embeddings are mentioned below, but also look into graph convolutional networks and message-passing neural networks. These methods extend traditional CNNs to graph structures; after all, isn't an image just a lattice graph with pixels as nodes? As also mentioned below, these models can be used for node and edge prediction/completion, but they can also make predictions over an entire graph. I've worked on graph-based prediction for molecular modeling, where I do whole-graph classification.
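To give a flavor of whole-graph classification, here's a rough sketch of one message-passing layer plus a mean-pooled readout in plain PyTorch. It's a toy illustration under my own simplifying assumptions (dense adjacency matrix, mean aggregation), not my actual molecular model:

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """Toy message passing: average neighbor features, then transform."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x:   [num_nodes, in_dim]   node features
        # adj: [num_nodes, num_nodes] adjacency matrix (include self-loops)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        msg = adj @ x / deg                  # mean over neighbors
        return torch.relu(self.linear(msg))

class GraphClassifier(nn.Module):
    """Two message-passing layers, then pool nodes into one graph embedding."""
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.mp1 = MessagePassingLayer(in_dim, hidden_dim)
        self.mp2 = MessagePassingLayer(hidden_dim, hidden_dim)
        self.readout = nn.Linear(hidden_dim, num_classes)

    def forward(self, x, adj):
        h = self.mp2(self.mp1(x, adj), adj)
        graph_emb = h.mean(dim=0)            # whole-graph representation
        return self.readout(graph_emb)       # one prediction for the whole graph

# Toy usage: a 5-node "molecule" with 8-dim node features and 3 classes.
x = torch.randn(5, 8)
adj = torch.eye(5)
adj[0, 1] = 1.0
adj[1, 0] = 1.0
adj[1, 2] = 1.0
adj[2, 1] = 1.0
model = GraphClassifier(8, 16, 3)
logits = model(x, adj)
print(logits.shape)  # torch.Size([3])
```

In practice you'd reach for a library like PyTorch Geometric with sparse edge lists, but the aggregate-then-transform-then-pool pattern is the same.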

1

yaosio t1_j58dycj wrote

Microsoft added AI upscaling to Xbox cloud streaming in Edge and it works really well. At least I think it's AI upscaling; it could be something like FSR. Either way it looks really good. If Microsoft can do it for lag-sensitive gaming, then Google can do it for regular videos.

1

dancingnightly t1_j58anv8 wrote

That's a great resource, thanks. I've studied how this kind of autoregressive model works and found attention fascinating, but it's the graph-embedded entities you brought up here that sound exciting. I've only skim-read your paper so far, so perhaps I've made a mistake, but what I mean is:

For graph embeddings, could you dynamically capture different entities/tokens over a much broader context than common-sense reasoning statements and questions? i.e. could you do entailment over a whole chapter (or a knowledge-base entry with 50 triplets), where the graph embeddings meaningfully represent many entities (perhaps with sinusoidal positional embeddings for each additional text mention, in addition to the graph, just as in attention)?

[Why I'm interested: I presume it's impractical to scale this approach up in context, much like autoregressive models, because the number of edges grows quadratically if the graph is fully connected. But I'd love to know your thoughts: can a graph be strategically/sparsely connected, etc.?]
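Concretely, here's a toy sketch of the kind of thing I mean by per-mention positional embeddings on top of graph entity embeddings. It's purely my own illustration (hypothetical module and names), not something from your paper:

```python
import math
import torch
import torch.nn as nn

def sinusoidal_encoding(position, dim):
    """Standard Transformer-style sine/cosine encoding for one position."""
    enc = torch.zeros(dim)
    for i in range(0, dim, 2):
        freq = 1.0 / (10000 ** (i / dim))
        enc[i] = math.sin(position * freq)
        if i + 1 < dim:
            enc[i + 1] = math.cos(position * freq)
    return enc

class EntityMentionEmbedder(nn.Module):
    """Hypothetical: one graph-derived vector per entity, plus a positional
    encoding for each place that entity is mentioned in the chapter."""
    def __init__(self, num_entities, dim):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, dim)
        self.dim = dim

    def forward(self, entity_ids, mention_positions):
        # entity_ids:        [num_mentions] which entity each mention refers to
        # mention_positions: [num_mentions] token position of each mention
        base = self.entity_emb(entity_ids)
        pos = torch.stack([sinusoidal_encoding(p.item(), self.dim)
                           for p in mention_positions])
        return base + pos  # one vector per mention: entity identity + position

# Toy usage: entity 3 mentioned at token positions 10 and 452 of a chapter.
embedder = EntityMentionEmbedder(num_entities=100, dim=64)
out = embedder(torch.tensor([3, 3]), torch.tensor([10, 452]))
print(out.shape)  # torch.Size([2, 64])
```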

1

dancingnightly t1_j583tfa wrote

>we first generate a graph that can capture relationship between entities in the question

This is really impressive. What are your thoughts on the state of this kind of approach? Could it be extended from sentences to whole context paragraphs at some stage, with the entities dynamically being different graph items?
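To check I understand the quoted step, here's a naive toy version of "build a graph over the entities in the question" (co-occurrence edges only, with a hand-given entity list). I'm sure your actual method is far more sophisticated; this is just my own strawman:

```python
import itertools
import networkx as nx

def build_entity_graph(question, entities):
    """Connect every pair of known entities that appear in the question."""
    found = [e for e in entities if e.lower() in question.lower()]
    graph = nx.Graph()
    graph.add_nodes_from(found)
    for a, b in itertools.combinations(found, 2):
        graph.add_edge(a, b, relation="co-occurs")  # placeholder relation label
    return graph

# Toy usage with a hypothetical entity list.
g = build_entity_graph(
    "Did Marie Curie work with Pierre Curie in Paris?",
    entities=["Marie Curie", "Pierre Curie", "Paris"],
)
print(g.edges(data=True))
```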

1