Recent comments in /f/MachineLearning

trnka t1_j91sshb wrote

It doesn't look like it's headed that way, no. The set of possible next sentences is just too big to iterate over or to compute a softmax over, so it's broken down into words. In fact, the set of possible words is often too big too, so it's broken down into subwords with methods like byte pair encoding and WordPiece.

The key when dealing with predicting one word or subword at a time is to model long-range dependencies well enough so that the LM can generate coherent sentences and paragraphs.
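As a rough illustration of the byte pair encoding idea mentioned above, here's a minimal sketch. The toy corpus, the merge count, and the helper names are all made up for the example; real tokenizers handle symbol boundaries and pretokenization far more carefully.

```python
from collections import Counter

# Hypothetical toy corpus: each key is a word written as a space-separated
# sequence of symbols (initially single characters), each value its frequency.
corpus = {"l o w": 5, "l o w e r": 2, "n e w e s t": 6, "w i d e s t": 3}

def pair_counts(words):
    """Count adjacent symbol pairs, weighted by word frequency."""
    counts = Counter()
    for word, freq in words.items():
        symbols = word.split()
        for pair in zip(symbols, symbols[1:]):
            counts[pair] += freq
    return counts

def merge_pair(pair, words):
    """Fuse every occurrence of `pair` into one new symbol.
    (Plain str.replace is enough for this toy example; real tokenizers
    guard symbol boundaries.)"""
    old, new = " ".join(pair), "".join(pair)
    return {word.replace(old, new): freq for word, freq in words.items()}

merges = []
for _ in range(5):  # learn 5 merge rules
    counts = pair_counts(corpus)
    best = max(counts, key=counts.get)
    merges.append(best)
    corpus = merge_pair(best, corpus)
```

On this corpus the first two learned merges are ('e', 's') and then ('es', 't'), so a frequent ending like "est" becomes a single subword unit, and words the model has never seen can still be spelled out from such pieces.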

1

DamnYouRichardParker t1_j91sjdl wrote

Calling out low-quality posts and people asking stupid questions... in a post that is itself low quality/value, only critical of others, with no constructive suggestions or ideas on how to make things better.

This kind of post only adds to the low quality of the content...

Good and productive communities don't see newbies as a problem. They embrace them and share their field of interest and help make it grow and be better.

Your attitude is the exact opposite. Segregating people based on your own biased perception of what is acceptable will only hurt the community, and trying to limit its reach and the inclusion of others will prevent its wider adoption and better contributions.

9

DamnYouRichardParker t1_j91rvmq wrote

Reply to comment by Deep-Station-1746 in [D] Please stop by [deleted]

Yeah, we see this happen from time to time. People promote their field of interest, more and more people join in, and after a while it reaches a more mainstream level of popularity. Then the "og" purists of the subject get frustrated because "it's not the same anymore and people are degrading my passion..."

1

Borrowedshorts t1_j91rowo wrote

Reply to comment by Deep-Station-1746 in [D] Please stop by [deleted]

Let's not act like 2 million people signed up for this sub as anything other than machine learning being a buzzword. Pretty much every other sub dedicated to academic discourse has far fewer subscribers.

24

suflaj t1_j91rcef wrote

Reply to comment by [deleted] in [D] Please stop by [deleted]

I mean, the solution to this is already in use: if you want to stop the flood of similar threads, you just create a pinned megathread.

2

KPTN25 t1_j91q5hn wrote

Reply to comment by Optimal-Asshole in [D] Please stop by [deleted]

Yeah, that quote is completely irrelevant.

The bottom line is that LLMs are fundamentally incapable of producing sentience, regardless of 'intent'. Anyone claiming otherwise misunderstands the models involved.

4

loga_rhythmic t1_j91ptib wrote

Why would no one try to design an AI that is self-aware? That's literally the exact thing (or at least the illusion of it) that many AI researchers are trying to achieve. Just listen to interviews with guys like Sutskever, Schmidhuber, Karpathy, Sutton, etc.

36

gwern t1_j91ozq3 wrote

Reply to comment by Optimal-Asshole in [D] Please stop by [deleted]

> Some people would do it on purpose, and it can happen by accident.

Forget 'can'; if it ever does happen, it will happen by accident. I mean, we can't even 'design an AI' that learns the 'tl;dr:' summarization prompt; that just happens when you train a Transformer on Reddit comments, and we only discover it afterwards by investigating what GPT-2 can do. You think we'd be designing 'consciousness'?

15

HINDBRAIN t1_j91o4fh wrote

>I wonder what the mods are doing

I'm seeing some of them disappear after an hour or so, so they're probably deleting the posts?

4

daking999 t1_j91mtko wrote

Good to hear, thanks. That's when I'm teaching next, so that would be great. It's a fantastic resource for teaching ML, but it's frustrating when students hit the GPU cap. My university also won't let students pay for Colab Pro on their .edu Google accounts themselves, some legal nonsense. Some of them end up paying on their personal Google accounts, but then it's awkward needing to share the notebooks again (and I feel bad about the students paying when it should really be the school).

1

Tribalinstinct t1_j91lz2u wrote

Reply to comment by goolulusaurs in [D] Please stop by [deleted]

Sentience is the ability to sense and experience the world. Do you really need a study on an algorithm that predicts which words to combine to create believable sentences to understand that it's not sentient, let alone self-aware or intelligent? It has no sensors to interact with the wider world or perceive it, no further computation that actually processes the information or learns from it. It just scrapes and parses data, then stitches it together in a way that reads as human-like...

Cite me a study proving that you have a brain. It would be nice to have one, but it's not information needed by a person who understands the simplest biology and thus knows that there is in fact a brain there.

1

csreid t1_j91llzp wrote

Reply to comment by Optimal-Asshole in [D] Please stop by [deleted]

>Be the change you want to see in the subreddit.

The change I want to see is just enforcing the rules about beginner questions. I can't do that because I'm not a mod.

40

lemurlemur t1_j91l8r9 wrote

> Advertising low quality blogposts and services, etc, and asking stupid questions.

This isn't a terribly helpful or constructive way of improving this subreddit.

It is reasonable to criticize the quality of posts (constructively), but for example asking people to stop asking "stupid questions" is not helpful and has a chilling effect on discussions. Newbs and even experienced ML people will sit on their hands when they might actually have something to contribute.

15

gunshoes t1_j91kmyu wrote

Imo it's about the same. ChatGPT is just replacing the daily "do I need to know math, plz say no" post.

102

TrainquilOasis1423 t1_j91k8px wrote

Is the next step in LLMs to predict the entire next sentence?

From what I understand, LLMs mostly just predict the next word in a sentence. With just that, we have seen HUGE advancements and emergent behavior out of what could essentially be called level 1 of this tech. Would making a machine learning architecture that predicts the entire next sentence be the next logical step? After that, entire paragraphs? What would be the challenges of building such an architecture?
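The word-at-a-time loop described here can be sketched as follows. The tiny vocabulary and the random "model" are stand-ins for illustration only; a real LLM scores each token with a trained network.

```python
import math
import random

# Hypothetical toy vocabulary; a real LLM uses tens of thousands of subwords.
vocab = ["the", "cat", "sat", "on", "mat", "."]

def next_token_logits(context):
    """Stand-in for a trained network: a real model scores every vocabulary
    entry conditioned on the whole context."""
    return [random.random() for _ in vocab]

def softmax(logits):
    """Turn raw scores into a probability distribution over the vocabulary."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Generating word by word costs one softmax over len(vocab) entries per step.
# A single softmax over whole sentences of length n would need len(vocab)**n
# entries, which is why models don't predict sentences in one shot.
tokens = ["the"]
for _ in range(4):
    probs = softmax(next_token_logits(tokens))
    tokens.append(random.choices(vocab, weights=probs)[0])
```

This also hints at the challenge in the question: moving from words to sentences doesn't just grow the output space, it makes it combinatorially explode.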

1

waiting4omscs t1_j91jzc5 wrote

Reply to comment by quichemiata in [D] Please stop by [deleted]

So-called "stupid questions" could maybe be closed and hidden by a bot, with a recommendation to repost in the "Simple Questions" thread, to keep the subreddit content high quality?

6