Recent comments in /f/MachineLearning

chaitjo OP t1_j70k5tr wrote

In a sense, yes indeed!

For those who are curious, check out this blogpost from me: Transformers are Graph Neural Networks - https://thegradient.pub/transformers-are-graph-neural-networks/

It explores the connection between Transformer models, such as GPTs and other LLMs for Natural Language Processing, and Graph Neural Networks. It is now one of the top three most-read articles on The Gradient and is featured in coursework at Cambridge, Stanford, and elsewhere.
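The core equivalence the blog post describes can be checked in a few lines: single-head self-attention is attention-weighted message passing on a fully connected graph over the tokens. The sketch below (function names like `gnn_layer` are illustrative, not from any library) computes both forms and should give identical outputs when the edge list is the complete graph with self-loops.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Standard single-head self-attention over n token embeddings."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]), axis=-1)  # (n, n) attention weights
    return A @ V

def gnn_layer(X, edges, Wq, Wk, Wv):
    """Attention-weighted message passing over an explicit edge list.
    On the complete graph (every i->j pair, including self-loops),
    this reproduces self_attention exactly."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    out = np.zeros_like(V)
    for i in range(X.shape[0]):
        nbrs = [j for (a, j) in edges if a == i]           # neighbors send messages to i
        scores = np.array([Q[i] @ K[j] for j in nbrs]) / np.sqrt(K.shape[1])
        w = softmax(scores)                                 # per-neighbor attention weights
        out[i] = sum(wj * V[j] for wj, j in zip(w, nbrs))  # aggregate messages
    return out
```

Restricting `edges` to a sparse graph recovers ordinary GNN-style local aggregation, which is the sense in which Transformers are GNNs on a complete graph.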

4

CKtalon t1_j70ht51 wrote

Basically, job scopes will change due to the boost in efficiency.

Mediocre practitioners in any field may be pushed out or priced out by AI.

More domain experts will be needed to vet AI output and guide its improvement (e.g. via RLHF) for probably decades to come. Generalists are more likely to be replaced by AI over time.

−4

FedRCivP11 t1_j709ds5 wrote

Wouldn’t the sorts of signals our own planet emits make a good dataset for training a model to recognize the kinds of signals a civilization might generate? I’d assumed from the article that this is what they’d done. The key, it seems to me, is whether we can discern (not necessarily interpret) communications, perhaps encrypted, amid cosmic noise and natural phenomena. So train a model to distinguish any human signal from noise, and look in the bands we emit that are likely to survive the journey to our neighbors.

To make the data more useful, you could simulate phase shifting in datasets of our own EM communications. You might also want to simulate other phenomena that are likely to modify a signal from a neighboring civilization along the way.
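A toy version of this idea: synthesize a narrowband "human-like" tone in noise, augment it with a random phase offset and a small linear frequency drift (a crude stand-in for Doppler and propagation effects), and use a simple spectral statistic to separate signal from noise. Everything here is illustrative, assuming nothing about any real SETI pipeline; names like `synth_narrowband` are made up for this sketch.

```python
import numpy as np

def synth_narrowband(n=4096, f0=0.05, snr_db=10.0, seed=0):
    """Toy technosignature: a complex narrowband tone buried in Gaussian noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    tone = np.exp(2j * np.pi * f0 * t)
    noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    return 10 ** (snr_db / 20) * tone + noise

def augment(x, max_drift=1e-7, seed=None):
    """Simulate propagation effects: a random phase offset plus a small
    linear frequency drift (chirp), as a crude Doppler-style augmentation."""
    rng = np.random.default_rng(seed)
    t = np.arange(len(x))
    phase = rng.uniform(0, 2 * np.pi)
    drift = rng.uniform(-max_drift, max_drift)  # Hz per sample, normalized units
    return x * np.exp(1j * (phase + np.pi * drift * t ** 2))

def peak_to_mean(x):
    """Detector statistic: max/mean of the power spectrum.
    Narrowband signals score high; white noise stays low."""
    p = np.abs(np.fft.fft(x)) ** 2
    return p.max() / p.mean()
```

In a real setup the hand-built `peak_to_mean` statistic would be replaced by a learned classifier trained on many augmented examples, but the augmentation step plays the same role either way: teaching the detector to ignore phase and drift while keeping the narrowband structure.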

1