Recent comments in /f/MachineLearning

currentscurrents t1_j9rw3uy wrote

That's like saying we're wrong about aerodynamics and how birds fly because Aristotle was wrong about it and we'll understand flight very differently in 2000 years.

These articles don't represent the mainstream neuroscience position. The brain pretty clearly does use electrical impulses. You can stick in an electrode array and read them directly, or put someone in an fMRI and watch the resulting activity. It also pretty clearly uses chemical signalling, which you can alter with drugs. We've seen no structures that appear to perform quantum computation.

8

Imnimo t1_j9rvl16 wrote

No, a lot of his arguments strike me as similar to arguments from the 1800s about how some social trend or other spells doom in a generation or two. And then his followers spend their time confusing "Bing was mean to me" with "Bing is misaligned" (as opposed to "Bing is bad at its job") and start shouting "See? See? Alignment is impossible and it's already biting us!"

14

wind_dude t1_j9rv2vw wrote

Would you admit a theory may not be possible and then devote your life to working on it? Even if you don't believe it's possible, you're going to say it is, and eventually believe it. And the definitions do keep moving, with ever lower bars, as the media and companies sensationalise for clicks and funding.

−3

DigThatData t1_j9rux16 wrote

I think the whole "paperclip" metaphor describes problems that are already here. A lot of "alignment" discussion feels to me like passengers on a ship theorizing about what would happen if the ship became sentient, turned evil, and decided to crash into the rocks, all the while the ship has already crashed into the rocks and is taking on water. It doesn't matter if the ship turns evil in the future: it's already taking us down, whether it crashed into the rocks on purpose or not. See also: the contribution of social media recommendation systems to self-destructive human behaviors including political radicalization, stochastic terrorism, xenophobia, fascism, and secessionism. Oh yeah, also we're arguing over the safety of vaccines during an epidemic and still ignoring global warming, but for some reason public health and environmental hazards don't count as "x-risks".

5

wind_dude t1_j9ru6yc wrote

https://plato.stanford.edu/entries/qt-consciousness/

https://www.nature.com/articles/s41566-021-00845-4

https://www.nature.com/articles/440611a

https://phys.org/news/2022-10-brains-quantum.html

https://www.newscientist.com/article/mg22830500-300-is-quantum-physics-behind-your-brains-ability-to-think/


Considering that in 355 BC Aristotle thought the brain was a radiator, it's not a far leap to think we're wrong that it uses electrical impulses like a computer. And I'm sure after quantum mechanics there will be something else. Although we have far more understanding than 2000 years ago, we are very far from the understanding we will have in 2000 years.

−5

currentscurrents t1_j9rt3wq wrote

Quantum neural networks are an interesting idea, but our brain is certainly not sitting in a vat of liquid nitrogen, so intelligence must be possible without it.

The brain was created by an optimization process (evolution), and it's no coincidence that the entire field of machine learning is about the study of optimization processes too. It must be possible for intelligence to arise through optimization, and it does seem to be working better than anything else so far.

8

arg_max t1_j9rt2ew wrote

The thing is that the theory behind diffusion models is at least 40-50 years old. Forward diffusion is a discretization of a stochastic differential equation that transforms the data distribution into a normal distribution. People figured out in the 1970s that it is possible to reverse this process, i.e. to go from the normal distribution back to the data distribution using another SDE. The catch is that this reverse SDE contains the score function, the gradient of the log density of the data, and people just didn't know how to estimate that from data. Then some smart people came along, picked up the denoising score matching ideas from the 2000s, and did the necessary engineering to make it work with deep nets.
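The forward half of that story fits in a few lines. This is a toy sketch assuming a standard variance-preserving noise schedule (the linear-beta one popularized by DDPM), not anything specific to the comment: the closed-form marginal x_t = sqrt(ᾱ_t)·x_0 + sqrt(1−ᾱ_t)·ε lets you jump to any noise level directly.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # assumed linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)     # cumulative fraction of signal retained

def forward_diffuse(x0, t):
    """Sample x_t from q(x_t | x_0) in closed form (variance-preserving SDE)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = rng.standard_normal(10_000)        # toy "data" distribution
x_late = forward_diffuse(x0, T - 1)     # by the last step, x_T is ~ N(0, 1)
```

Reversing the process is where the score function comes in: each reverse step needs ∇_x log p_t(x), which is exactly what a denoising network is trained to estimate.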

The point I am making is that this problem was theoretically well understood a long time ago; it just took humanity many years to actually be able to compute it. But for AGI, we don't have such a recipe. There's not one equation hidden in some old math book that will suddenly get us AGI. Reinforcement learning is really the only approach I can think of, but even there I just don't see how we'd get there with the algorithms we are currently using.

7

VirtualHat t1_j9rsysw wrote

I work in AI research, and I see many of the points EY makes here in section A as valid reasons for concern. They are not 'valid' in the sense that they must be true, but valid in that they are plausible.

For example, he says, "We can't just build a very weak system." There are two papers that led me to believe this could be the case: All Else Being Equal Be Empowered, which shows that any agent acting to achieve a goal under uncertainty will need (all else being equal) to maximize its control over its environment; and the zero-shot learners paper, which shows that (very large) models trained on one task also seem to learn other tasks (or at least learn how to learn them). Both of these papers make me question the assumption that a model trained on one 'weak' task won't also learn more general capabilities.

Where I think I disagree is on the likely scale of the consequences. "We're all going to die" is an unlikely outcome. Most likely the upheaval caused by AGI will be similar in scale to previous upheavals, and I've yet to see a strong argument that bad outcomes will be unrecoverable.

59

royalemate357 t1_j9rsqd3 wrote

> We are not even remotely close to anything like actual brain functions.

Intelligence need not look anything remotely like actual brain function though, right? A plane's wings don't function anything like a bird's wings, yet it can still fly. In the same sense, why must intelligence not be algorithmic?

At any rate, I feel like saying that probabilistic machine learning approaches like GPT-3 are nowhere near intelligence is a bit of a stretch. If you continue scaling up these approaches, you get closer and closer to the entropy of natural language (or whatever other domain), and if you've learned the exact distribution of language, imo that would be "understanding".
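The entropy claim above can be made concrete with a toy example (an assumed two-token "language", not from the thread): a model q pays H(p) + KL(p‖q) bits per token on data drawn from p, so its loss bottoms out at the entropy of the data exactly when q matches p.

```python
import math

# True distribution p of a toy two-token "language".
p = {"a": 0.75, "b": 0.25}

def cross_entropy(p, q):
    """H(p, q) = -sum_x p(x) * log2 q(x), in bits per token."""
    return -sum(p[x] * math.log2(q[x]) for x in p)

entropy = cross_entropy(p, p)       # H(p): the irreducible floor

q_bad = {"a": 0.5, "b": 0.5}        # a miscalibrated "model" pays extra bits
q_perfect = dict(p)                 # matching p exactly reaches the floor

assert cross_entropy(p, q_bad) > entropy
assert math.isclose(cross_entropy(p, q_perfect), entropy)
```

Scaling up a language model drives its cross-entropy down toward that floor; whether hitting it constitutes "understanding" is exactly what the thread is debating.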

2

Langdon_St_Ives t1_j9rsn1f wrote

Or maybe downvotes because they’re stating the obvious. I didn’t downvote for that or any other reason. Just stating it as another possibility. I haven’t seen anyone here claim language models are actual AI, let alone AGI.

3

VirtualHat t1_j9rqmii wrote

This is very far from the current thinking in AI research circles. Everyone I know believes intelligence is substrate-independent and, therefore, could be implemented in silicon. The debate is really about what constitutes AGI and whether we're 10 or 100 years away, not whether it can be done at all.

8

gettheflyoffmycock t1_j9rqd5w wrote

Lol, downvotes. This subreddit has been completely overrun by non-engineers. I guarantee no one here has ever custom-trained a model or run inference outside of API calls. Crazy. Since ChatGPT, open-enrollment ML communities are so cringe.

2

Additional-Escape498 t1_j9rq3h0 wrote

EY tends to go straight to superintelligent AI robots making you their slave. I worry about problems that'll happen a lot sooner than that. What happens when we have semi-autonomous infantry drones? How much more aggressive will US/Chinese foreign policy get when China can invade Taiwan with Big Dog robots with machine guns attached? What about when ChatGPT is combined with Toolformer and can write to the internet instead of just reading it, and starts doxxing you when it throws a temper tantrum? What about when rich people can use something like that to flood social media with bots that spew disinformation about a political candidate they don't like?

But part of the lack of concern for AGI among ML researchers is that during the last AI winter we rebranded to machine learning because AI was such a dirty word. I remember as recently as 2015 at ICLR/ICML/NIPS you’d get side-eye for even bringing up AGI.

193