Recent comments in /f/MachineLearning
wind_dude t1_j9rwd41 wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
No, absolutely not. It's fear mongering about something we aren't even remotely close to achieving.
maxToTheJ t1_j9rwaum wrote
Reply to comment by Additional-Escape498 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I worry about a lot of bad AI/ML, built by interns, making decisions that have huge impact, like in the justice system, real estate, etc.
currentscurrents t1_j9rw3uy wrote
Reply to comment by wind_dude in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
That's like saying we're wrong about aerodynamics and how birds fly, because Aristotle was wrong about it and we'll understand flight very differently in 2000 years.
These articles don't represent the mainstream neuroscience position. The brain pretty clearly does use electrical impulses. You can stick in an electrode array and read them directly, or you can put someone in an EEG and watch the electrical activity. It also pretty clearly uses chemical signalling, which you can alter with drugs. We've seen no structures that appear to perform quantum computation.
[deleted] t1_j9rvxiq wrote
wind_dude t1_j9rvmbb wrote
Reply to comment by royalemate357 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
When they scale, they hallucinate more and produce more wrong information, thus arguably getting further from intelligence.
Imnimo t1_j9rvl16 wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
No, a lot of his arguments strike me as similar to arguments from the 1800s about how some social trend or another spells doom in a generation or two. And then his followers spend their time confusing "Bing was mean to me" with "Bing is misaligned" (as opposed to "Bing is bad at its job") and start shouting "See? See? Alignment is impossible and it's already biting us!"
[deleted] t1_j9rvjkr wrote
wind_dude t1_j9rvfo5 wrote
Reply to comment by royalemate357 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
That's just how tools are used, and has been since the dawn of time. You just want to be on the side with the largest club, the warmest fire, etc.
wind_dude t1_j9rv2vw wrote
Reply to comment by VirtualHat in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Would you admit a theory may not be possible and then devote your life to working on it? Even if you don't believe it, you're going to say it, and eventually believe it. And the definitions do keep moving, with ever lower bars, as the media and companies sensationalise for clicks and funding.
DigThatData t1_j9rux16 wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I think the whole "paperclip" metaphor describes problems that are already here. A lot of "alignment" discussion feels to me like passengers on a ship theorizing about what would happen if the ship became sentient, turned evil, and decided to crash into the rocks, all while the ship has already crashed into the rocks and is taking on water. It doesn't matter if the ship turns evil in the future: it's already taking us down, whether it crashed into the rocks on purpose or not. See also: the contribution of social media recommendation systems to self-destructive human behaviors including political radicalization, stochastic terrorism, xenophobia, fascism, and secessionism. Oh yeah, and we're arguing over the safety of vaccines during a pandemic and still ignoring global warming, but for some reason public health and environmental hazards don't count as "x-risks".
[deleted] t1_j9ruj0m wrote
Reply to comment by arg_max in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
[deleted]
wind_dude t1_j9ru6yc wrote
Reply to comment by currentscurrents in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
https://plato.stanford.edu/entries/qt-consciousness/
https://www.nature.com/articles/s41566-021-00845-4
https://www.nature.com/articles/440611a
https://phys.org/news/2022-10-brains-quantum.html

Considering that in 355 BC Aristotle thought the brain was a radiator, it's not a far leap to think we're wrong that it uses electrical impulses like a computer. And I'm sure after quantum mechanics there will be something else. Although we have far more understanding than 2000 years ago, we are very far from the understanding we will have in 2000 years.
currentscurrents t1_j9rt3wq wrote
Reply to comment by wind_dude in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Quantum neural networks are an interesting idea, but our brain is certainly not sitting in a vat of liquid nitrogen, so intelligence must be possible without it.
The brain was created by an optimization process (evolution) - it's no coincidence that the entire field of machine learning is the study of optimization processes too. It must be possible for intelligence to arise through optimization, and it does seem to be working better than anything else so far.
arg_max t1_j9rt2ew wrote
Reply to comment by Small-Fall-6500 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
The thing is that the theory behind diffusion models is at least 40-50 years old. Forward diffusion is a discretization of a stochastic differential equation that transforms the data distribution into a normal distribution. People figured out in the 1970s that it is possible to reverse this process, i.e., to go from the normal distribution back to the data distribution using another SDE. The catch is that this reverse SDE contains the score function, i.e., the gradient of the log density of the data, and people just didn't really know how to estimate that from data. Then some smart guys came along, found the ideas about denoising score matching from the 2000s, and did the necessary engineering to make it work with deep nets.
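To spell it out in the standard score-SDE notation (f and g here are just generic drift and diffusion coefficients, not anything model-specific): the forward process is

$$dx = f(x,t)\,dt + g(t)\,dW_t,$$

which pushes the data distribution toward a Gaussian, and the reverse-time process is

$$dx = \left[f(x,t) - g(t)^2\,\nabla_x \log p_t(x)\right]dt + g(t)\,d\bar{W}_t.$$

That $\nabla_x \log p_t(x)$ term is the score function; denoising score matching trains a network $s_\theta(x,t) \approx \nabla_x \log p_t(x)$ to approximate it, which is exactly the piece nobody knew how to get from data.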
The point I am making is that this problem was theoretically well understood a long time ago; it just took humanity many years to actually be able to compute it. But for AGI, we don't have such a recipe. There's not one equation hidden in some old math book that will suddenly get us AGI. Reinforcement learning really is the only approach I can think of, but even there I just don't see how we would get there with the algorithms we are currently using.
VirtualHat t1_j9rsysw wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I work in AI research, and I see many of the points EY makes here in section A as valid reasons for concern. They are not 'valid' in the sense that they must be true, but valid in that they are plausible.
For example, he says we can't just build a very weak system. Two papers led me to believe this could be the case: All Else Being Equal Be Empowered, which shows that any agent acting to achieve a goal under uncertainty will need (all else being equal) to maximize its control over the system, and the zero-shot learners paper, which shows that (very large) models trained on one task also seem to learn other tasks (or at least learn how to learn them). Both of these papers make me question the assumption that a model trained to learn one 'weak' task won't also learn more general capabilities.
Where I think I disagree is on the likely scale of the consequences. "We're all going to die" is an unlikely outcome. Most likely the upheaval caused by AGI will be similar to previous upheavals in scale, and I'm yet to see a strong argument that bad outcomes will be unrecoverable.
royalemate357 t1_j9rsqd3 wrote
Reply to comment by MinaKovacs in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
> We are not even remotely close to anything like actual brain functions.

Intelligence need not look anything like actual brain functions though, right? A plane's wings don't function anything like a bird's wings, yet it can still fly. In the same sense, why must intelligence not be algorithmic?
At any rate, I feel like saying that probabilistic machine learning approaches like GPT-3 are nowhere near intelligence is a bit of a stretch. If you continue scaling up these approaches, you get closer and closer to the entropy of natural language (or whatever other domain), and if you've learned the exact distribution of language, imo that would be "understanding".
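To spell out that last claim with the standard information-theory identity (nothing specific to GPT-3 here): the cross-entropy loss a language model minimizes decomposes as

$$H(p, q) = H(p) + D_{\mathrm{KL}}(p \,\|\, q) \ge H(p),$$

so the loss is bounded below by the entropy $H(p)$ of the true distribution, and the gap is exactly the KL divergence between the true distribution $p$ and the model's $q$. Driving the loss toward the entropy of natural language is the same thing as driving $q$ toward $p$.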
Langdon_St_Ives t1_j9rsn1f wrote
Reply to comment by gettheflyoffmycock in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Or maybe downvotes because they’re stating the obvious. I didn’t downvote for that or any other reason. Just stating it as another possibility. I haven’t seen anyone here claim language models are actual AI, let alone AGI.
memberjan6 t1_j9rsdvk wrote
Reply to [P] What are the latest "out of the box solutions" for deploying the very large LLMs as API endpoints? by johnhopiler
Cohere, deepset, ....
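Those are hosted options. For a rough sketch of the self-hosted route instead (the model name and settings below are placeholders, not recommendations, and a truly large LLM would also need multi-GPU sharding or a dedicated serving stack), it's basically wrapping the model in an HTTP endpoint:

# Minimal sketch of self-hosting a causal LM behind a POST /generate endpoint
# (run with: uvicorn serve_llm:app --port 8000).
import torch
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "EleutherAI/gpt-neo-1.3B"  # placeholder checkpoint, swap in your own

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)
model.eval()

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 64

@app.post("/generate")
def generate(req: GenerateRequest):
    # Tokenize, generate, and decode; no batching or streaming in this sketch.
    inputs = tokenizer(req.prompt, return_tensors="pt").to(device)
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=req.max_new_tokens)
    return {"completion": tokenizer.decode(output_ids[0], skip_special_tokens=True)}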
dpineo t1_j9rrdb2 wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
It's hard to worry about the "Terminator" dystopia when the "Elysium" dystopia is so much more imminent.
VirtualHat t1_j9rqmii wrote
Reply to comment by wind_dude in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
This is very far from the current thinking in AI research circles. Everyone I know believes intelligence is substrate independent and, therefore, could be implemented in silicon. The debate is really more about what constitutes AGI and if we're 10 years or 100 years away, not if it can be done at all.
SchmidhuberDidIt OP t1_j9rqdje wrote
Reply to comment by Tonkotsu787 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Thanks, I actually read this today. He and Richard Ngo are the two names I've come across most often for researchers who've thought deeply about alignment and hold views grounded in the literature.
gettheflyoffmycock t1_j9rqd5w wrote
Reply to comment by MinaKovacs in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Lol, downvotes. This subreddit has been completely overrun by non-engineers. I guarantee no one here has ever custom-trained and run inference with a model outside of API calls. Crazy. Since ChatGPT, open-enrollment ML communities are so cringe.
Additional-Escape498 t1_j9rq3h0 wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
EY tends to go straight to superintelligent AI robots making you their slave. I worry about problems that'll happen a lot sooner than that. What happens when we have semi-autonomous infantry drones? How much more aggressive will US/Chinese foreign policy get when China can invade Taiwan with BigDog robots with machine guns attached? What about when ChatGPT is combined with Toolformer and can write to the internet instead of just reading it, and can start doxxing you when it throws a temper tantrum? What about when rich people can use something like that to flood social media with bots that spew disinformation about a political candidate they don't like?
But part of the lack of concern for AGI among ML researchers is that during the last AI winter we rebranded to machine learning because AI was such a dirty word. I remember as recently as 2015 at ICLR/ICML/NIPS you’d get side-eye for even bringing up AGI.
SchmidhuberDidIt OP t1_j9rwh3i wrote
Reply to comment by arg_max in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
What about current architectures makes you think they won’t continue to improve with scale and multimodality, provided a good way of tokenizing? Is it the context length? What about models like S4/RWKV?