Recent comments in /f/MachineLearning
icedrift t1_j9s5640 wrote
Reply to comment by Additional-Escape498 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
What freaks me out the most are the social ramifications of AIs that pass as human to the majority of people. We're still figuring out how to interact with social media in a healthy way, and soon we're going to be interacting with entirely artificial content that we'll anthropomorphize and attribute to other humans. In the US we're dealing with a crisis of trust and authenticity; I can't imagine generative text models are going to help with that.
CactusOnFire t1_j9s4b0f wrote
Reply to comment by shoegraze in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
>We can say goodbye soon to a usable internet because power seeking people with startup founder envy are going to just keep ramping these things up.
This might seem like paranoia on my part, but I am worried about AI being leveraged to "drive engagement" by stalking and harassing people.
"But why would anyone consider this a salable business model?" It's no secret that the entertainment on a lot of different websites are fueled by drama. If someone were so inclined, AI and user metrics could be harvested specifically to find creative and new ways to sow social distrust, and in turn drive "drama" related content.
I.E. creating recommendation engines specifically to show people things that they assume will make them angry, specifically so they engage with it in great detail, so that a larger corpus of words will be exchanged that can be datamined for advertising analytics.
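To make that concrete, here's a toy sketch of the kind of ranker I'm describing; the "predicted_anger" signal, the item fields, and the weights are all entirely hypothetical:

```python
from dataclasses import dataclass

# Toy engagement-maximizing ranker. The "predicted_anger" signal and
# every field/weight here is hypothetical, purely for illustration.
@dataclass
class Item:
    title: str
    predicted_anger: float        # model's guess at how inflammatory this is
    predicted_reply_count: float  # model's guess at comment volume

def engagement_score(item: Item) -> float:
    # The only metric the system "cares about": expected words exchanged,
    # which (per the above) correlates with outrage.
    return 0.6 * item.predicted_anger + 0.4 * item.predicted_reply_count

feed = [
    Item("calm news summary", 0.1, 0.3),
    Item("rage-bait hot take", 0.9, 0.8),
    Item("mild product review", 0.2, 0.2),
]
ranked = sorted(feed, key=engagement_score, reverse=True)
print([item.title for item in ranked])  # the rage bait floats to the top
```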
ReasonablyBadass t1_j9s3yq1 wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I think the basic issue of AI alignment isn't AI. It's trying to figure out what our values are supposed to be and who gets to decide that.
royalemate357 t1_j9s2pf3 wrote
Reply to comment by DigThatData in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
hmm, i didn't realize that was the origin of the paperclip maximizer analogy, but it seems like you're right that some human had to tell it to make paperclips in the first place.
mc-powzinho t1_j9s2fhy wrote
Reply to [D] Simple Questions Thread by AutoModerator
I’m trying to pd.read_csv some large data files and put them into data frames for an NER project, but the kernel keeps crashing. This is in a VS Code Jupyter notebook, by the way. Please let me know what I can do instead, thanks.
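One thing I was considering is reading in chunks instead of all at once; a rough sketch of what I mean (the path, column names, and chunk size are placeholders for my actual setup):

```python
import pandas as pd

# Read the CSV in chunks instead of materializing one giant frame.
# "data/big_file.csv" and the column names are placeholders.
frames = []
for chunk in pd.read_csv(
    "data/big_file.csv",
    usecols=["token", "tag"],  # keep only the columns the NER task needs
    chunksize=100_000,         # rows per chunk; tune to available RAM
):
    # filter/transform each chunk, keeping only what's needed downstream
    frames.append(chunk[chunk["tag"] != "O"])

df = pd.concat(frames, ignore_index=True)
print(len(df))
```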
DigThatData t1_j9s23ds wrote
Reply to comment by royalemate357 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
> Isn't there a difference between the two, because the latter concerns a human trying to pursue a certain goal (maximize user engagement), and giving the AI that goal.
in the paperclip maximization parable, "maximize paperclips" is a directive assigned to an AGI owned by a paperclip manufacturer; the AGI consequently concludes that things like "destabilize currency to make paperclip materials cheaper" and "convert resources necessary for human life into paperclip factories" are good ideas. so no, maximizing engagement at the cost of the stability of human civilization is not "aligned," in exactly the same way that maximizing paperclip production isn't aligned.
shoegraze t1_j9s22kq wrote
Reply to comment by royalemate357 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Yep if we die from AI it will be from bioterrorism well before we get enslaved by a robot army. And the bioterrorism stuff could even happen before “AGI” rears its head.
shoegraze t1_j9s1hd0 wrote
Reply to comment by Additional-Escape498 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
What I’m hoping is that EY’s long-term vision of AI existential risk is thwarted by the inevitable near-term issues, which will come to light and be raised to major governments and powerful actors, who will then mount a “collective action” type of response similar to what happened with nukes. The difference is that any old 15-year-old can’t just buy the materials to build a misaligned nuke, but they can buy a bunch of AWS credits and start training a misaligned model.
What you mention about a ChatGPT like system getting plugged into the internet is exactly what Adept AI is working on. It makes me want to bang my head against the wall. We can say goodbye soon to a usable internet because power seeking people with startup founder envy are going to just keep ramping these things up.
In general though, I think my “timelines” are longer than EY / EA by a bit for a doomsday scenario. LLMs are just not going to be the paradigm that brings “AGI,” but they’ll still do a lot of damage in the meantime. Yann had a good paper about what other factors we might need to get to a dangerous, agentic AI.
wind_dude t1_j9s1cr4 wrote
Reply to comment by royalemate357 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I'll see if I can find the benchmarks; I believe there are a few papers from IBM and DeepMind discussing it, plus a benchmark study in relation to FLAN.
ai_hero t1_j9s1790 wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I'm sorry. There's no way I'm going to listen to a dude with eyebrows like that.
royalemate357 t1_j9s125d wrote
Reply to comment by DigThatData in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
> instead of "maximizing paperclips," "it" is just trying to maximize engagement and click-through rate. and just like the paperclips thing, "it" is burning the world down trying to maximize the only metrics it cares about
Isn't there a difference between the two, because the latter concerns a human trying to pursue a certain goal (maximize user engagement) and giving the AI that goal? So arguably the latter is "aligned" (for some sense of the word) with the human that's using it to maximize their engagement, in that it's doing what a specific human intends it to do. Whereas the paperclip scenario is more like: the human tells the AI to maximize engagement, yet the AI has a different goal and chooses to pursue that instead.
adventurousprogram4 t1_j9s10z7 wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
EY is a total clown who inserts enough truth into his (incredibly lengthy) arguments that an air of correctness and solid reasoning permeates them, but most of his claims simply reduce to p(everyone dies | literally anyone but EY charts the course) ~= 1. I am not exaggerating: he has publicly gotten angry that others hadn't thought of everything he'd thought of before him, when it was so obviously correct.
MinaKovacs t1_j9s04eh wrote
Reply to comment by royalemate357 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
It's just matrix multiplication and derivatives. The only real advance in machine learning over the last 20 years is scale. Nvidia was very clever and made a math processor that can do matrix multiplication 100x faster than general-purpose CPUs. As a result, the $1bil data center required to make something like GPT-3 now only costs $100mil. It's still just a text bot.
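To make that point concrete, here's a toy example of the core loop: one linear layer trained with nothing but matrix multiplies and a hand-derived gradient (the shapes, data, and learning rate are arbitrary):

```python
import numpy as np

# "Matrix multiplication and derivatives" in miniature: a single linear
# layer y = x @ W fit to random targets with plain gradient descent.
rng = np.random.default_rng(0)
x = rng.normal(size=(32, 64))         # batch of 32 inputs, 64 features
W = rng.normal(size=(64, 10)) * 0.1   # weights
t = rng.normal(size=(32, 10))         # made-up targets

for step in range(100):
    y = x @ W                            # forward pass: matrix multiplication
    err = y - t
    loss = (err ** 2).mean()             # squared-error loss
    grad_W = x.T @ err * (2 / err.size)  # backward pass: more matrix multiplication
    W -= 0.01 * grad_W                   # gradient descent update

print(f"final loss: {loss:.4f}")
```

All a GPU contributes is making that `x @ W` fast at vastly larger shapes.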
synaesthesisx t1_j9rzvzj wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
No - his arguments and fears of mis-alignment are far overblown.
DigThatData t1_j9rzrzd wrote
Reply to comment by royalemate357 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
if a "sufficiently advanced AI" could achieve "its own goals" that included "humanity going extinct" (at least as a side effect) in such a fashion that humanity did the work of putting itself out of extinction on its own needing only the AGIs encouragement, it would. In other words, the issues I described are indistinguishable from the kinds of bedlam we could reasonably expect an "x-risk AGI" to impose upon us. ipso facto, if part of the alignment discussion is avoiding defining precisely what "AGI" even means and focusing only on potential risk scenarios, the situation we are currently in is one in which it is unclear that a hazardous-to-human-existence AGI doesn't already exist and is already driving us towards our own extinction.
instead of "maximizing paperclips," "it" is just trying to maximize engagement and click-through rate. and just like the paperclips thing, "it" is burning the world down trying to maximize the only metrics it cares about. "it" just isn't a specific agent, it's a broader system that includes a variety of interacting algorithms and platforms forming a kind of ecosystem of meta-organisms. but the nature of the ecosystem doesn't matter for the paperclip maximization parable to apply.
ArnoF7 t1_j9rzhcc wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I must say I am not very involved with the alignment community and do not have much exposure to their discussions, so I may miss some ideas, but as a researcher in robotics I am not super worried about some of his concerns just by reading his post.
Currently there is no clear roadmap in the robotics community for achieving an agent that can autonomously and robustly interact with the unstructured physical world, even in a relatively specialized environment. Robotics is still very far from its ChatGPT moment, and I think current socioeconomic conditions are rather adversarial to robotics R&D compared to other domains. So such an agent would have very limited physical agency.
If you assume current auto-regressive LLMs can somehow lead to a super-intelligent agent that just figures out the robotics/physical-interaction problem by itself, then sure, you could worry about it. But if we assume an omnipotent oracle, then we could worry about anything. It’s not so different from worrying about a scenario in which the laws of physics just change in the next instant and all biological creatures explode under the new laws. I mean, it’s possible, just not falsifiable, so I wouldn’t worry too much about it.
Btw, I want to stress that most of EY’s chains of thought that I’ve had the chance to read are logical. But his assumptions are usually very strong, and when you grant assumptions that strong, a lot of things become possible.
Also, I wouldn’t dismiss alignment research in general like many ML researchers do, precisely because I work with physical robots. There are many moments during my experiments when I think to myself, “this robot system could be a very efficient killing machine if people really tried” or “this system could make many people lose their jobs if it scaled economically.” So yeah, in general I think some “alignment” research has its merits. Maybe we should start by addressing the problems that have already happened or are imminent.
royalemate357 t1_j9rzbbc wrote
Reply to comment by wind_dude in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
>When they scale they hallucinate more, produce more wrong information
Any papers/literature on this? AFAIK they do better and better on fact/trivia benchmarks and whatnot as you scale them up. It's not like smaller (GPT-like) language models are factually more correct ...
royalemate357 t1_j9ryzg7 wrote
Reply to comment by DigThatData in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
>I think the whole "paperclip" metaphor descibres problems that are already here
Does it? My understanding of the paperclip metaphor is that an advanced AI will pursue its own goals that are totally unrelated to human goals, e.g. creating as many paperclips as possible. But AIs aren't advanced enough right now to be at this point.
As for what constitutes "x-risks", AFAIK it means "existential risk" which is like all of humanity going extinct. IMO the reason why people consider advanced AGIs an x-risk, and the others are not, is because the other problems you mentioned don't result in the extinction of *every* single human on Earth
currentscurrents t1_j9rxyne wrote
Reply to comment by wind_dude in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Most of these links are highly philosophical, and none of them addresses the question of how the brain could usefully maintain qubit stability at body temperature.
The evidence they present is very weak or nonexistent, and the New Scientist article acknowledges that this is not the mainstream neuroscience position.
Meanwhile, there's heaps of evidence that electrical and chemical signaling is involved: fiddling with either of them directly affects your conscious experience.
wind_dude t1_j9rwt70 wrote
Reply to comment by currentscurrents in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
>Quantum neural networks are an interesting idea, but our brain is certainly not sitting in a vat of liquid nitrogen, so intelligence must be possible without it.
look at the links I shared above.

Recreating actual intelligence, which is what the definition of AGI was six months ago, will not be possible on logic-based computers. I never said it's not possible at all; there are a number of reasons it is not currently possible, the number one being that we don't have a full understanding of intelligence, and recent theories suggest it's not logic-based as previously theorized, but quantum-based.
Look at the early history of attempting to fly: for centuries, humans strapped wings to their arms and attempted to fly like birds.
mano-vijnana t1_j9s5zl4 wrote
Reply to comment by SchmidhuberDidIt in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Both of them are more positive than EY, but both are still quite worried about AI risk. It's just that they don't see doom as inevitable. This is the sort of scenario Christiano worries about: https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like
And this is Ngo's overview of the topic: https://arxiv.org/abs/2209.00626