Recent comments in /f/MachineLearning
WarAndGeese t1_j9sj481 wrote
Reply to comment by [deleted] in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I agree about the callousness, and that's without artificial intelligence too. Global power balances have shifted during periods of rapid technological development, and that development created control vacuums and conflicts that were resolved by war. If we learn from history we can plan for it and prevent it, but the same types of fundamental underlying shifts are happening now. We can say that international global financial incentives act to prevent worldwide conflict, but that only goes so far. Everything I'm describing is on the trajectory without neural networks as well; they are just one of many rapid shifts in political economy and productive efficiency.
In the same way that people mobilized at the start of the Russian invasion of Ukraine to try to prevent nuclear war, we should all be vigilant in trying to demilitarise and democratise globally to prevent any war. The global nuclear threat isn't even over, and it's regressing.
sam__izdat t1_j9sj2zl wrote
Reply to comment by impossiblefork in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
> I think in the opposite way: if alignment is possible, then alignment is profoundly dangerous.
Exactly. What is this neoliberal fever dream? "But what if the computer doesn't do what they want?!" -- my god, what if it does? Are we living on the same planet? Have you seen what they want?
I love how the core of the panic is basically:
"Oh my god, what if some kind of machine emerged, misaligned with human interests and totally committed to extracting what it wants from the material world, no matter the cost, seeing human life and dignity as an obstruction to its function?!"
Yeah, wow... what if?! That'd be so crazy! Glad we don't have anything like that.
sam__izdat t1_j9si9wx wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Worry less about misalignment of Skynet, the impending singularity and the rise of the robots, which is science fiction, and worry more about misalignment of class interests and misalignment of power, which is our reality.
For the former, it's still mostly an ambitious long-term goal to simulate the world's simplest nematodes. There's hardly any reason to believe anyone is appreciably closer to AGI now than they were in the 1950s. For the latter, though, there are well-founded concerns that automation will be used for surveillance, disinformation, manipulation, class control, digital Taylorism and other horrifying purposes, as the species knowingly accelerates toward extinction by ignoring systemic failures like AGW and nuclear war, which pose actual, imminent and growing existential risks -- risks that will be compounded by giving state and capital tools to put the interests of power and short-term ROI above even near-term human survival, let alone human dignity or potential.
"What if this pile of linear algebra does some asimov nonsense" is not a serious concern. The real concern is "what if it does exactly what was intended, and those intentions continue to see omnicide an acceptable side effect."
martianunlimited t1_j9sh43x wrote
Reply to [D] Model size vs task complexity by Fine-Topic-6127
Not exactly what you are asking, but there is a paper on scaling laws showing how the performance of transformers scales with the amount of data and model size (assuming the training data is representative of the distribution), and comparing them to other network architectures, at least for large language models: https://arxiv.org/pdf/2001.08361.pdf. We don't have anything similar for other types of data.
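For a feel of the numbers, here's a minimal sketch of the parameter-count power law from that paper; the constants are the values Kaplan et al. report for their specific setup, so treat them as illustrative rather than universal:

```python
# Power-law fit from "Scaling Laws for Neural Language Models" (Kaplan et al., 2020):
# L(N) = (N_c / N) ** alpha_N, where N is the non-embedding parameter count.
# alpha_N ~ 0.076 and N_c ~ 8.8e13 are the paper's fitted constants for their setup.
def predicted_loss(n_params: float, n_c: float = 8.8e13, alpha_n: float = 0.076) -> float:
    """Predicted test loss (nats/token) from parameter count alone."""
    return (n_c / n_params) ** alpha_n

for n in (1e6, 1e8, 1e10):
    print(f"{n:.0e} params -> predicted loss ~{predicted_loss(n):.2f}")
```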
linearmodality t1_j9segxb wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I don't worry much at all about the AI safety/alignment concerns described by Eliezer Yudkowsky. I don't find his arguments particularly rigorous: they are typically based on premises that are either nonsensical or wrong, and they don't engage meaningfully with current practice in the field. This is not to say that I don't worry about AI safety. Stuart Russell has done good work mapping out the AI alignment problem, and if you're looking for more rigorous arguments, sounder conclusions, and work that people in the field actually respect, I'd recommend his. The bulk of opinions I've seen from people in the field on Yudkowsky and his edifice range from finding the work to be of dubious quality (but tolerable) to judging it actively harmful.
astrange t1_j9se7ri wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Yud is a millenarian street preacher; his concept of evil superintelligent AGI is half religion and half the old SF books he read. It has no resemblance to current research, and the field isn't going in directions similar to what he imagines it's doing.
(There's not even much reason to believe "superintelligence" is possible, that it would be helpful on any given task, or even that humans are generally intelligent.)
astonzhang t1_j9sd3mw wrote
Reply to comment by IluvBsissa in [R] Multimodal Chain-of-Thought Reasoning in Language Models - Amazon Web Services Zhuosheng Zhang et al - Outperforms GPT-3.5 by 16% (75%->91%) and surpasses human performance on ScienceQA while having less than 1B params! by Singularian2501
The human performance number was taken from the paper by Lu et al.
terath t1_j9sd368 wrote
Reply to comment by perspectiveiskey in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
This is already happening, but the problem is humans, not AI. Even without AI we are descending into an era of misinformation.
astonzhang t1_j9scuwn wrote
Reply to comment by zisyfos in [R] Multimodal Chain-of-Thought Reasoning in Language Models - Amazon Web Services Zhuosheng Zhang et al - Outperforms GPT-3.5 by 16% (75%->91%) and surpasses human performance on ScienceQA while having less than 1B params! by Singularian2501
We ran experiments on 4 NVIDIA Tesla V100 32GB GPUs.
skelly0311 t1_j9scr5c wrote
Reply to [D] Model size vs task complexity by Fine-Topic-6127
First thing to note: the best way to improve generalisability and accuracy is to have data that is as accurate as possible. If your data is trash, it doesn't matter how many parameters your classifier uses; it will not produce good results.
Now, in my experience with transformer neural networks: if the task is simple binary classification, or multi-label classification with fewer than 8 or so labels (maybe more), the small models (~14 million parameters) perform similarly to the base models (~110 million parameters). Once the objective function becomes more complicated, such as training a zero-shot learner, more parameters mean a much lower loss. In that case, the large models (~335 million parameters) gave a significant improvement over the base models (~110 million parameters).
It's hard to define and quantify how complicated an objective function is, but just know that more parameters doesn't always mean better results if the objective function is simple enough. A quick way to compare model sizes is sketched below.
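As a rough illustration of the size tiers mentioned above, here's a sketch that loads two differently sized encoders for the same binary task and counts their parameters; the checkpoint names are stand-ins I picked (real Hugging Face models, but their exact parameter counts won't match the figures above):

```python
# Sketch: compare a "small" and a "base" encoder on the same 2-label task.
# Requires: pip install transformers torch
from transformers import AutoModelForSequenceClassification

for name in ["prajjwal1/bert-mini", "bert-base-uncased"]:
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: ~{n_params / 1e6:.0f}M parameters")
```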
uristmcderp t1_j9scj7t wrote
Reply to comment by Additional-Escape498 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
None of those concerns have to do with the intrinsic nature of machine learning, though. Right now it's another tool that can automate tasks previously thought impossible to automate, and sometimes it does those tasks much better than humans could. It's another wave like the Industrial Revolution and the assembly line.
Some people will inevitably use this technology to destroy things on a greater scale than ever before, like using the assembly line to mass produce missiles and tanks. But trying to put a leash on the technology won't accomplish anything because technology isn't inherently good or evil.
Now, if the state of ML were such that sentient AI was actually on the horizon, not only would this way of thinking be wrong, we'd need to rethink the concepts of humanity and morality altogether. But it's not. Not until these models manage to improve at tasks they were not trained to do. Not until these models become capable of accurately evaluating their own performance without human input.
cumulonimbecile66 t1_j9scc8a wrote
This is pretty cool: https://playground.tensorflow.org/
ArnoF7 t1_j9sbjc8 wrote
Reply to comment by LetterRip in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Yes, I am aware of the paper you linked, although I can’t say I am super familiar with the details.
This is very cool and solves some of the problems in robotics, but not a whole lot. I'm not discrediting the authors (especially Fei Xia, whom I really admire as a robotics researcher, and of course Sergey Levine, who is probably my favorite), but the idea of fusing NLP and robotics to create a robot that can understand commands and serve you is not super new. Even 10+ years ago there was a famous video from the ROS developer Open Robotics (at the time it was still Willow Garage, IIRC) in which they tell the robot to grab a beer and the robot navigates the entire office and fetches it from the kitchen. Note that this is not the innovation these papers claim (these papers are investigating a possibility rather than solving a problem), but I assume it is what everyone takes to be the bottleneck for service robots, which in reality it isn't.
impossiblefork t1_j9sacbf wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I think in the opposite way: if alignment is possible, then alignment is profoundly dangerous.
If alignment is possible, then the AI can be aligned with the interests of the wealthy capital owners who fund its development, and can be used to basically control the world.
Meanwhile, if alignment is impossible, ordinary people who have access to these hypothetical future 'superintelligences' can convince these entities to do things that they like, but which are undesired by the model-owning class.
For this reason, if we are on some kind of path to super AI, the development of technology to permit value alignment must be prevented.
mdda t1_j9s8ptw wrote
Reply to comment by cccntu in [P] minLoRA: An Easy-to-Use PyTorch Library for Applying LoRA to PyTorch Models by cccntu
FWIW, I gave a shout out to minLoRA at our Machine Learning MeetUp (in Singapore) last night : https://redcatlabs.com/2023-02-23_MLSG_Frameworks/#/15/2
perspectiveiskey t1_j9s8578 wrote
Reply to comment by [deleted] in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
> It's amazing to me how easily the scale of the threat is dismissed by you after you acknowledge the concerns.
I second this.
Also, the effects of misaligned AI can be mediated entirely through so-called meat-space: an AI can sow astonishing havoc simply by damaging our ability to know what is true.
In fact, I find this to be the biggest danger of all. We already have a scientific publishing "problem," in that we have arrived at an era of diminishing returns and extreme specialization. I simply cannot imagine the real-world damage that will be inflicted when (not if) someone starts pumping out "very legitimate-sounding but factually false papers on vaccine side-effects".
I just watched this today, where he talks about using automated code generation for code verification and tests. The man is brilliant and the field is brilliant, but one thing is certain: the complexity involved far exceeds individual humans' ability to fully comprehend it.
Now combine that with this and you have a true recipe for disaster.
Leptino t1_j9s73ml wrote
Reply to comment by Additional-Escape498 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
One would have to consider the ultimate consequences (including paradoxical ones) of those things too. Would it really be catastrophic if social media became unusable for the average user? The 1990s are usually considered the last halcyon era... Maybe that's a feature, not a bug!
As far as drone swarms go, those are definitely terrifying, but then there will be drone-swarm countermeasures. Also, is it really much more terrifying than Russia throwing wave after wave of humans at machine-gun nests?
I view a lot of the ethics concerns as a bunch of people projecting their fears onto a complicated world, and then drastically overextrapolating. This happened with the industrial age, electricity, the nuclear age, and so on and so forth.
Mefaso t1_j9s6l7i wrote
Reply to comment by royalemate357 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
>Like, the same AIs that can cure cancer can also create highly dangerous bioweapons or nanotechnology.
A good example of this:
Mefaso t1_j9s66qq wrote
Reply to comment by Additional-Escape498 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
>I remember as recently as 2015 at ICLR/ICML/NIPS you’d get side-eye for even bringing up AGI.
You still do, imo rightfully so
testuser514 t1_j9s60rt wrote
It really depends on what you’re trying to learn
pyfreak182 t1_j9slq8a wrote
Reply to [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
It helps that the math behind backpropagation (i.e., matrix multiplications) is easily parallelizable. The computations in the forward pass are independent across training examples and can be done in parallel. The same is true for the backward pass, which computes the gradient for each example in a batch independently before aggregating them.
And we have hardware accelerators like GPUs that are designed to perform large amounts of parallel computations efficiently.
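For concreteness, here's a toy PyTorch sketch (my own illustration) showing that a single batched matmul covers the forward pass for all examples at once, and that the backward pass is again just matmuls:

```python
import torch

x = torch.randn(256, 512)                 # a batch of 256 independent examples
W = torch.randn(512, 128, requires_grad=True)

y = x @ W                                 # forward: one matmul for the whole batch
loss = y.pow(2).mean()
loss.backward()                           # backward: dL/dW = x.T @ dL/dy, also one matmul

print(W.grad.shape)                       # torch.Size([512, 128])
```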
The success of deep learning is just as much about implementation as it is theory.