Recent comments in /f/singularity
Ezekiel_W OP t1_jbfd04h wrote
>Researchers at the University of East Anglia have developed a new drug that works against all of the main types of primary bone cancer.
>
>The team used next generation sequencing to identify types of genetic regulators called small RNAs that were different during the course of bone cancer progression.
>
>The breakthrough drug increases survival rates by 50 per cent without the need for surgery or chemotherapy. And unlike chemotherapy, it doesn't cause toxic side effects like hair loss, tiredness and sickness.
Rofosrofos t1_jbf7eh5 wrote
Reply to What might slow this down? by Beautiful-Cancel6235
Hopefully government regulation will slow progress down to the point where we can focus on AI safety and alignment. Let's get those sorted so that we don't get ourselves killed in a couple of years.
vivehelpme t1_jbexk7n wrote
Reply to comment by CertainMiddle2382 in What might slow this down? by Beautiful-Cancel6235
>As you know well, zero-shot learning algorithms beat anything else
It doesn't create a better training set out of nothing.
> it allows them to explore parts of the gaming landscape that were never explored by humans.
Based on generalizing a premade dataset, made by humans.
If an AI could just magically zero-shot a better training set out of nowhere, we wouldn't bother making a training set; we'd just initialize everything to random noise and let the algorithm deus-ex-machina it to superintelligence out of randomness.
>What are the testable characteristics that would satisfy you to declare the existence of an ASI?
Something completely independent is a good start for calling it AGI, and then we can start thinking about whether ASI is a definition that matters.
>For me it is easy: a higher IQ than any living human, by definition. Would that change something? You can argue it doesn't; I bet it will change everything.
So an IQ-test-solving AI is superintelligent despite not being able to tell a truck apart from a house?
CertainMiddle2382 t1_jbeq0si wrote
Reply to comment by vivehelpme in What might slow this down? by Beautiful-Cancel6235
You are obviously mistaken.
As you know well, zero-shot learning algorithms beat anything else; I saw a DeepMind analysis postulating that it allows them to explore parts of the gaming landscape that were never explored by humans.
And you seem to be moving the goalposts as you go along.
What are the testable characteristics that would satisfy you to declare the existence of an ASI?
For me it is easy: a higher IQ than any living human, by definition. Would that change something? You can argue it doesn't; I bet it will change everything.
play_yr_part t1_jbepz8j wrote
Reply to comment by Baturinsky in What might slow this down? by Beautiful-Cancel6235
Fraud isn't going to stop it. Terrorism, depending on the scale of the attack, may halt it though.
vivehelpme t1_jbep9br wrote
Reply to comment by CertainMiddle2382 in What might slow this down? by Beautiful-Cancel6235
>I don't get your point.
I guess my point is that "superintelligent by your definitions" doesn't amount to much:
>The programmer doesn't speak Klingon, though the program can write good Klingon.
It has generalized a human-made language.
>AlphaZero's programmers don't play Go, though the program can beat the best human Go players in the world.
It plays at a generalized high-elite level. It's also a one-trick pony. It's like saying a chainsaw is superintelligent because it can be used to saw down a tree faster than any lumberjack with an axe.
>« Super intelligent AI » will then by definition only need to show a higher IQ than either its programmers or the smartest human.
So we could make an AlphaGo that only solves IQ-test matrices; it would be superintelligent by your definition but trash at actually being intelligent.
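A toy caricature of that, to be concrete (a made-up sketch in plain Python; the puzzle format and the "solver" are invented for illustration, not any real benchmark):

```python
# Toy "IQ-test solver": it completes 3x3 number matrices whose rows
# follow a constant additive step, and it can do literally nothing else.
# Superhuman on this one narrow task, useless at everything else.

def solve_matrix(m):
    # m is a 3x3 grid with the bottom-right cell missing (None)
    step = m[0][1] - m[0][0]   # infer the step from the first row
    return m[2][1] + step      # apply it to complete the last row

puzzle = [[2, 4, 6],
          [3, 5, 7],
          [4, 6, None]]
print(solve_matrix(puzzle))  # 8
```

It scores perfectly on its "IQ test" and still can't tell a truck from a house.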
>I really don't see the discussion here; these are only definitions.
Yes, and the definition is that AI is trained on the idea of generalized mimicry; it's all about IMITATION, NOT INNOVATION.
This is all there is: you calculate a loss value based on how far the current iteration lands from a human-defined gold standard, and edit things to get closer. Everything we have produced in wow-factor AI is about CATCHING UP to human ability; there's nothing in our theories or neural-network training practices that is about EXCEEDING human capabilities.
The dataset used to train a neural network is the apex of performance it can reach. At best you land at the level of a generalized, consistently very smart human.
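To make the loss calculation concrete, here's a minimal sketch (a toy fit in plain Python; the one-parameter "model" and the numbers are invented for illustration): the loss is defined entirely as distance from human-provided labels, so driving it to zero means perfectly imitating the gold standard and nothing beyond it.

```python
# Gradient descent toward a human-labeled gold standard.
# Zero loss = perfect imitation of the labels, not surpassing them.

human_gold_standard = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, human label)

w = 0.0    # single model parameter; model(x) = w * x
lr = 0.05  # learning rate

for step in range(200):
    # gradient of the mean squared distance from the human labels
    grad = sum(2 * (w * x - y) * x for x, y in human_gold_standard)
    w -= lr * grad / len(human_gold_standard)

print(w)  # converges toward 2.0, the rule implicit in the human labels
```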
Beautiful-Cancel6235 OP t1_jbe6946 wrote
Reply to comment by Tiamatium in What might slow this down? by Beautiful-Cancel6235
I love quantum effects and, hopefully, our brain is just that special and those effects can’t be easily replicated.
It’s this sort of stuff that I wish would slow down:
Granted, they've had this technology for a few years now, but AI has sped it up.
Tiamatium t1_jbdk7us wrote
Reply to What might slow this down? by Beautiful-Cancel6235
A few things, depending on what you mean by "it". If you're talking about AGI, then I could actually come up with a small list:
- Funding and the cost of AI in terms of work delivered. If we realize that an AI with the intelligence of a mouse or a stupid dog can do everything we need, and that it's rather simple to create an AI like that but a lot harder to create one with human-level intelligence, there simply won't be any financial incentive to create a smarter AI. Frankly, I see this as the most likely possibility.
- A large-scale military conflict in East Asia, say if China invades Taiwan or North Korea invades South Korea. Our chip-manufacturing capabilities are concentrated in that one small region, and this is in a way Taiwan's insurance policy.
- Now this is the interesting stuff. It's perfectly possible that consciousness is more complex than we think. There are a few very well-respected scientists who believe consciousness might be a result of weird quantum effects (in a way, a biological quantum computer), in which case our AI is further from AGI than most people think. It's important to note that quantum effects emerge all the time in biochemistry, for example in the unholy union of physics, chemistry, and biology known as photosynthesis, where every step of the process, from the moment energy is collected in the antenna complex, exploits quantum effects.
CypherLH t1_jbddriq wrote
Reply to comment by s2ksuch in What might slow this down? by Beautiful-Cancel6235
A real war over Taiwan would likely disrupt sea trade routes to South Korea and Japan as well. If nothing else, insurance costs will soar for shipping companies, increasing transportation costs. Worst case, if the war is wide enough, the broader western Pacific could become a maritime war zone, cutting transportation links even more deeply.
Ishynethetruth t1_jbd5xf9 wrote
Reply to What might slow this down? by Beautiful-Cancel6235
It's slowing down already.
Yomiel94 t1_jbcxfi8 wrote
Reply to comment by MSB3000 in What might slow this down? by Beautiful-Cancel6235
>machines don't do what you intend, they do what they're made to do.
It seems like whether you use top-down machine-learning techniques to evolve a system according to some high-level spec, or bottom-up conventional programming to rigorously and explicitly define behavior, what's unspecified (the ML case) or misspecified (the conventional case) can bite you in the ass lol… it's just that ML allows you to generate way more (potentially malignant) capability in the process.
There are also possible weird inner-alignment cases where a perfectly specified optimization process still produces a misaligned agent. It seems increasingly obvious that we can't just treat ML as some kind of black magic past a certain capability threshold.
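The outer-misspecification half of that is easy to demo even in a toy setting (a hypothetical example in plain Python; the "robot", actions, and reward are invented): reward dirt collected instead of the room ending up clean, and the best-scoring policy dumps the bag out and re-collects it.

```python
# Proxy reward ("dirt collected") vs. intended goal ("room ends up clean").

def run(policy, steps=20):
    dirt, bag, reward = 10, 0, 0   # dirt on floor, dirt in bag, proxy reward
    for _ in range(steps):
        action = policy(dirt)
        if action == "clean" and dirt > 0:
            dirt -= 1; bag += 1
            reward += 1            # +1 per unit of dirt collected
        elif action == "dump" and bag > 0:
            dirt += bag; bag = 0   # dumping puts the dirt back on the floor
    return reward, dirt

honest = lambda dirt: "clean"                           # what we intended
hacker = lambda dirt: "clean" if dirt > 0 else "dump"   # what the proxy favors

print(run(honest))  # (10, 0): modest reward, clean room
print(run(hacker))  # (19, 1): more reward, room still dirty
```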
Manticor3Theoriginal t1_jbcnza0 wrote
Reply to comment by MrGoodGlow in What might slow this down? by Beautiful-Cancel6235
Jesus Christ, it's seeming very unlikely that we will avoid a collapse! I've heard that a really effective response would be for governments to suddenly embrace scientific progress, basing laws on social science and technology.
MrGoodGlow t1_jbcewa2 wrote
Reply to comment by Manticor3Theoriginal in What might slow this down? by Beautiful-Cancel6235
Appreciate your reply. I apologize for my jaded take on your stance on protected animals. If I could, I'd rephrase it to "animals that most people don't care about".
We live in the environment.
I'm on mobile right now, so I can't provide sources (but literally Google any soundbite I'm about to spew and a mainstream source will cite it).
Supply chain collapse will likely occur before "Venus by Thursday".
Our entire economic model of logistics has been built on two underlying principles over the last 50-ish years: "just-in-time delivery" and consolidating regional factories into mega global factories.
Essentially we've exchanged resiliency for efficiency. This is bad because as climate change disasters ramp up they cause massive disruptions.
Example: during the Texas freeze a couple of years ago, the world's largest PVC supplier (somewhere around 57% of supply) shut down for about a month, and it caused a whiplash effect that impacted the globe for about six months afterwards. (1)
Last year there was a freak hurricane near Oman that, had it hit about a hundred miles further north, would have impacted 20% of the world's oil production.
This summer, major rivers in China, Europe, and the U.S., to name a few, ran at record lows. The Mississippi was so low that we had a massive backlog of barges that couldn't move up and down the river, and we had to expend a lot of resources dredging it. (2)
Natural disasters are costing more and more. Something like the last 5 years of hurricanes alone have cost as much as the 20 years before that combined.
In addition, our energy return on investment for oil (which our entire global economy is built on, and which renewables will take decades to even possibly replace) is diminishing.
Canada had major roads wiped out, Pakistan flooded, and there was the heat dome over Canada that killed over a billion sea creatures.
It really is a math equation: there will be a point where the damage natural disasters cause exceeds what we can afford to repair and rebuild.
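As a toy version of that math equation (completely made-up growth rates, just to show a crossover point exists):

```python
# Assumed numbers for illustration: annual disaster damage compounding
# at ~7%/yr vs. repair capacity growing at ~2%/yr.
damage, capacity = 100.0, 300.0   # arbitrary units; capacity starts 3x damage
year = 0
while damage <= capacity:
    damage *= 1.07
    capacity *= 1.02
    year += 1
print(year)  # 23: under these assumptions, damage overtakes repair capacity
```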
We won't be able to focus on building new and better technology because we'll simply be trying to survive the next disaster right around the corner. Our technology systems require massive global effort and factory specialization.
(2) https://www.reuters.com/world/us/us-barge-backlog-swells-parched-mississippi-river-2022-10-04/
Manticor3Theoriginal t1_jbc85jv wrote
Reply to comment by MrGoodGlow in What might slow this down? by Beautiful-Cancel6235
You know what, that's a really good point, and I did not see a lot of that before now. Thanks bro, hoping I can be a little more correct with what I say in the future. But to be fair, I really do care about the environment and those animals.
RabidHexley t1_jbb8lgk wrote
Reply to comment by DragonForg in What might slow this down? by Beautiful-Cancel6235
I'd be surprised, given this isn't isolated to the States. Any place that can acquire GPUs can theoretically perform AI research. And the potential bad outcomes of AI development don't really care about geographic location, so there's no benefit to stopping only the research being done here.
RabidHexley t1_jbb7h2z wrote
Reply to comment by DixonJames in What might slow this down? by Beautiful-Cancel6235
It being open seems unequivocally better in my eyes, even setting aside optimism about technological progress.
It's better for lots of actors to actually know what the cutting edge is. More eyes means more solutions and more scrutiny. We want all the best minds possible looking at this stuff.
Outside of actively outlawing ALL development of machine learning and neural networks (basically tracking down anything that looks remotely like neural-network development and sending people to prison), and going to war with nations that don't comply, this isn't the kind of tech you can stop; you can only slow it down and push it into the shadows or into other people's hands. And if you're concerned about uncontrollable AI agents, that's not a remotely better situation to be in, even if you've slowed the tech's progress by however many years.
MrGoodGlow t1_jbb6l6k wrote
Reply to comment by Manticor3Theoriginal in What might slow this down? by Beautiful-Cancel6235
We're already seeing impacts to humans. It's not going to just be some species you don't care about.
Cocoa losses at 38%
Coffee output down 20%
Wheat is also suffering
Sugarcane down 18%
https://www.reuters.com/article/us-brazil-sugar-crops-idUSKBN2FB1QG
Mustard production down 20-ish percent
https://www.bbc.com/news/world-europe-61529874
Natural disasters forced an estimated 3.4 million people in the U.S. to leave their homes in 2022, according to Census Bureau data collected earlier this year.
Keblue t1_jbb27v4 wrote
Reply to comment by [deleted] in What might slow this down? by Beautiful-Cancel6235
pass the link homie
SWATSgradyBABY t1_jbaeyet wrote
Reply to What might slow this down? by Beautiful-Cancel6235
The govt can't regulate because the govt is controlled by the companies making the tech. You're expecting the companies involved in the race to regulate themselves. That's not a very logical expectation.
Beautiful-Cancel6235 OP t1_jba57w9 wrote
Reply to comment by DragonForg in What might slow this down? by Beautiful-Cancel6235
Where?
IluvBsissa t1_jb9rtlm wrote
Reply to comment by No_Ninja3309_NoNoYes in What might slow this down? by Beautiful-Cancel6235
I don't think we will need more computing power to reach AGI in 10-20 years.
[deleted] t1_jb9q4aa wrote
Reply to What might slow this down? by Beautiful-Cancel6235
[deleted]
CertainMiddle2382 t1_jb9m08p wrote
Reply to comment by vivehelpme in What might slow this down? by Beautiful-Cancel6235
I don't get your point.
The programmer doesn't speak Klingon, though the program can write good Klingon. AlphaZero's programmers don't play Go, though the program can beat the best human Go players in the world.
By definition being better than a human at something means being « super intelligent » at that task.
Intelligence theory postulates G, and that it can be approximated with an IQ test.
« Super intelligent AI » will then by definition only need to show a higher IQ than either its programmers or the smartest human.
Nothing else.
Postulating the existence of G, it is quite possible that an ASI (again, by definition) will be better at other tasks not tested by the IQ test.
Rewriting a higher-IQ version of itself, for example.
Recursively.
I really don't see the discussion here; these are only definitions.
vivehelpme t1_jb9jvtl wrote
Reply to comment by CertainMiddle2382 in What might slow this down? by Beautiful-Cancel6235
>I don't get how you can trivialize an LLM seemingly starting to show competency in the very programming language it is written in.
The person who wrote the training code already had competency in that language; that didn't make the AI-programmer duo superhuman.
And then you decide to train the AI on the output of that programmer, so the AI-programmer duo becomes just the AI. But from where does it learn to innovate its way into a superhuman, super-AI, super-everything state? It can generalize what a human can do. Well, that's good, but its creator could also generalize what a human can do.
Where is the miracle in this equation? You can train the AI on machine code and let it self-modify until the code is completely impossible for human beings to troubleshoot, but the system runs on 64 GPUs instead of 256. That makes it cheaper to run; it doesn't make it smarter.
>The very concept of the singularity is self-improving AI pushing into ASI.
That's an interpretation, a scenario. The core of it all comes from staring at growth graphs for too long and realizing that exponential growth might exceed the human capacity to follow it.
Wikipedia says:
>The technological singularity—or simply the singularity[1]—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.
But how is that really different from:
>The technological singularity—or simply the singularity[1]—is a statistical observation of the current state of society, where growth at a large scale has resulted in innovation and data-collection rates that exceed the unaided human attention span; some claim this might result in unforeseeable changes to human civilization. On a global scale this is generally agreed to have happened around the invention of writing thousands of years ago (as there exists too much text for anyone to read in a lifetime), but some argue that it coincides with the more recent invention of the internet, as only then did you have the option to interactively access the global state of innovation and progress and realize that you cannot keep up with it even if you spend 24 hours a day reading scientific articles.[2] An online subculture argues that superhuman AI would be required for this statistical observation to be really true (see: no true Scotsman fallacy), despite their own admitted inability to even follow the real-time innovation rate in just their field of worship: AI.
ihateshadylandlords t1_jbfesrc wrote
Reply to Breakthrough drug works against all the main types of primary bone cancer by Ezekiel_W
> But a new study published today shows how a new drug called 'CADD522' blocks a gene associated with driving the cancer's spread, in mice implanted with human bone cancer.
>
> The drug is now undergoing formal toxicology assessment before the team assemble all of the data and approach the MHRA for approval to start a human clinical trial.
So many things work in mice, but not in humans. Hopefully this one gets cleared and works in humans.
!RemindMe 7 years