Recent comments in /f/singularity
CertainMiddle2382 t1_jb9gdce wrote
Reply to comment by vivehelpme in What might slow this down? by Beautiful-Cancel6235
Hmm, it's not as if 2023 is only a little bit different from 2020, AI-wise.
The very concept of singularity is self improving AI pushing into ASI.
I don’t get how you can trivialize an LLM seemingly starting to show competency in the very programming language it is written in.
What new particular characteristic of an AI would impress you more and show things are accelerating?
I believe humans get desensitized very quickly; show them an ASI doing beyond-Standard-Model physics and they will still manage to say: so what? I've been expecting more for at least six months now…
DixonJames t1_jb9gd6p wrote
Reply to comment by DungeonsAndDradis in What might slow this down? by Beautiful-Cancel6235
It may be too late for regulation, which was the best hope of slowing non-secret AI, but that may have given the secret stuff an advantage. And let's face it, better our overlords be robotic vacuum cleaners than Terminators...
vivehelpme t1_jb9flpl wrote
Reply to comment by CertainMiddle2382 in What might slow this down? by Beautiful-Cancel6235
>We are nearing self improving code IMO.
Ah, the recurrent bootstrap-to-orbit meme. It's just around the corner, behind the self-beating dead horse.
ahtoshkaa2 t1_jb9czan wrote
Reply to comment by NothingVerySpecific in What might slow this down? by Beautiful-Cancel6235
Same) Haha. Thank god for ChatGPT:
The comment is referring to two different machine learning concepts: back-propagation and meta-back-propagation, and how they can be used to modify neural networks.
Back-propagation is a supervised learning algorithm used in training artificial neural networks. It is used to modify the weights and biases of the neurons in the network so that the network can produce the desired output for a given input. The algorithm uses gradient descent to calculate the error between the predicted output and the actual output, and then adjusts the weights and biases accordingly.
Meta-back-propagation is an extension of back-propagation that is used for meta-learning, which is learning to learn. It involves modifying the neural network so that it can learn to perform novel tasks more efficiently.
The comment also mentions using evolutionary techniques to cultivate a population of models in language models trained on code. This refers to using genetic algorithms to evolve a population of neural networks, where the best-performing networks are selected and combined to create new generations of networks. This process is known as evolution through large models.
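For anyone who wants the gist in code, here's a toy sketch of back-propagation on a single neuron (a deliberate oversimplification: one weight, one bias, squared error; real networks have many layers and use a framework's autograd):

```python
# Minimal illustration of back-propagation / gradient descent
# on a single neuron fitting y = w*x + b. Hyperparameters are
# arbitrary choices for the demo, not recommendations.

def train(data, lr=0.1, epochs=200):
    """Fit y = w*x + b by gradient descent on squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = w * x + b      # forward pass: predicted output
            err = pred - y        # error between prediction and target
            # backward pass: push the error back to each parameter
            w -= lr * err * x     # dE/dw = err * x
            b -= lr * err         # dE/db = err
    return w, b

w, b = train([(1, 2), (2, 4), (3, 6)])
print(round(w, 2), round(b, 2))  # w approaches 2, b approaches 0
```

The same forward-error/backward-adjust loop is what deep learning frameworks automate across millions of parameters.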
ManosChristofakis t1_jb9b8fc wrote
Reply to What might slow this down? by Beautiful-Cancel6235
-
AI alignment. If a large-scale attack is launched that tries to interfere with US nukes, you can bet your ass that AI will disappear from everyday life overnight. Obviously we don't have to get to such extreme cases for AI to be regulated or straight up never leave the lab in the first place.
-
Human alignment. If AI progresses so fast that everyone loses their jobs, businesses won't have any customers at all and will all go bankrupt, including the AI-making businesses themselves.
-
Lack of training data, obviously.
-
In case our hardware has reached, or is close to reaching, its efficiency limit, providing more computational capacity might require more hardware that is less efficient to use, making computational power increase linearly instead of exponentially (in which case cost might also increase as fast as, or faster than, computational power).
-
Limits of current architectures. Problems like hallucination. Also, I read a paper saying that LLMs shape their output to match the prompt given by the user: they will reply like a neuroscientist to a neuroscientist or like a philosopher to a philosopher. This may limit many uses in places like healthcare, because biases and people not knowing what they are talking about can make the AI reach wrong conclusions. There may be other limitations which I, and scientists themselves, aren't aware of yet.
-
Costs. Obviously it takes a lot to buy and maintain the infrastructure: cloud, GPUs, electricity, and training are all significant costs right now with current LLMs, which have parameters in the billions and deal only with text, but right now these costs are doable. Imagine if we try to create a multimodal AI that does the job of an engineer. It would require years or decades of training (because you can't speed up the training process by cramming decades of human-time training into days like you do on a PC). It would maybe require hundreds of trillions (if not quadrillions) of parameters, and it would probably have to process information in real time, which would probably be very expensive. You would also have to pay for and maintain its robot body and the accommodating infrastructure.

There probably are limits even with current LLMs. Current LLMs bill you per token of the bot's reply as well as its context. The best current LLMs have thousands of words of context, and right now every few replies probably cost pennies (or less). But if you try to create an LLM with a context of millions of words (for example a personal assistant or a robot friend), the cost of every single reply, let alone continual replies, will be prohibitive. This, assuming these things are even possible.
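To make the long-context cost point concrete, a back-of-the-envelope sketch (the price and tokens-per-word figures below are assumptions for illustration, not real quotes from any provider):

```python
# Rough cost of a reply when you're billed on context + reply tokens.
# Both constants are made-up, roughly 2023-era ballpark numbers.

PRICE_PER_1K_TOKENS = 0.002   # assumed $ per 1,000 tokens
TOKENS_PER_WORD = 1.3         # rough English average

def cost_per_reply(context_words, reply_words=200):
    tokens = (context_words + reply_words) * TOKENS_PER_WORD
    return tokens / 1000 * PRICE_PER_1K_TOKENS

# Thousands-of-words context: well under a cent per reply.
print(f"${cost_per_reply(3_000):.4f}")
# Million-word context: a couple of dollars for EVERY reply.
print(f"${cost_per_reply(1_000_000):.2f}")
```

Since the whole context is re-billed on each turn, the per-reply cost scales with context length, which is exactly why a million-word "robot friend" gets prohibitive fast.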
QuantumPossibilities t1_jb9apct wrote
Reply to comment by visarga in What might slow this down? by Beautiful-Cancel6235
The first 5nm chip in production was designed by a US company, Marvell. They are a fabless designer and yes, rely on TSMC and others as they diversify production. TSMC has understood that their advantage is in production and has gone out of their way not to compete with the companies they manufacture product for. This manufacturing advantage will lessen as companies like Intel invest money in the high-end lithography machines able to produce these specialized AI chips. I wouldn't count on chips being the limiting factor in the speed of AI adoption. As per usual, we'd have to anticipate they will continue to become more capable, more available, and more affordable.
NothingVerySpecific t1_jb93eck wrote
Reply to comment by hassan789_ in What might slow this down? by Beautiful-Cancel6235
Sounds intriguing, got a link for the T nomenclature?
NothingVerySpecific t1_jb92tmn wrote
Reply to comment by visarga in What might slow this down? by Beautiful-Cancel6235
I understand some of those words
DragonForg t1_jb8wibd wrote
Reply to What might slow this down? by Beautiful-Cancel6235
The government cracking down on AI research for fear of AI. Kinda like the war on drugs and how it stifled research into shit like psychedelics.
MSB3000 t1_jb8o7u8 wrote
Reply to What might slow this down? by Beautiful-Cancel6235
We already can't align our AI systems, or any technology for that matter. Right now it's actually a very familiar problem; machines don't do what you intend, they do what they're made to do. And this is basically fine because as of right now, there is nothing smarter in the known universe than human beings, and so we're still in charge.
But when the machines gain more intelligence than humans? Actual alignment is a totally unsolved problem, so we really do need that solved before we inadvertently create a superintelligent chatbot.
[deleted] t1_jb8niwt wrote
Reply to comment by Silly_Awareness8207 in What might slow this down? by Beautiful-Cancel6235
I don't see how any other architecture would solve that problem, that's just an issue of how current LLMs are trained
sungokoo t1_jb8m9jf wrote
Reply to What might slow this down? by Beautiful-Cancel6235
Apparently a lack of data. There’s a paper that hasn’t been peer reviewed that states we may run out of good data to train AI by 2026
No_Ninja3309_NoNoYes t1_jb8lduv wrote
Reply to What might slow this down? by Beautiful-Cancel6235
There are many different types of roadblocks that could occur in varying degrees of likelihood:
-
Lack of data. Data has to be good and clean, and cleaning and manipulation take time. Purportedly, Google research claims that compute and data have a linear relationship, but I think they are wrong. Obviously this is more of a gut feeling, but IMO their conclusions were premature, based on too few data points, and self-serving.
-
Backprop might not scale. The thing is that you go down, or back, through the network to propagate errors and try to account for them. That's like that game some of you might have played where you whisper a word to someone and they pass it on. IMO this will not work for large projects.
-
Network latency. As you add more machines the latency and Amdahl's law will limit progress. And of course hardware failure, round-off errors, and overflow can occur.
-
Amount of information you can hold. Networks can compress information, but if you compress it too much you end up with bad results. There are exabytes of data on the Web. Processing it takes time, and at eight bytes or less per parameter you could in theory have an exascale-parameter model. However, IRL that isn't practical. Somewhere along the path, probably around ten trillion parameters, networks will stop growing.
-
Nvidia GPUs can do 9 teraflops. A trillion-parameter model would allow about nine evaluations per second. Training is orders of magnitude more intense. As the need for AI grows, supply and demand of compute will be mismatched. I mean, I was using three multi-billion-parameter models at the same time yesterday. And I was hungry for more. One of them was slow, the second gave insufficient output, and the third was hit and miss. If you upscale 10x, I think I would still want more.
-
Energy requirements. With billions of simultaneous requests per second, you'd need a huge solar farm; maybe as many as seven solar panels per GPU, depending on conditions.
-
Cost. GPUs could cost 40K each. Training GPT costs millions. With companies doing independent work, billions could be spent annually. Shareholders might prefer using the money elsewhere. It's not motivating for employees if the machines become the central part of a company.
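A quick sanity check of the throughput estimate in the list above (9 teraflops, trillion-parameter model, roughly nine evaluations per second). It only works out if you assume about one floating-point operation per parameter per forward pass; in practice inference is closer to two FLOPs per parameter (a multiply and an add), so halve the result for a tighter estimate:

```python
# Reproduce the comment's implicit arithmetic: evaluations per second
# = GPU throughput / (parameters * FLOPs per parameter).

GPU_FLOPS = 9e12        # 9 teraflops
PARAMS = 1e12           # trillion-parameter model
FLOPS_PER_PARAM = 1     # the comment's simplification; ~2 is more realistic

evals_per_second = GPU_FLOPS / (PARAMS * FLOPS_PER_PARAM)
print(evals_per_second)  # 9.0 forward passes per second
```

Either way, the order of magnitude holds: single-digit evaluations per second per GPU for a trillion-parameter model, which is the supply/demand mismatch the comment is pointing at.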
Wedongfury t1_jb8kb30 wrote
Reply to What might slow this down? by Beautiful-Cancel6235
I don't think anything can significantly slow down the advent of AGI. If the US regulates it, other countries won't; it's doomed to happen. Think of China and India: their standards of living are constantly rising, and at some point, combining China and India, for every Andrej Karpathy born in the US you will have 10 born in Asia.
nillouise t1_jb8h30n wrote
Reply to comment by angus_supreme in What might slow this down? by Beautiful-Cancel6235
China throwing a bomb into the DeepMind office could maybe do that.
raccoon8182 t1_jb8ajjk wrote
Reply to What might slow this down? by Beautiful-Cancel6235
Ironically, COVID sped up the development of AI. For once, we were in the comfort of our homes, not going into pointless meetings, and actually able to be productive.
Liberty2012 t1_jb88apt wrote
Reply to What might slow this down? by Beautiful-Cancel6235
As long as the hallucinations exist, it is going to fall far short of the current hype. There can be no "trusted" applications of such AIs. I expect the hallucination problem will be very difficult to solve; some are suggesting we may need entirely different architectures.
The other issue is that the bad and nefarious uses of AI are exploding and will be hard to contain. Hallucinations don't really hurt those cases; when you're scamming with fake information, they're not an inhibitor.
This creates an unfortunate imbalance with far more destructive uses than we would like and no clear means to control them. This may lead to a real public disaster in the terms of favorable opinions on future AI development.
Especially if the deep fakes explode during election season. AI is going to be seen as an existential crisis for truth and reason.
Manticor3Theoriginal t1_jb863w9 wrote
Reply to comment by MrGoodGlow in What might slow this down? by Beautiful-Cancel6235
The problem isn't us going full scorched-earth apocalypse, Geostorm-style, dude; it's the most delicate natural ecosystems collapsing, leading to the extinction of endangered animals. For example: due to a slight temperature increase, a little more topsoil in the African wilds is loosened, leading to dust storms that force rare species of lions toward extinction (nothing to do with supply chains or the economy). It doesn't mean that climate change is good, though. We should ALL try to vote for carbon-neutral policies and be even a little bit more eco-friendly.
MrGoodGlow t1_jb7t3md wrote
Reply to comment by imnotabotareyou in What might slow this down? by Beautiful-Cancel6235
Then you're blind. There have been more historic storms, floods, fires, and record-breaking (in both directions) weather events and other natural disasters in the last 2 years than in the last 20.
imnotabotareyou t1_jb7qj1y wrote
Reply to comment by MrGoodGlow in What might slow this down? by Beautiful-Cancel6235
Lmao no it’s not
imnotabotareyou t1_jb7qfc0 wrote
Reply to What might slow this down? by Beautiful-Cancel6235
Ain’t no brakes on this hype train
MrGoodGlow t1_jb7ov2m wrote
Reply to comment by DungeonsAndDradis in What might slow this down? by Beautiful-Cancel6235
You forgot the most likely and most pressing: climate change destroying our supply-chain capacity.
techhouseliving t1_jb7oku3 wrote
Reply to What might slow this down? by Beautiful-Cancel6235
Although it takes a supercomputer to initially train a model, it can run with a very small amount of memory and processing. Stable Diffusion needs only about 2 gigs of data yet can in theory create any 2D art conceivable. Similar for language models. It's the ultimate compression algorithm.
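The "2 gigs" figure checks out with simple arithmetic, assuming the commonly cited ballpark of about a billion parameters for Stable Diffusion's networks stored in half precision:

```python
# Why a text-to-image model fits in ~2 GB: parameter count times
# bytes per weight. The parameter count here is an approximation.

params = 1.0e9          # ~1 billion parameters (rough ballpark)
bytes_per_param = 2     # fp16 (half-precision) weights
size_gb = params * bytes_per_param / 1e9
print(size_gb)  # ~2 GB on disk
```

Quantizing to fewer bytes per weight shrinks it further, which is part of why these models run on consumer hardware at all.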
M1s and M2s are designed to run these models very efficiently, and those are pretty widely distributed.
MuseBlessed t1_jb7ngo8 wrote
Reply to What might slow this down? by Beautiful-Cancel6235
Government crackdown, maybe? Or maybe the physical ability to build machines remains, but an economic bottleneck emerges where a more powerful AI stops being worth the expended resources: diminishing returns.
BusinessDisruptorsYT t1_jb9jep4 wrote
Reply to comment by [deleted] in What might slow this down? by Beautiful-Cancel6235
Can you share the video link?