Recent comments in /f/singularity

CertainMiddle2382 t1_jb9gdce wrote

Hmm, it's not as if 2023 is only a little bit unlike 2020, AI-wise.

The very concept of the singularity is self-improving AI pushing into ASI.

I don’t get how you can trivialize an LLM seemingly starting to show competency in the very programming language it is written in.

What particular new capability of an AI would impress you more and show that things are accelerating?

I believe humans get desensitized very quickly; shown an ASI doing beyond-Standard-Model physics, they will still manage to say: so what? I've been expecting more for at least 6 months…

4

ahtoshkaa2 t1_jb9czan wrote

Same, haha. Thank god for ChatGPT:

The comment is referring to two different machine learning concepts: back-propagation and meta-back-propagation, and how they can be used to modify neural networks.

Back-propagation is a supervised learning algorithm used in training artificial neural networks. It is used to modify the weights and biases of the neurons in the network so that the network can produce the desired output for a given input. The algorithm calculates the error between the predicted output and the actual output, then uses gradient descent to adjust the weights and biases accordingly.
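For concreteness, here is a minimal sketch of back-propagation with gradient descent in plain NumPy. The network shape, learning rate, and toy data are all illustrative, not from any particular library:

```python
import numpy as np

# Minimal one-hidden-layer network trained with back-propagation.
# Shapes, learning rate, and data are illustrative only.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                          # 100 samples, 3 features
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # toy target

W1, b1 = rng.normal(size=(3, 8)) * 0.1, np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)) * 0.1, np.zeros((1, 1))
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    # Forward pass: compute the network's prediction.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # Error between predicted output and actual output.
    err = pred - y

    # Backward pass: propagate the error, layer by layer.
    d_pred = err * pred * (1 - pred)       # gradient at the output
    d_h = (d_pred @ W2.T) * h * (1 - h)    # gradient at the hidden layer

    # Gradient-descent update of weights and biases.
    W2 -= lr * h.T @ d_pred / len(X)
    b2 -= lr * d_pred.mean(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0, keepdims=True)
```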

Meta-back-propagation is an extension of back-propagation that is used for meta-learning, which is learning to learn. It involves modifying the neural network so that it can learn to perform novel tasks more efficiently.

The comment also mentions using evolutionary techniques to cultivate a population of models built on language models trained on code. This refers to using genetic algorithms to evolve a population of neural networks, where the best-performing networks are selected and combined to create new generations. This approach is known as evolution through large models.
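And a minimal toy sketch of the evolutionary idea: select the best-performing weight vectors, then combine and mutate them to form the next generation. The fitness function and hyperparameters here are made up for illustration:

```python
import numpy as np

# Toy genetic algorithm: evolve a population of "models" (weight vectors)
# toward a target. Fitness function and hyperparameters are illustrative.
rng = np.random.default_rng(0)
target = rng.normal(size=16)

def fitness(w):
    # Higher is better: negative distance to the target weights.
    return -np.linalg.norm(w - target)

population = [rng.normal(size=16) for _ in range(32)]
for generation in range(200):
    # Select the best-performing half of the population.
    population.sort(key=fitness, reverse=True)
    parents = population[:16]
    # Combine (crossover) and mutate parents to create the next generation.
    children = []
    for _ in range(16):
        a, b = rng.choice(16, size=2, replace=False)
        mask = rng.random(16) < 0.5                       # uniform crossover
        child = np.where(mask, parents[a], parents[b])
        child = child + rng.normal(scale=0.05, size=16)   # mutation
        children.append(child)
    population = parents + children

print("best fitness:", fitness(population[0]))
```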

7

ManosChristofakis t1_jb9b8fc wrote

  1. AI alignment. If a large-scale attack is launched that tries to interfere with US nukes, you can bet your ass that AI will disappear from everyday life overnight. Obviously we don't have to get to such extreme cases for AI to be regulated, or to straight up never leave the lab in the first place.

  2. Human alignment. If AI progresses so fast that everyone loses their jobs, businesses won't have any customers at all and will all go bankrupt, including the AI-making businesses themselves.

  3. Lack of training data, obviously.

  4. In case our hardware has reached, or is close to reaching, its efficiency limit, providing more computational capacity might require more hardware that is less efficient to use, making computational power increase linearly instead of exponentially (in such a case, cost might also increase on par with, or faster than, computational power).

  5. Limits of current architectures. Problems like hallucination. I also read a paper saying that LLMs shape their output to match the prompt given by the user; that is, they will reply like a neuroscientist to a neuroscientist or like a philosopher to a philosopher. This may limit many of their uses in places like healthcare, because biases, and people not knowing what they are talking about, can make the AI reach wrong conclusions. There may be other limitations which I, and scientists themselves, aren't aware of yet.

  6. Costs. Obviously it takes a lot to buy and maintain the infrastructure: cloud, GPUs, electricity, and training are all significant costs right now with current LLMs, which have parameters in the billions and deal only with text, but these costs are doable. Imagine if we try to create a multimodal AI that does the job of an engineer. It would require years or decades of training (because you can't speed things up by cramming decades of human-time experience into days on a PC), would maybe require hundreds of trillions (if not quadrillions) of parameters, and would probably have to process information in real time, which would probably be very expensive. You would also have to pay for and maintain its robot body and accommodating infrastructure. There are probably limits even with current LLMs. Current LLMs bill you per token of the model's reply as well as its context. The best current LLMs have thousands of words of context, and right now every few replies probably cost pennies (or less). But if you try to create an LLM that carries a context of millions of words (for example a personal assistant or a robot friend), the cost of every single reply, let alone continual replies, will be prohibitive; see the rough cost sketch after this list. This, assuming that these things are even possible.
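As a back-of-the-envelope illustration of point 6, here is a minimal Python sketch of per-reply context costs. The price, token ratio, and reply length are placeholders, not any provider's real rates:

```python
# Rough cost model for per-reply LLM pricing: you pay for the context
# (everything the model re-reads) plus the reply, per token.
# All numbers are illustrative placeholders, not real provider rates.

PRICE_PER_1K_TOKENS = 0.002   # hypothetical $/1K tokens
TOKENS_PER_WORD = 1.3         # rough English average

def cost_per_reply(context_words, reply_words=200):
    tokens = (context_words + reply_words) * TOKENS_PER_WORD
    return tokens / 1000 * PRICE_PER_1K_TOKENS

# Today's scale: a few thousand words of context -> fractions of a cent.
print(f"3K-word context: ${cost_per_reply(3_000):.4f} per reply")
# A 'robot friend' remembering millions of words -> dollars per reply,
# because the whole context is re-processed on every single turn.
print(f"2M-word context: ${cost_per_reply(2_000_000):.2f} per reply")
```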

3

QuantumPossibilities t1_jb9apct wrote

The first 5nm chip in production was designed by a US company, Marvell. They are a fabless designer and yes, they rely on TSMC and others as they diversify production. TSMC has understood that its advantage is in production and has gone out of its way not to compete with the companies it manufactures products for. This manufacturing advantage will lessen as companies like Intel invest in the high-end lithography machines able to produce these specialized AI chips. I wouldn't count on chips being the limiting factor in the speed of AI adoption. As per usual, we should anticipate that they will continue to become more capable, more available, and more affordable.

6

MSB3000 t1_jb8o7u8 wrote

We already can't align our AI systems, or any technology for that matter. Right now it's actually a very familiar problem: machines don't do what you intend, they do what they're made to do. And this is basically fine because, as of right now, there is nothing smarter in the known universe than human beings, so we're still in charge.

But when the machines become more intelligent than humans? Actual alignment is a totally unsolved problem, so we really do need it solved before we inadvertently create a superintelligent chatbot.

3

No_Ninja3309_NoNoYes t1_jb8lduv wrote

There are many different types of roadblocks that could occur, with varying degrees of likelihood:

  1. Lack of data. Data has to be good and clean, and cleaning and manipulation take time. Google research purportedly claims that compute and data have a linear relationship, but I think they are wrong. Obviously this is more of a gut feeling, yet IMO their conclusions were premature, based on too few data points, and self-serving.

  2. Backprop might not scale. The thing is that you go down, or back, through the network to propagate errors and try to account for them. It's like that game some of you might have played, where you whisper a word to someone and they pass it on. IMO this will not work for very large projects.

  3. Network latency. As you add more machines, latency and Amdahl's law will limit progress. And of course hardware failures, round-off errors, and overflow can occur.

  4. The amount of information you can hold. Networks can compress information, but if you compress it too much you end up with bad results. There are exabytes of data on the Web. Processing it takes time, and at eight bytes or less per parameter you could in theory have an exa-parameter model. In reality that isn't practical; somewhere along the path, probably around ten trillion parameters, networks will stop growing.

  5. Nvidia GPUs can do about 9 teraflops. A trillion-parameter model would then allow about nine evaluations per second (see the rough sketch after this list). Training is orders of magnitude more intense. As the need for AI grows, supply and demand for compute will be mismatched. I mean, I was using three multi-billion-parameter models at the same time yesterday, and I was hungry for more. One of them was slow, the second gave insufficient output, and the third was hit and miss. If you upscale 10x, I think I would still want more.

  6. Energy requirements. With billions of simultaneous requests a second, you'd need huge solar farms: maybe as many as seven solar panels per GPU, depending on conditions.

  7. Cost. GPUs can cost $40K each. Training GPT cost millions. With companies doing independent work, billions could be spent annually. Shareholders might prefer using the money elsewhere, and it's not motivating for employees if machines become the central part of a company.
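A rough Python sketch of the arithmetic behind points 3 and 5 above. The FLOPs-per-parameter figure and the serial fraction are assumptions for illustration, not measured values:

```python
# Back-of-the-envelope numbers for points 3 and 5 above.
# Both constants below are assumptions, not measurements.

GPU_FLOPS = 9e12        # ~9 teraflops, as claimed above
PARAMS = 1e12           # a trillion-parameter model
FLOPS_PER_PARAM = 1     # the ~9 evals/sec figure assumes 1; ~2 is more typical

evals_per_sec = GPU_FLOPS / (PARAMS * FLOPS_PER_PARAM)
print(f"forward passes per second on one GPU: {evals_per_sec:.1f}")

# Amdahl's law: if a fraction s of the work is serial, adding machines
# can never speed things up by more than 1/s.
def amdahl_speedup(n_machines, serial_fraction=0.05):
    return 1 / (serial_fraction + (1 - serial_fraction) / n_machines)

for n in (2, 8, 64, 1024):
    print(f"{n:5d} machines -> {amdahl_speedup(n):.1f}x speedup")
```

With even a 5% serial fraction, 1024 machines give only about a 20x speedup, which is the diminishing-returns point in item 3.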

3

Wedongfury t1_jb8kb30 wrote

I don't think anything can significantly slow down the advent of AGI. If the US regulates it, other countries won't; it's bound to happen. Think of China and India: their standards of living are constantly rising, and at some point, combining China and India, for every Andrej Karpathy born in the US you will have ten born in Asia.

1

Liberty2012 t1_jb88apt wrote

As long as the hallucinations exist, it is going to fall far short of the current hype. There can be no "trusted" applications of such AIs. I expect the hallucination problem will be very difficult to solve; some are suggesting we may need entirely different architectures.

The other issue is that the bad and nefarious uses of AI are exploding and are going to be hard to contain. Hallucinations don't really hurt such cases: when you are scamming with fake information, hallucination is not an inhibitor.

This creates an unfortunate imbalance, with far more destructive uses than we would like and no clear means to control them. It may lead to a real public disaster in terms of favorable opinions on future AI development.

Especially if deepfakes explode during election season. AI is going to be seen as an existential crisis for truth and reason.

1

Manticor3Theoriginal t1_jb863w9 wrote

The problem isn't us going full scorched-earth, geostorm-style apocalypse, dude; it's the most delicate natural ecosystems collapsing, leading to the extinction of endangered animals. For example: due to a slight temperature increase, a little more topsoil in the African wilds is loosened, leading to dust storms that force rare species of lions toward possible extinction (nothing to do with supply chains or the economy). That doesn't mean climate change is fine, though. We should ALL try to vote for carbon-neutral policies and be even a little bit more eco-friendly.

8

techhouseliving t1_jb7oku3 wrote

Although it takes a supercomputer to initially train a model, it can run with a very small amount of memory and processing. Something like 2 GB of weights is all Stable Diffusion needs, and it can in theory create any 2D art conceivable. Similar for language models. It's the ultimate compression algorithm.

Apple's M1s and M2s are designed to run these models very efficiently, and those are pretty widely distributed.
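As a concrete (hedged) sketch of running such a model locally, assuming the Hugging Face diffusers library and an Apple-silicon Mac; the checkpoint ID and prompt are just examples:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load Stable Diffusion with half-precision weights (roughly 2 GB)
# and run it on Apple silicon's GPU via the "mps" backend.
# The checkpoint below is one common example, not the only option.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("mps")  # Metal backend on M1/M2 Macs

image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")
```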

2

MuseBlessed t1_jb7ngo8 wrote

Government crackdown, maybe? Or maybe the physical ability to build the machines holds up, but there's an economic bottleneck where a more powerful AI stops being worth the expended resources: diminishing returns.

1