Recent comments in /f/singularity

Jeffy29 t1_jdylw27 wrote

>It’s like arguing that a plane isn’t a real bird or a car isn’t a real horse, or a boat isn’t a real fish. Nobody cares as long as the plane still flies, the car still drives and the boat still sails.

Precisely. It's an argument that brain-worm-infested people engage in on Twitter all day (not just about AI but a million other things as well), but nobody in the real world cares. They're just finding random reasons to get mad because they are too bored and comfortable in their lives, so they have to invent new problems to get mad at. Not that I don't engage in some of it as well; pointless internet arguments are addictive.

3

Jeffy29 t1_jdyl665 wrote

The original tweet is immensely dishonest and shows a poor understanding of science. Key advancements in science often come because the environment allowed them to happen. This notion that scientists sit in a room and have some brilliant breakthrough in a vacuum is pure fiction, and it's a really damaging stereotype because it causes young people not to pursue a career in science, thinking they could never come up with a brilliant idea. Even Einstein very likely would not have discovered special and general relativity if key advancements in astronomy in the late 19th century hadn't given us much more accurate data about the universe. I mean, look at the field of AI: do you think it's a coincidence that all these advancements came right as the physical hardware, the GPU, allowed us to test our theories? Of course not.

I do think a very early sign of ASI will be a model independently solving a long-standing and well-understood problem in science or mathematics, like one of the Millennium Prize Problems, but absolutely nobody is claiming AI as we have it now is anywhere near that. The person is being immensely dishonest, either to justify perpetuating hate or, more likely in this case, just grifting. There is a lot of money to be made if you take a stance on any issue and scream it loudly enough, regardless of how much it has to do with reality.

A personal anecdote from my life. I have a friend who is very, very successful; he is finishing up his PhD in computer science at one of the top universities in the world. He is actually not that keen on transformers or machine learning over massive amounts of data, which he finds a pretty dumb and inelegant approach. But a week ago we were discussing GPT-4, and I was of course gushing over it and saying how it will enable all these things. His opinion still hasn't changed, but at that moment he surprised me: he said that they've had access to GPT-3 for a long time through the university, and he and others have used it to brainstorm ideas, let it critique their research papers, discuss whether there was something they'd missed that should have been covered, etc. If someone so smart, at the bleeding edge of mathematics and computer science, finds this tool (GPT-3, no less) useful as an aid for their research, then you have absolutely no argument. Cope and seethe all day, but if this thing is useful in the real world, doing real science, then what is your problem? Yeah, it isn't Einstein; nobody said it was.

4

SoylentRox t1_jdyhulw wrote

Fine, let's spend a little effort debunking this:

From:

https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec

Intelligence is situational — there is no such thing as general intelligence.

This is empirically false and not worth debating. Current SOTA AI uses very, very simplistic algorithms and is general, and slight changes to the algorithm result in large intelligence increases.

This is so wrong that I won't bother with the rest of the claims; this author is unqualified.


From:

https://globalprioritiesinstitute.org/wp-content/uploads/David-Thorstad-Against-the-singularity-hypothesis.pdf

Extraordinary claims require extraordinary evidence

- You could have "debunked" nuclear fission in 1943 with this argument and sat comfortably, unworried, in the nice Japanese city of Hiroshima. Sometimes you're just wrong.

Good ideas become harder to find

This is true but misleading. We have many good ideas: fusion rocket engines, flying cars, genetic treatments to disable aging, nanotechnology. As it turns out, the implementation is insanely complicated and hard. Someday AI may do much better than us.

Bottlenecks

True but misleading. Each bottleneck can be reduced at an exponential rate. For example, if we actually had AGI right now, we'd be building as many robots and AI accelerator chips as we physically could, and also increasing the rate of production exponentially.
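To make the "exponentially increasing production" point concrete, here's a trivial compounding sketch. The doubling period and starting capacity are made-up illustrative numbers, not a forecast:

```python
# Illustrative only: if robot factories can build more robot factories,
# production capacity compounds instead of growing linearly.
capacity = 1.0  # arbitrary starting units of production capacity
for month in range(12):
    capacity *= 2  # assume capacity doubles each month (made-up rate)
print(capacity)  # -> 4096.0 after one year of doubling
```

The specific rate doesn't matter; the point is that any self-amplifying production loop reduces a fixed bottleneck faster than linear scaling would.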

Physical constraints

True but misleading; the solar system has a lot of resources. Growth will stop only when we have exhausted the entire solar system of accessible solid matter.


Sublinearity of intelligence growth from accessible improvements

True but again misleading. Even if intelligence gains are sublinear, we can make enormous brains, and there are many tasks, mentioned above, that we as individual humans are too stupid to make short-term progress on, so investors won't pay to develop them.

So even if an AGI system has 1 million times the computational power of a human being but is "only" 100 times smarter, working 24 hours a day it can still make working examples of many technologies possible on short timelines: figure out biology and aging in 6 months of frenetic round-the-clock experiments using millions of separate robots, figure out a fusion rocket engine by building 300,000 prototypes of fusion devices at various scales, and so on.

Human beings are not capable of doing this; no human alive can even hold in their head the empirical results of 300k engine builds and field geometries. So various humans have to "summarize" all the information, and they will get it wrong.

1

ArthurParkerhouse t1_jdyhmof wrote

It's not a good moniker to apply to LLMs or the other transformer-based architectures currently working on protein-folding algorithms. The thing is going to need to drop out of cyber high school, knock up a cyber girlfriend, and raise a cyber baby in a cyber trailer before I'll accept that they're proper AI.

8

Imherehithere t1_jdygf6l wrote

People in this sub already treat it as a certainty as a way of coping with their depressing lives. It gives them hope that their current way of life will change for the better with new advancements in AI technology.

5

flexaplext OP t1_jdydx39 wrote

Just hold it all in memory. My mental arithmetic and manipulation are actually rather decent, despite my not being able to visualise any of it. You find that this applies to most people with aphantasia. There are lots of interesting things about it if you search and read up on people's perceptions and experiences of it.

It's strange to describe.

Because I know exactly what something like a graph looks like, without being able to visualise it. Just by holding all the information about a graph in memory. I can manipulate that information by simply changing the information of the graph.

However, this ability does break down with more complex systems. If I try to hold an entire chess board in memory and manipulate it, I just fail completely. It's too much information for me to keep in memory and work out accurately without a visual aid.

2

iamtheonewhorox t1_jdyclah wrote

The primary argument that LLMs are "simply" very sophisticated next word predictors misses the point on several levels simultaneously.

First, there's plenty of evidence that that's more or less just what human brain-minds "simply" do. Or at least, a very large part of the process. The human mind "simply" heuristically imputes all kinds of visual and audio data that is not actually received as signal. It fills in the gaps. Mostly, it works. Sometimes, it creates hallucinated results.

Second, the most advanced scientists working on these models in the field are clear that they do not know how they work. There is a definite black-box quality where the process of producing the output is "simply" unknown and possibly unknowable. There is an emergent property to the process and the output that is not directly related to the base function of next word prediction... just as the output of human minds is not a direct property of their heuristic functioning. There is a process of dynamic, self-organizing emergence at play that is not a "simple" input-output function.

Anyone who "simply" spends enough time with these models and pushes their boundaries can observe this. But if you "simply" take a reductionist, deterministic, mechanistic view of a system that is none of those things, you are "simply" going to miss the point.
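For anyone unsure what the "simply a next word predictor" framing even means, here is the most reductive possible version of it: a bigram counter over a tiny made-up corpus (the corpus and names are illustrative only). The point above is precisely that the interesting behaviour of LLMs is *not* captured by anything this simple:

```python
# A toy "next word predictor": a bigram model trained by counting
# which word follows which in a tiny made-up corpus. A deliberately
# reductive illustration of the phrase, not how real LLMs work.
from collections import Counter, defaultdict

corpus = ("the plane still flies and the car still drives "
          "so the plane still flies").split()

# Count every (previous word -> next word) pair in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed continuation of `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("still"))  # -> "flies" (seen twice, vs "drives" once)
```

A model like this really is "simply" an input-output lookup; the emergent, black-box behaviour described above is what separates LLMs from such a caricature.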

3