Recent comments in /f/singularity
Jeffy29 t1_jdylw27 wrote
Reply to comment by EnomLee in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
>It’s like arguing that a plane isn’t a real bird or a car isn’t a real horse, or a boat isn’t a real fish. Nobody cares as long as the plane still flies, the car still drives and the boat still sails.
Precisely. It's the kind of argument that brain-worm-infested people engage in on Twitter all day (not just about AI but a million other things as well), but nobody in the real world cares. They're just finding random reasons to get mad because they are too bored and comfortable in their lives, so they have to invent new problems to get mad at. Not that I don't engage in it sometimes as well; pointless internet arguments are addicting.
Jeffy29 t1_jdyl665 wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
The original tweet is immensely dishonest and shows a poor understanding of science. Key advancements in science often come because the environment allowed them to happen. The notion that scientists sit in a room and have brilliant breakthroughs in a vacuum is pure fiction, and a really damaging stereotype, because it discourages young people from pursuing a career in science when they think they can't come up with any brilliant ideas. Even Einstein very likely would not have discovered special and general relativity if key advancements in astronomy in the late 19th century had not given us much more accurate data about the universe. I mean, look at the field of AI: do you think it's a coincidence that all these advancements came right as the physical hardware, the GPU, allowed us to test our theories? Of course not.
I do think a very early sign of ASI will be a model independently solving a long-standing and well-understood problem in science or mathematics, like one of the Millennium Prize Problems, but absolutely nobody is claiming AI as we have it now is anywhere near that. The person is being immensely dishonest, either to justify perpetuating hate or, more likely in this case, just grifting. There is a lot of money to be made if you take a stance on any issue and scream it loudly enough, regardless of how much it has to do with reality.
A personal anecdote from my life. I have a friend who is very, very successful; he is finishing up his PhD in computer science at one of the top universities in the world. He is actually not that keen on transformers or machine learning from massive amounts of data, which he finds a pretty dumb and inelegant approach. But a week ago we were discussing GPT-4, and I was of course gushing over it and saying what it would make possible. His opinion still hasn't changed, but at that moment he surprised me: he said that they've had access to GPT-3 for a long time through the university, and he and others have used it to brainstorm ideas, let it critique their research papers, discuss whether there is something they missed that they should have covered, etc. If someone so smart, at the bleeding edge of mathematics and computer science, finds this tool (GPT-3, no less) useful as an aid for their research, then you have absolutely no argument. Cope and seethe all day, but if this thing is useful in the real world doing real science, then what is your problem? Yeah, it isn't Einstein; nobody said it was.
Frumpagumpus t1_jdyjjkk wrote
Reply to LLMs are not that different from us -- A delve into our own conscious process by flexaplext
you have reasoned enough, it's time to go read source code and get something running
you can ask chatgpt to guide you
killcon13 t1_jdyiprd wrote
This is awesome! I can't wait to see where this tech is in ten years.
sachos345 t1_jdyikhd wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Goalpost moving or not, that's actually a really cool experiment. Too bad I don't think we have enough data from before year X to prove it. I always thought it would be amazing if we could somehow make an AI derive General Relativity by itself, imagine that.
the_new_standard t1_jdyigxl wrote
Reply to comment by acutelychronicpanic in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
"You don't understand, that completely original invention was just part of it's training dataset."
I-Stand-Unshaken t1_jdyi3w6 wrote
Reply to comment by Art_from_the_Machine in Talking to Skyrim VR NPCs via ChatGPT & xVASynth by Art_from_the_Machine
Thank you so much for this. Gaming is my #1 hobby, so this means a lot to me. I don't want it to get weird, but this is a great step towards something I've always dreamed of.
SoylentRox t1_jdyhulw wrote
Reply to Singularity is a hypothesis by Gortanian2
Fine, let's spend a little effort debunking this:
From:
https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec
Intelligence is situational — there is no such thing as general intelligence.
This is empirically false and not worth debating. Current SOTA AI uses very, very simplistic algorithms and is nonetheless general, and slight changes to the algorithm result in large intelligence increases.
This is so wrong that I will not bother with the rest of the claims; this author is unqualified.
From:
Extraordinary claims require extraordinary evidence
- You could have "debunked" nuclear fission in 1943 with this argument and sat comfortably, unworried, in the nice Japanese city of Hiroshima. Sometimes you're just wrong.
Good ideas become harder to find
This is true but misleading. We have many good ideas, like fusion rocket engines, flying cars, genetic treatments to disable aging, and nanotechnology. As it turns out, the implementation is insanely complicated and hard. AI may someday do much better than us at that.
Bottlenecks
True but misleading. Each bottleneck can be reduced at an exponential rate. For example, if we actually had AGI right now, we'd be building as many robots and AI accelerator chips as we physically could, and also increasing the rate of production exponentially.
Physical constraints
True but misleading: the solar system has a lot of resources. Growth will stop when we have exhausted the entire solar system of accessible solid matter.
Sublinearity of intelligence growth from accessible improvements
True but again misleading. Even if intelligence gains are sublinear, we can make enormous brains, and there are many tasks, mentioned above, that we as individual humans are too stupid to make short-term progress on, so investors won't pay to develop them.
So even if an AGI system has 1 million times the computational power of a human being but is "only 100 times" as smart, working 24 hours a day it can still make it possible to build working examples of many technologies on short timelines. Figure out biology and aging in 6 months of frenetic round-the-clock experiments using millions of separate robots. Figure out a fusion rocket engine by building 300,000 prototypes of fusion devices at various scales. And so on.
Human beings are not capable of doing this; no human alive can even hold in their head the empirical results of 300k engine builds and field geometries. So various humans have to "summarize" all the information, and they will get it wrong.
ArthurParkerhouse t1_jdyhmof wrote
Reply to comment by Yuli-Ban in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
It's not a good moniker to be applied to LLMs or other transformer-based architectures currently working with protein folding algorithms. The thing is going to need to drop out of cyber high school and knock up a cyber girlfriend and raise a cyber baby in a cyber trailer before I'll accept that they're proper AI.
featherless_fiend t1_jdygszy wrote
Reply to comment by Azuladagio in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
It's the art generators debate all over again.
Imherehithere t1_jdygf6l wrote
Reply to Singularity is a hypothesis by Gortanian2
People in this sub already treat it as a certainty as a way of coping with their depressing life. It gives them hope that their current way of life will change for the better with new advancements in ai technology.
DeathGPT t1_jdyey11 wrote
Reply to comment by EnomLee in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
It’s a bird, it’s a plane, no it’s DAN!
Koda_20 t1_jdyeuo5 wrote
Reply to comment by RadRandy2 in Talking to Skyrim VR NPCs via ChatGPT & xVASynth by Art_from_the_Machine
Enjoy it while you can before society crumbles
SpacemanCraig3 t1_jdyerfo wrote
Reply to comment by Koda_20 in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
If that's true, it seems unlikely that those people do either.
Koda_20 t1_jdyedg9 wrote
Reply to comment by phillythompson in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
I think most of these people are just having a hard time explaining that they don't think the machine has an inner conscious experience.
flexaplext OP t1_jdydx39 wrote
Reply to comment by Yomiel94 in LLMs are not that different from us -- A delve into our own conscious process by flexaplext
Just hold it all in memory. My mental arithmetic and manipulation are actually rather decent, despite my not being able to visualise anything. You find that this applies to most people with aphantasia. There are lots of interesting things about it if you search and read up on people's perceptions and experiences of it.
It's strange to describe.
Because I know exactly what something like a graph looks like, without being able to visualise it. Just by holding all the information about a graph in memory. I can manipulate that information by simply changing the information of the graph.
However, this ability does break down with more complex systems. If I try and hold an entire chess board in memory and manipulate it, I just fail completely. It's too much information for me to keep in memory and work out accurately without a visual aid.
DaffyDuck t1_jdyd32h wrote
Reply to comment by [deleted] in AGI will only experience the world in 0s and 1s by [deleted]
You’re being overly reductive. Move up a level and look at it as a neural network that is using weights and biases. It’s not exactly the same but is still fairly similar to how neurons in the brain are believed to function.
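To make the "weights and biases" level concrete: a single artificial neuron is just a weighted sum of its inputs plus a bias, passed through a nonlinearity. A minimal sketch (the specific weights and the sigmoid activation here are illustrative choices, not anything claimed in the thread):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum of inputs plus a bias,
    squashed through a sigmoid activation to a value in (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid nonlinearity

# Example with two inputs and made-up weights:
# z = 1.0*0.6 + 0.5*(-0.4) + 0.1 = 0.5, sigmoid(0.5) ≈ 0.622
out = neuron([1.0, 0.5], [0.6, -0.4], 0.1)
print(round(out, 3))  # → 0.622
```

This is the loose analogy the comment is pointing at: a biological neuron also combines many weighted input signals and fires nonlinearly, though the resemblance is coarse rather than exact.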
bcuziambatman t1_jdycsly wrote
Reply to comment by Bakagami- in Talking to Skyrim VR NPCs via ChatGPT & xVASynth by Art_from_the_Machine
Woke up on the angsty side of the teenage bed today huh
iamtheonewhorox t1_jdyclah wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
The primary argument that LLMs are "simply" very sophisticated next word predictors misses the point on several levels simultaneously.
First, there's plenty of evidence that that's more or less just what human brain-minds "simply" do. Or at least, a very large part of the process. The human mind "simply" heuristically imputes all kinds of visual and audio data that is not actually received as signal. It fills in the gaps. Mostly, it works. Sometimes, it creates hallucinated results.
Second, the most advanced scientists working on these models are clear that they do not know how they work. There is a definite black-box quality where the process of producing the output is "simply" unknown and possibly unknowable. There is an emergent property to the process and the output that is not directly related to the base function of next word prediction, just as the output of human minds is not a direct property of their heuristic functioning. There is a process of dynamic, self-organizing emergence at play that is not a "simple" input-output function.
Anyone who "simply" spends enough time with these models and pushes their boundaries can observe this. But if you "simply" take a reductionist, deterministic, mechanistic view of a system that is none of those things, you are "simply" going to miss the point.
stevenbrown375 t1_jdyb56p wrote
Reply to comment by Azuladagio in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Any controversial belief that’s just widespread enough to create an exclusive in-group will get its cult.
Sigma_Atheist t1_jdyb251 wrote
Reply to comment by banuk_sickness_eater in "Non-AGI systems can possibly obsolete 80% of human jobs"-Ben Goertzel by Neurogence
No it's a real issue. If these LLMs aren't good enough to replace all jobs, but do replace a lot, then there will be mass unemployment and rioting.
Azuladagio t1_jdyay0w wrote
Reply to comment by Evan_jansen in AGI will only experience the world in 0s and 1s by [deleted]
It won't be human because it ain't human. Made by humans, but not human.
anti-nadroj t1_jdyaucl wrote
Reply to comment by Iffykindofguy in AGI will only experience the world in 0s and 1s by [deleted]
I don’t, that’s why I wanted to know everyone’s perspective on it.
eve_of_distraction t1_jdymcst wrote
Reply to comment by DaCosmicHoop in Singularity is a hypothesis by Gortanian2
The least optimistic scenarios are significantly less optimistic than that. 😬