Recent comments in /f/singularity

Talkat t1_ja5oo3h wrote

If we are doing brains en masse, that is very different from a handful of ICU patients.

The brain would need to be fed a stream of nutrients and hormones, along with oxygenated blood.

Everything could be recycled locally.

So we are talking more like an automated fish tank plus the chemicals needed to keep it running than a hospital.

So long term, a few bucks per day.

2

Nervous-Newt848 t1_ja5miqj wrote

First off... Very interesting... But just so you know, that wouldn't be a language model anymore

They don't really have a term for that other than multimodal... A multimodal world model???

Models can't speak or hear whenever they want to... It's just not part of their programming

They respond to input

So if they are receiving continuous input... Theoretically they should be continuously outputting...

The whole conversation history could be saved into a database

Reward models are currently given texts with scores assigned by humans... It's called RLHF, or Reinforcement Learning from Human Feedback... AI doesn't do the scoring... That's for language models though...

How could they know what is good and what is bad???
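In code terms, that first step is basically just fitting a model to imitate the human scores... Here is a toy PyTorch sketch of the idea (the `RewardModel` class and `human_scores` data are made up for illustration, this is not anyone's real pipeline):

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps a text embedding to a single scalar 'how good is this?' score."""
    def __init__(self, dim=128):
        super().__init__()
        self.head = nn.Linear(dim, 1)

    def forward(self, text_embedding):
        return self.head(text_embedding)

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-ins: embeddings of model outputs, and the scores humans assigned them
embeddings = torch.randn(32, 128)
human_scores = torch.rand(32, 1)

predicted = model(embeddings)
loss = nn.functional.mse_loss(predicted, human_scores)  # learn to imitate human judgment
loss.backward()
optimizer.step()
```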

Now for world models, reinforcement learning works differently... Which is probably what you're referring to... I won't go into it because it's pretty complex...

Updating its weights continuously is currently impractical due to an energy inefficiency problem with the von Neumann hardware architecture... Basically, traditional CPUs and GPUs... More basically, it requires too many computations and too much electricity to continuously "backpropagate" (a data science word) over incoming data...

Conversations shouldn't be encoded into a language model's weights either, imo... Because of "hallucinations" it may make up some things that didn't happen

Querying a database of old conversations is better and will always be more accurate
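Something like this toy SQLite sketch is all I mean (the table and column names are invented for illustration):

```python
import sqlite3

db = sqlite3.connect("conversations.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS messages "
    "(id INTEGER PRIMARY KEY, role TEXT, content TEXT)"
)

def remember(role, content):
    """Append a conversation turn to the permanent log."""
    db.execute("INSERT INTO messages (role, content) VALUES (?, ?)", (role, content))
    db.commit()

def recall(keyword, limit=5):
    """Fetch past turns verbatim -- retrieval can't hallucinate."""
    cur = db.execute(
        "SELECT role, content FROM messages WHERE content LIKE ? "
        "ORDER BY id DESC LIMIT ?",
        (f"%{keyword}%", limit),
    )
    return cur.fetchall()

remember("user", "My dog is named Biscuit.")
print(recall("dog"))  # [('user', 'My dog is named Biscuit.')]
```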

For an AGI to truly be AGI, by definition it needs to be able to learn any task... This is currently possible server side through manual backpropagation... But it is not possible continuously, the way human brains work...

Humans continuously learn...

An AI neural network learns by being fed data, usually kicked off manually from a command line interface... This is called "training"... Data science terminology

An AI neural network model is then "deployed," aka loaded and run on a single GPU or several, depending on model size... When a language model is running it is said to be in "inference mode"... More terminology
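In PyTorch terms, the split looks roughly like this (a toy stand-in model, just to show the two modes):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # stand-in for a real network

# --- training: backpropagation updates the weights ---
model.train()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(4, 10), torch.randn(4, 2)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()      # compute gradients
optimizer.step()     # update weights

# --- inference ("deployed"): weights frozen, we only read outputs ---
model.eval()
with torch.no_grad():  # no gradients, so no learning
    prediction = model(torch.randn(1, 10))
print(prediction)
```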

We need an entirely different hardware architecture in order to run AI neural networks in training and inference mode simultaneously...

Photonics or neuromorphic computing, perhaps a combination of both... Those seem like the way forward, in my opinion

2

CypherLH t1_ja5mgs5 wrote

Well presumably humans and animals ARE first labelling/categorizing, but it happens at a very low level... our higher brain functions then act on that raw data. You still need that lower-level base image recognition functionality to be in place, though. Presumably AI could do something similar: have a higher-level model that takes input from a lower-level base image recognition model.

From an AI/software perspective that base image recognition functionality will be extremely useful once inference costs come down.
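As a crude sketch of that two-level setup (toy networks, no particular architecture implied):

```python
import torch
import torch.nn as nn

low_level = nn.Sequential(            # base image recognition (frozen)
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 100),      # 100 object categories
    nn.Softmax(dim=-1),
)
low_level.requires_grad_(False)       # the recognizer itself stays fixed

high_level = nn.Sequential(           # acts on the recognized categories
    nn.Linear(100, 64),
    nn.ReLU(),
    nn.Linear(64, 10),                # e.g. 10 possible decisions/actions
)

image = torch.randn(1, 3, 32, 32)     # fake camera frame
categories = low_level(image)         # "what is in the scene?"
decision = high_level(categories)     # "what should we do about it?"
print(decision.shape)                 # torch.Size([1, 10])
```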

2

play_yr_part t1_ja5ly4s wrote

This point is exactly why I'm so wary of AI being used as a crutch in rapid time, rather than as a tool for gradual improvement. I'm very sceptical of the benefits of this new paradigm we're about to be ushered into, and that's not even considering the possibility of a rogue AI/paperclip maximiser.

Gen Z are the most depressed generation, despite having free/cheap access to the collective cultural output of humanity, more freedom to love who they want and choose what path they want in life than ever before, better working conditions than at any point in human history, and the cleanest air since before the industrial revolution. Yet vast swathes of Gen Z and other generations are fucking miserable, because the medium that is supposed to make us better connected is making us more atomised, lonely, and fearful.

That's suddenly going to change for the better when most jobs have been eliminated? We're all going to live fulfilled lives, go jet skiing, and have peace and harmony when our government and/or benevolent AI overlord pays us our NEETbux?

1

RedguardCulture t1_ja5kwey wrote

My guess, based on what Sam and OpenAI have said about robotics and their views on compute and scaling, is that the moment we have robotics of that capability, we're probably in a post-ASI world to begin with. As in, all our scientific discovery at that point is being done by intelligences far smarter than the whole of humanity. I say this because I get the impression that OpenAI thinks you're going to need a very powerful AI to solve the problem of how to stuff a model that requires a server room of bleeding-edge GPUs, and the power to run it, into a robot chassis or a car. I'm reminded of how constrained the size of DeepMind's Gato was because the model had to be able to run in real time on a robot hand, for example.

Anyway, I think this is why Sam said in a past interview that he would sound like a crazy person if he started talking about what the world could be like if we get AI right. Harnessing a magnitude of intelligence that supersedes whatever humans could ever hope to achieve with their biological brains means a lot of scientific advances, and problems that aren't hard-constrained by physics, could be solvable overnight.

2

whatsup5555555 t1_ja5jmyn wrote

So you’re in favor of half of your “team” having a different political leaning than your own? It’s easy to say you want a culturally diverse team; it’s another thing to actually assemble one. It’s easy to pick people based on surface-level features like skin color, but it’s much more difficult to balance political ideology, hence the clear bias the AI already exhibits. The tech industry is already heavily left-leaning, but I guess no one cares as long as your bias is the one winning. So keep fighting for your skewed view of equality!

5

whatsup5555555 t1_ja5hqkt wrote

Hahahahahah, “can’t make this shit up.” Please elaborate on how “idiot” or “fuck tard” is discriminatory toward a group of people. People like you are an absolute joke to everyone who doesn’t exist in your overly sensitive liberal bubble of extreme intolerance to any opinions outside your clown bubble of acceptance. So again I say: hahahah, you are a complete joke. Go cry in your safe space and continue to enjoy the smell of your own flatulence.

2

green_meklar t1_ja5ga1d wrote

You're definitely not the only one feeling that way. I totally understand where you're coming from and I think this is something a lot of people are going to have to face over the next few years, one way or another.

What the ultimate solution will be, I don't know. But for now, I suspect the healthy approach is to redefine your standards for success. Stop measuring the value of making games (or software in general, or anything in general) in terms of what you produce, and start measuring it in terms of what you achieve and how well you express yourself creatively. All the best games might be made by AI, but your game will still be the one you make yourself, even if some of the work you do feels redundant. So focus on that part and make that your goal. No one can express your own personal creativity better than you can.

We already have examples of this in other domains. Chess AIs have been playing at a superhuman level for over 20 years, but people still get satisfaction out of learning and playing Chess. People still paint pictures even though we have cameras that can take perfect full-color photographs. You'll never run a kilometer faster than Usain Bolt, or grow a garden better than the Gardens of Versailles, or write a better novel than Lord of the Rings, but that doesn't mean there isn't something for you to personally achieve in running, gardening, or writing. Hopefully programming can be like that too.

1

Dreikesehoch t1_ja5fjub wrote

I know, I read that. But I said that what we have now isn’t just “not quite there yet”. It’s a totally different thing from what it should be. Animals don’t do scene or object recognition (i.e. labelling). Animals simulate actions on the visual stimuli to infer what actions they can apply to their surroundings, physically or virtually, and only after that might there be some symbolic labelling. Like when you look at a door: you don’t primarily see a door, you infer a list of actions that are applicable to the geometric manifold that a door represents. You might act on the door by opening it or closing it without even consciously thinking about the door. When you focus on it, you can classify it as a door through the set of applicable actions. I am sure you can relate. There is some very interesting educational content about this on YouTube.
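A toy sketch of that idea in code, purely illustrative (the `PerceivedObject` class and the action names are made up):

```python
from dataclasses import dataclass, field

@dataclass
class PerceivedObject:
    """An object represented by its affordances, not by a label."""
    actions: set = field(default_factory=set)

    def classify(self):
        """Labelling comes last: derive the symbol from the applicable actions."""
        if {"open", "close", "walk_through"} <= self.actions:
            return "door"
        return "unknown"

# Perception fills in the applicable actions first; the label is inferred after
thing = PerceivedObject(actions={"open", "close", "walk_through", "knock_on"})
print(thing.classify())  # "door"
```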

2