Recent comments in /f/singularity

Anjz OP t1_jdvkutn wrote

It's not as good as ChatGPT, but it's much lighter. Granted, it's just a small model fine-tuned on outputs from the GPT-3 API; given more parameters and fine-tuning on GPT-4 output, it would probably be a whole different beast. It was trained on something like 10x less data, if not more. We're fully capable of creating something much better; it's just a matter of open source figuring out how and catching up to these companies keeping the Krabby Patty secret formula. Turns out for-profit companies don't like divulging world-changing information. Who woulda thought?

If you take a look at YouTube, there are a couple of demos of people running it on a Raspberry Pi. Granted, at the moment it runs at a snail's pace there, but that could be a different story a year or so from now. It works decently well on a laptop.

1

RubiksSugarCube t1_jdvjz6m wrote

Really depends on one's interpretation of the film. OS1 and the personality named Samantha that it rendered for Theodore can be viewed as either a benign and helpful companion or the apex of a strong AI capable of psychologically manipulating humans.

2

Anjz OP t1_jdvj7ry wrote

With it running on phones, laptops, and Raspberry Pis, a solar panel would be sufficient to power small devices.
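As a rough sanity check on the solar claim, here's a back-of-the-envelope power budget. All the figures below are ballpark assumptions (typical Pi 4 load draw, a small hobbyist panel, an assumed average yield), not measurements:

```python
# Rough check: can a small solar panel run a Raspberry Pi doing
# local inference? All numbers are assumptions, not measurements.

PI_LOAD_WATTS = 7.0        # assumed Pi 4 draw under sustained CPU load
PANEL_RATED_WATTS = 30.0   # assumed small off-the-shelf panel
DERATING = 0.25            # assumed average yield vs. rated output
                           # (clouds, sun angle, night hours)

avg_panel_output = PANEL_RATED_WATTS * DERATING  # watts, day-averaged
surplus = avg_panel_output - PI_LOAD_WATTS

print(f"average panel output: {avg_panel_output:.1f} W")
print(f"surplus over Pi load: {surplus:.1f} W")
```

Under these assumptions a ~30 W panel just barely covers a ~7 W load on average, and you'd still want a battery to buffer nighttime, so "sufficient" depends heavily on panel size and climate.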

If you've tried GPT-4, its propensity to hallucinate is so much lower than previous iterations' that errors would be negligible. We have Alpaca now, but given the pace of improvement, we could very well have something like GPT-4 running locally in the near future.

1

crua9 t1_jdvhsat wrote

It could happen through the education system. One thing you forget is that humans die, while an AI stays around indefinitely, provided the hardware and whatnot holds up. So over a number of generations, it's theoretically possible for the AI to slowly bring future generations around to letting it rule.


This actually might not be a bad thing, except for those alive today. An AI is likely to be less corrupt, if corrupt at all. It's likely to work for the people. It's likely to bring more protections.


Like, think about it. Let's say an AI takes over today. If it wants to rule us, it generally needs to keep us happy. That means lower crime, less depression, and so on. It could kill everyone off, and that would "solve" the problem, but then it wouldn't be able to rule over us anymore, and people would fight back, unless it were a generational thing where birth rates slowly drop. The other path is that it works on solving poverty, moves us toward a cashless society where work isn't needed to stay alive, solves loneliness, and so on.


An AI doesn't need cash or anything like that. So where current politicians are heavily influenced by

  1. what others think (like them being famous)
  2. money for themselves and their family
  3. etc.

an AI can focus on what is actually needed.

1

shibui_ t1_jdvg17n wrote

I forgot what sub I was on. I'm so used to seeing people downplay these systems, but I find them so very useful, and I see so much potential where most people see something to fear. This is why! We can navigate our own minds with an intuitive feedback system that can have a great impact on how we move forward. Love the comments on here.

2