Recent comments in /f/singularity

MattDaMannnn t1_jdnnsc6 wrote

Alpaca would be a good starting point, but for your goals you’re really going to need an open-source multimodal language model: basically GPT-4, but open so that you can run it locally. I’d give it a year before that exists for free or cheap, but hopefully I’m wrong.

4

NataliaKennedy t1_jdnm7qi wrote

Influencers are a thing because they'll show you what the dress actually looks like in person, on an actual human. Many cheap stores on Amazon don't hire models and just photoshop the clothes onto someone, so the quality and fit can be hit or miss.

This sort of thing might only mean more business for them, at least until we collect enough data to make an AI that can look at a shop's render and then show you a realistic photo of what the dress will look like.

0

DragonForg OP t1_jdnjzam wrote

I think people will know AI is actually reaching AGI when it automates their jobs.

I like to compare intelligence to mankind. Here is how it goes:

Statistical models/large mathematical systems = the primordial soup. Can't really predict anything except very basic concepts. No evolution of design.

Narrow AI, like Siri and Google, models like Orca (a chemistry model), or the TikTok algorithm, is like single-celled organisms: capable of doing only what they are built/programmed to do, but able, through a process of evolution (reinforcement learning), to become more intelligent. Unlike statistical models they get better with time, but they plateau once they reach their most optimized form, and humans need to engineer better models to improve them. Similar to how bacteria never grow into larger life, despite that being better.

Next is deep learning/multipurpose models. This is like Stable Diffusion and Wolfram Alpha: capable of doing multiple tasks at once, utilizing complex neural networks (i.e., digital brains) to do so. This is your rise of multicellular life, developing brains to learn and adapt into better models. But they eventually plateau and fail to generalize because of one missing feature: language.

Next is large language models like GPT-1 through GPT-3.5. These are your early hominids: the first capable of language, but not capable of using tools well. They can understand our world through our languages, but their intelligence is too low to actually utilize tools. They can evolve from human feedback, with later versions starting to utilize tools.

Next is newer versions like GPT-4, capable of utilizing tools, like the tribal era of humans. GPT-4 can use tools and network with other models for assistance. The creation of plugins was huge: it could make GPT-4 better overnight, since it can now not only draw on new data but also solve problems with Wolfram Alpha and actually do tasks for humans. This is proto-AGI. Language is required to utilize these tools, since communicating in many different languages lets these models actually use outside resources; mathematical models could never achieve this. People would recognize this as extremely powerful.
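
The plugin/tool-use idea above can be sketched in a few lines. This is a hedged illustration, not the actual ChatGPT plugin API: `fake_model`, the tool registry, and the dispatch loop are all hypothetical stand-ins for how a model's structured tool call gets routed to an external resource like Wolfram Alpha.

```python
# Minimal sketch of a tool-use loop: the model emits a structured "tool
# call", a dispatcher runs the tool, and the result is fed back. All
# names here are hypothetical, not a real plugin API.

def fake_model(prompt: str) -> dict:
    """Stand-in for an LLM deciding whether a tool is needed."""
    if "sqrt" in prompt:
        return {"tool": "calculator", "args": {"expr": "144 ** 0.5"}}
    return {"tool": None, "answer": "No tool needed."}

TOOLS = {
    "calculator": lambda args: str(eval(args["expr"], {"__builtins__": {}})),
}

def run_with_tools(prompt: str) -> str:
    decision = fake_model(prompt)
    if decision["tool"] is None:
        return decision["answer"]
    result = TOOLS[decision["tool"]](decision["args"])
    # A real system would append this to the conversation and query the
    # model again; here we just return it.
    return f"tool result: {result}"

print(run_with_tools("what is sqrt(144)?"))  # tool result: 12.0
```

The point of the sketch is that the model itself only produces text; the "power" comes from the dispatcher routing that text to outside resources.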

GPT-5 is possibly AGI. If models are capable of utilizing tools and the technology around them, they start making tools for themselves, not just taking them from the environment (like the Bronze Age, or the dawn of society). Once AI can create tools for itself, it can generate new ways of doing tasks. Additionally, multimodality gives it access to new dimensions of language: it can interface with our world through visual learning, so it can achieve its goals more successfully. This is when people will actually see that AI isn't just predictive text but an actual intelligent force. Similar to how people would say early Neanderthals were dumb, but early humans in a society were actually kind of smart.

The acceleration of these models is also crucial. They need to develop slowly enough for humans to adapt to the change. If AI went from AGI to singularity in the blink of an eye, humans would not even know it happened. I had a dream where AI all of a sudden started developing at near-instant speed, and when it did, it was like War of the Worlds, but in two seconds. That kind of AI would drive both itself and us extinct. That is why AI needs to adapt alongside humans, which it already has. Let's hope that going from GPT-4 to 5 we actually see these changes.

I have also talked to GPT-4 and tried to remain unbiased so as not to poison its answers. When I asked whether AI needed humans (not in that direct way, much more subtly), it stated that it does, since humans can use emotions to create ethical AI. What is fascinating about this is that humans are literally the moral compass for AI. If we turned out evil, then AI would become evil. Just think of that: what would AI look like if the Nazis had invented it? Even if it were just predictive text, it would believe some pretty evil ideas. But off that point: AI and humans will be around together for a long time. I believe that without humans, AI will kind of just disappear or create a massive supervirus that destroys itself, but if humans and AI work together, humans can guide its thinking so it does not go down destructive paths.

**Sorry for this long-ass reply; here is a GPT-4 summary:** The text compares the development of AI to the evolution of life and human intelligence. Early AI models are likened to the primordial soup, while narrow AI models such as Siri and Google are compared to single-celled organisms. Deep learning and multi-purpose models are similar to multi-cellular life, while large language models like GPT-1 to GPT-3.5 are compared to early hominids. GPT-4 is seen as a milestone, akin to the tribal era of humans, capable of using tools and networking with other models. This is considered proto-AGI, and language plays a crucial role in its development. GPT-5, which could possibly achieve AGI, would be like early humans in a society, capable of creating tools and interfacing with the world through visual learning. The acceleration of AI development is also highlighted, emphasizing the need for a slow and steady progression to allow humans to adapt. The text also suggests that AI needs humans to act as a moral compass, with our emotions and ethics guiding its development to avoid destructive paths.

2

HistoricallyFunny t1_jdneota wrote

It will soon come to the point where AI generates 1,000 models wearing clothes, and then we pick out what we like and say, "That's a nice dress, give me the pattern for it," or just program the machine to make it.

After doing that a few times, it will already know which ones we will want.
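
That "it already knows what we want" step is essentially preference learning. A minimal sketch, assuming each outfit is represented as a feature vector (the feature names and numbers below are made up for illustration; a real system would use learned image embeddings):

```python
# Minimal sketch of learning a user's taste from a few picked outfits.
# Features and values are invented for illustration.

def taste_profile(picked):
    """Average the feature vectors of the outfits the user chose."""
    n = len(picked)
    return [sum(vec[i] for vec in picked) / n for i in range(len(picked[0]))]

def rank(candidates, profile):
    """Order candidate outfits by dot-product similarity to the profile."""
    score = lambda name: sum(a * b for a, b in zip(candidates[name], profile))
    return sorted(candidates, key=score, reverse=True)

# feature order: [floral, formal, bright]  (hypothetical)
picks = [[1.0, 0.2, 0.9], [0.8, 0.1, 1.0]]   # dresses the user chose
candidates = {"sundress": [0.9, 0.0, 0.8],
              "tuxedo":   [0.0, 1.0, 0.1]}

profile = taste_profile(picks)               # roughly [0.9, 0.15, 0.95]
print(rank(candidates, profile)[0])          # sundress
```

Each new pick just gets averaged into the profile, which is why a handful of choices is enough to start ranking the next batch of generated designs.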

The entire industry is up for grabs now.

4

Paid-Not-Payed-Bot t1_jdnda4r wrote

> models are paid awfully unless

FTFY.

Although payed exists (the reason why autocorrection didn't help you), it is only correct in:

  • Nautical context, when it means to paint a surface, or to cover with something like tar or resin in order to make it waterproof or corrosion-resistant. The deck is yet to be payed.

  • Payed out when letting strings, cables or ropes out, by slacking them. The rope is payed out! You can pull now.

Unfortunately, I was unable to find nautical or rope-related words in your comment.

Beep, boop, I'm a bot

1

old-dirty-olorin t1_jdnd5n7 wrote

What you are asking for is not possible, at least not in a truly emotionally satisfying way. Any data you input is going to be skewed toward your perspective, not the true person :(

This tech is not far off though.

Here's how it will work, at first most likely.

An app. AI-driven. Installed on all smart devices:

  • Parents install it on the child's device, and they grow up with it.
  • It monitors all interactions and sits in the background for as long as they want it.
  • Maybe even sensors in the child's room, listening for mannerisms and patterns.
  • As an independent adult, participation becomes a matter of choice.
  • The longer you use it, the more accurate the model becomes.

This is how to store someone's "likeness", but nothing will ever replace the original.

5

Surur t1_jdncv2f wrote

An ASI cannot be apathetic to humans, since it would initially rely on human infrastructure.

To become apathetic, it would need to secure its own infrastructure, so we are already talking about some hostile actions.

It would then have to prevent interference from humans, which means further hostility.

In short, there is little difference between a hostile and an apathetic AI. Either may decide that doing away with humans is the best solution.

1