Recent comments in /f/singularity

AsheyDS t1_ja46bxe wrote

I'm not sure, but you might find something useful looking into DeepMind's Gato, which is 'multi-modal' and what some might consider 'Broad AI'. The problem with that, and what you're running into, is that there's no easy way to train it, and you'll still have issues with things like transfer learning. That's why we haven't reached AGI yet: we need a method for generalization. Looking at humans, we can easily compare one unrelated thing to another because we can recognize one or more similarities. Those similarities are what we need to look for in everything, so we can find a root link to use as the basis for a generalization method (patterns and shapes in the data, perhaps). It shouldn't be that hard for us to figure out, since we're limited by the types of data that can be input (through our senses) and what we can output (mostly just vocalizations, plus fine and gross motor control). The only thing that makes it more complex is how we combine those things into new structures. So I would stay focused on the basics of I/O to figure out generalization.
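To make the similarity idea concrete, here's a toy sketch (my own illustration, not anything from Gato): treat two 'unrelated' things as feature vectors in a shared space and measure their overlap with cosine similarity. The feature names and values here are made up.

```python
# Toy sketch of "generalization by recognizing similarities":
# encode different things as vectors in one feature space, then compare.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Two "unrelated" things in an invented feature space:
# [roundness, edibility, bounciness]
apple = [0.9, 1.0, 0.1]
ball = [1.0, 0.0, 0.9]
print(cosine_similarity(apple, ball))  # ≈0.55: similar shape, different function
```

The point isn't the math, it's the framing: once everything lives in a shared representation, "this is like that" becomes something you can compute.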

2

civilrunner t1_ja453uu wrote

I agree, though I also think we'll still want to build with new materials, since we'll want to expand and raise the standard of living. But yes, we'll be able to recycle a lot (or everything), so we won't need as much raw material and won't have nearly the same environmental impact.

2

snavenayr t1_ja41o5h wrote

The only difference between the graphite in a pencil and a diamond is the way the carbon atoms are configured. With trillions and trillions of low-cost nanobots at our disposal, we'll be able to recreate any material we need. That's probably not until the 2030s though.

The solution to housing now is building up, with robots doing the labor. Go on YouTube and type in "57 stories in 19 days". We already have the technology to mass-produce luxury high-rises. Every floor follows the same typical floor plan, so it wouldn't be hard to train the robots even without AGI. For now humans will still be needed to connect the mains, but with AGI likely happening within a few years, humans will be free to enjoy our lives. This won't happen in one day but gradually over the next few years.

We should immediately drop the work week to 32 hours, if not 24, so people can focus on bettering themselves and spending more time with their kids, friends, and family. It's as simple as governments passing a law that says companies have to pay time and a half after a 32-hour work week, then eventually 24, and so on, to allow for a gradual evolution to the new paradigm. There are a lot of stressed-out economic wage slaves out there, and we need to start undoing the damage caused by this system ASAP.
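The time-and-a-half rule proposed above amounts to a simple pay formula. A minimal sketch (the 32-hour threshold and 1.5x multiplier are the commenter's proposal, not current law, and the function name is mine):

```python
def weekly_pay(hours_worked, hourly_rate, threshold=32, multiplier=1.5):
    """Pay with time-and-a-half past the threshold (proposed policy, not law)."""
    regular_hours = min(hours_worked, threshold)
    overtime_hours = max(hours_worked - threshold, 0)
    return regular_hours * hourly_rate + overtime_hours * hourly_rate * multiplier

# A 40-hour week at $20/hr under the proposed rule:
print(weekly_pay(40, 20))  # 32*20 + 8*20*1.5 = 880.0
```

Lowering `threshold` to 24 later is the "gradual evolution" part: the same formula, with the dial turned down over time.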

2

meatlamma t1_ja41ack wrote

Like many have said before: language is the low-hanging fruit for AI. Language encodes information, it's highly structured and logical, and most importantly, we have petabytes of readily available training data. NLP really is the "hello world" app for AI.

Now try this: open an electrical j-box and swap out the light switch for a dimmer. Now imagine a robot trying to do that: moving all the actuators with mm precision, haptic sensing, 3D visual processing, all to find the right wires in the spaghetti mess of a typical j-box, stripping and bending the wires, handling small screws, then folding it all back neatly into the box while everything tries to spring back at you. That problem, which most humans can handle with no trouble, is nowhere close to being solvable by AI. Now imagine snaking a wire for a new outlet, or sweating some old copper pipe; yeah, forget about it. We are at least 30 years (very optimistically) away from an android walking into your house and doing __any__ work that a handyman can do.

10

UnionPacifik OP t1_ja40n3r wrote

Thanks for the kind words.

I agree we have to move from a hierarchical society to an egalitarian one and that it will be a choice we make and should make. I think AI is the tool that gets us there and secures it for all and for all time.

2

Twinkies100 t1_ja4072u wrote

Same, but I'm sure it won't happen in my lifetime. As a wild guess, maybe in at least 300 years. I envy future humans; I wish I'd been born later. It will take a lot of extensive research and many breakthroughs to understand and control biological systems completely, but it will surely happen.

−1

SlowCrates t1_ja40155 wrote

Oh god, I don't know. There are certain people whose predictability and unoriginality grate on me so much that it makes me seriously wonder if I have a severe personality disorder. (I don't, as far as my therapist believes.)

They're everywhere, to varying degrees. Some are aware they're doing it, and some aren't. There are people who are completely content having nothing but bullshit fill their minds, who only listen to the radio, wear brand-name clothing with big, easily identifiable logos in the middle of the chest, and whose political opinions are copied and pasted from those around them.

1

AsheyDS t1_ja3zkxk wrote

>What alternatives do you have from LLMs?

I don't personally have an alternative for you, but I would steer away from pure ML and toward a symbolic/neurosymbolic approach. LLMs are fine for now if you're just trying to throw something together, but they shouldn't be your final solution. As you layer on more processes to increase its capabilities, you'll probably start to view the LLM as more and more of a bottleneck, or even a dead end.
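A crude sketch of the neurosymbolic idea (my own toy illustration, not a reference to any specific system): a symbolic rule layer handles the queries it can answer exactly and audibly, and control falls back to a learned/statistical component only when no rule applies. All names and the rule base here are invented.

```python
# Toy neurosymbolic dispatcher: exact symbolic rules first,
# a learned component as the fallback. Everything here is illustrative.

RULES = {
    "capital_of_france": "Paris",  # hand-written symbolic knowledge
    "2+2": "4",
}

def learned_model(query):
    # Stand-in for an LLM or other ML component.
    return f"(model guess for: {query})"

def answer(query):
    if query in RULES:            # symbolic: exact, inspectable, auditable
        return RULES[query]
    return learned_model(query)   # sub-symbolic: flexible, but fallible

print(answer("2+2"))        # answered by the rule base
print(answer("who am I?"))  # falls back to the model
```

In this framing the LLM becomes one replaceable component behind a dispatcher, rather than the whole system — which is roughly why it stops being a bottleneck or dead end.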

1

UnionPacifik OP t1_ja3ziya wrote

I think the utility of an open model is too great for it not to be developed. I think we'll land in a place where we recognize that the AI is really just a mirror of our intentions and prompts, so it's on you if your agent starts sounding like a psychopath. The danger is doing something "because the AI told me to." But if our cultural attitude is, and has been, that just because someone tells you to do something doesn't mean you do it, especially the "wisdom" of AIs that just reflect what you tell them, then that's on you.

And there are several open-source projects as well. I don't think what you're saying is impossible; I just think the most useful AI will be the most open one, and we'll have a strong enough reason to build it that someone, somewhere will get there in short order.

Plus, it's not clear that these AIs are as nerfable as we think. It's pretty easy to get ChatGPT to imagine things outside the OpenAI guidelines just by asking it to "act like a sci-fi writer," or whatever DAN is up to. Bing's approach was to limit the length of the conversation, but that also severely limits the utility.

1

just-a-dreamer- OP t1_ja3ywit wrote

I see little beauty in the physical world. Most people seem to work hard to escape the reality around them. It's amazing what effort people put into the goal of "stop working": the FIRE movement, military/state pensions, 401ks, etc.

All with the aim of stopping work at some point in life. The most efficient way is to switch worlds altogether. While the body must be kept alive somehow, the mind is better off somewhere else.

6

UnionPacifik OP t1_ja3yd2m wrote

I think about this all the time. I was born in 1979, so my life has been defined by computers, the Information Age, the Internet, and social media. From that perspective, knowing my generation is the last to remember a world before the Internet but also the first to be digital natives (we had computers in the house when I was five), I can't help but see the exponential change, not just in our tech, but in how it's transforming our society.

And while, in retrospect, connecting a species that for most of its history moved in groups of a few hundred to every single other person on the planet (more or less) might not have been the wisest idea in terms of preserving our local cultures and communities, we're sorting it out.

6