Recent comments in /f/singularity

Capitaclism t1_ja61i4c wrote

No one here truly knows when it'll be possible, though we can speculate. Some will be more conservative than others, but let's not pretend we can truly see anything past the next 5-10 years when we're advancing at such ridiculous speeds. Some things which seem simple will hit hard snags and take much longer than expected, and others which seemed to be hard problems will come easily.

That's the way of life.

2

Molnan t1_ja5zobe wrote

You say:

>CAIS also assumes people won’t build generalist agents to start with.

No, it doesn't. See, for instance, section 7: "Training agents in human-like environments can provide useful, bounded services":

>Training agents on ill-defined human tasks may seem to be in conflict with developing distinct services provided by agents with bounded goals. Perceptions of conflict, however, seem rooted in anthropomorphic intuitions regarding connections between human-like skills and human-like goal structures, and more fundamentally, between learning and competence. These considerations are important to untangle because human-like training is arguably necessary to the achievement of important goals in AI research and applications, including adaptive physical competencies and perhaps general intelligence itself. Although performing safely-bounded tasks by applying skills learned through loosely-supervised exploration appears tractable, human-like world-oriented learning nonetheless brings unique risks.

You say:

>if you don’t think a LLM can become dangerous you aren’t thinking hard enough.

Any AI can be dangerous depending on factors like its training data, architecture and usage context. That said, LLMs as currently understood have a well-defined way to produce and compare next-token candidates, and no intrinsic tendency to improve on this routine by gathering computing resources or pursuing any similar instrumental goals, and simply adding more computing power and training data doesn't change that.
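To make that concrete, here's a minimal toy sketch of that "well-defined routine" (made-up logits and tokens, not a real model): score candidates, normalize, pick one, stop. There's no loop beyond emitting the token.

```python
import math

def softmax(logits):
    # Turn raw candidate scores into a probability distribution.
    m = max(logits.values())
    exps = {tok: math.exp(x - m) for tok, x in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical logits a model might assign to next-token candidates.
logits = {"cat": 2.1, "dog": 1.3, "car": -0.5}
probs = softmax(logits)

# Greedy decoding: emit the most probable candidate and halt.
# Nothing here pursues any goal beyond producing this one token.
next_token = max(probs, key=probs.get)
```

The point of the sketch: the procedure is fully specified by the scoring and sampling step, so any instrumental behavior would have to be engineered in around it, not emerge from it.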

Gato and similar systems are interesting but at the end of the day, the architecture behind useful real-world AIs like Tesla's Autopilot is more suggestive of CAIS than of Gato, and flexibility, adaptability and repurposing are achieved through good old abstraction and decoupling of subsystems.

The advantages of generalist agents derive from transfer learning. But this is no panacea: in the Gato paper, for instance, they admit it didn't offer much advantage when it comes to playing Atari games, and it has obvious costs and drawbacks. For one, the training process will tend to be longer, and when something goes wrong you may need to start over from scratch.

And I must say, if I'm trusting an AI to drive my car, I'd actually prefer it if this AI's training data did NOT include videogames like GTA or movies like, say, Death Proof or Christine. In general, for many potential applications it's reassuring to know that the AI simply doesn't know how to do certain things, and that's a competitive advantage in terms of popularity and adoption, regardless of performance.

You say:

>Narrow agents can also become dangerous on their own because of instrumental convergence

Yes, under some circumstances, and conversely, generalist agents can be safe as long as this pesky instrumental convergence and other dangerous traits are avoided.

There's a lot more to CAIS than "narrow good, generalist bad". In fact, many of Drexler's most compelling arguments have nothing to do with specialist vs. generalist AI. For instance, see section 6: "A system of AI services is not equivalent to a utility maximizing agent", or section 25: "Optimized advice need not be optimized to induce its acceptance".

0

sachos345 t1_ja5youz wrote

> Here is the issue: when every single person on planet earth can be a game developer it's like saying everyone can have their own podcast on Youtube.

I get your point, but my counter-argument is: how many of those people will actually want to make a game, and how many of them have interesting things to say or are good enough gameplay designers? Unless, that is, we are talking about an AI capable of designing games too, in which case we are fucked.

2

DarkCeldori t1_ja5y7j4 wrote

A tree builds atomically precise, far more advanced solar energy collectors and pipes on site, and can stand for thousands of years. But it is limited by evolution. With nanostructured carbon, a tree could be 100x stronger than steel and taller than the tallest building. Such structures are unevolvable but not undesignable: we humans can engineer biology far past the limits of evolution.

1

turnip_burrito t1_ja5y3hb wrote

I agree with all of this, but just to be a bit over-pedantic on one bit:

> Models can't speak or hear when they want to. It's just not part of their programming.

As you said it's not part of their programming, in today's models. In general though, it wouldn't be too difficult to construct a new model that judges at each timestep based on both external stimuli and internal hidden states when to speak/interrupt or listen intently. Actually at first glance such a thing sounds trivial.
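A minimal sketch of what that judging step could look like, with made-up signals and thresholds (a real model would learn this end-to-end from data rather than use hand-tuned rules):

```python
from dataclasses import dataclass

@dataclass
class TurnTakingPolicy:
    # Hypothetical hand-set parameters; a trained model would learn these.
    speak_threshold: float = 0.6
    decay: float = 0.8
    urge: float = 0.0  # internal hidden state: accumulated "urge to speak"

    def step(self, external_silence: float, internal_relevance: float) -> str:
        """Decide at each timestep whether to speak or keep listening.

        external_silence: 0..1, how quiet the environment is right now.
        internal_relevance: 0..1, how relevant the pending reply feels.
        """
        # Combine the external stimulus with the internal state.
        self.urge = self.decay * self.urge + external_silence * internal_relevance
        return "speak" if self.urge > self.speak_threshold else "listen"

policy = TurnTakingPolicy()
# While the other party is talking (no silence), the policy keeps listening.
actions = [policy.step(external_silence=0.0, internal_relevance=0.9) for _ in range(3)]
# Once a pause occurs and the reply is relevant, it decides to speak up.
actions.append(policy.step(external_silence=1.0, internal_relevance=0.9))
```

The same per-timestep decision head could sit on top of any recurrent or transformer backbone, which is why it looks trivial at first glance.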

1

NoidoDev t1_ja5xedu wrote

> the biggest funder of VR games

Mobile VR games? Not running on a PC, if I'm informed correctly.

>VR games don't make any profit, so nobody wants to develop for it.

You could have normal games adjusted for VR.

>that's entirely why Facebook has to fill the void in the first place

How nice of them. I thought it was because they wanted their own "App Store" with devices, and to squash any possible competition for their own ecosystem.

>You have absolutely no idea what you're talking about.

I do have some insight and it's a matter of judgement which side I believe.

1

Deightine t1_ja5wys0 wrote

People are already doing it right now with anything the FDA isn't regulating. There's a huge market in adjusting your micronutrients and amino acids, for example, and a few people are hitting themselves with modifications using CRISPR.

And before the regulation of the pharma industry between 1900-1930, a lot of people were doing some really ethically questionable basement-level home science, while snake oil salesmen were out selling poisonously doctored cherry juice in the streets. Basically, the scam homeopathy of the 1800s.

One of the fascinating but potentially horrifying elements of the singularity is that regulation will inevitably fall behind advancement, and self-experimentation will be one of the few ethical, high-turnaround human testing models, as it eliminates ethical concerns about coercion. This is one of the reasons organs-on-chips and the like are being developed at the moment: they cut out live animal testing models.

I am a huge fan of the AI premodeling advances right now.

7