Recent comments in /f/singularity
Capitaclism t1_ja61i4c wrote
Reply to comment by iNstein in Sam Altmans, Moores law on everything - housing by Pug124635
No one here truly knows when it'll be possible, though we can speculate. Some will be more conservative than others, but let's not pretend we can truly see anything past the next 5-10 years when we're advancing at such ridiculous speeds. Some things which seem simple will hit hard snags and take much longer than expected, and others which seemed to be hard problems will come easily.
That's the way of life.
katiecharm t1_ja605k0 wrote
Reply to comment by Akashictruth in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
The idea that ChatGPT can run, even slowly, on a 4090 is still shockingly amazing.
katiecharm t1_ja602o7 wrote
Reply to comment by visarga in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
Hell, humans already need AI protection from other AI. We need a web-surfing companion that can easily mark accounts that are likely to be foreign influence farms and fury-generating bots. We need disinfo pointed out, and perhaps even help crafting our messages so they come across in the best way.
Akimbo333 t1_ja5ztb9 wrote
Reply to comment by NoidoDev in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
Oh ok
Molnan t1_ja5zobe wrote
Reply to comment by Present_Finance8707 in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
You say:
>CAIS also assumes people won’t build generalist agents to start with.
No, it doesn't. See, for instance, section 7: "Training agents in human-like environments can provide useful, bounded services":
>Training agents on ill-defined human tasks may seem to be in conflict with developing distinct services provided by agents with bounded goals. Perceptions of conflict, however, seem rooted in anthropomorphic intuitions regarding connections between human-like skills and human-like goal structures, and more fundamentally, between learning and competence. These considerations are important to untangle because human-like training is arguably necessary to the achievement of important goals in AI research and applications, including adaptive physical competencies and perhaps general intelligence itself. Although performing safely-bounded tasks by applying skills learned through loosely-supervised exploration appears tractable, human-like world-oriented learning nonetheless brings unique risks.
You say:
>if you don’t think a LLM can become dangerous you aren’t thinking hard enough.
Any AI can be dangerous depending on factors like its training data, architecture and usage context. That said, LLMs as currently understood have a well-defined way to produce and compare next-token candidates, and no intrinsic tendency to improve on this routine by gathering computing resources or pursuing similar instrumental goals, and simply adding more computing power and training data doesn't change that.
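To make that concrete, here's a minimal toy sketch (my own illustration, not any particular model's code) of what "produce and compare next-token candidates" means mechanically: softmax over scores, then a random draw. Nothing in this loop reaches for resources or sets goals.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Pick one token index from raw scores via temperature softmax."""
    rng = rng or random.Random(0)
    # Convert raw scores to probabilities (numerically stable softmax).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to those probabilities.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i, probs
    return len(probs) - 1, probs

idx, probs = sample_next_token([2.0, 1.0, 0.1])
```

The "dangerous" part, if any, comes from what the system around this loop is hooked up to, not from the sampling routine itself.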
Gato and similar systems are interesting but at the end of the day, the architecture behind useful real-world AIs like Tesla's Autopilot is more suggestive of CAIS than of Gato, and flexibility, adaptability and repurposing are achieved through good old abstraction and decoupling of subsystems.
The advantages of generalist agents derive from transfer learning. But this is no panacea: for instance, in the Gato paper they admit it didn't offer much advantage when it comes to playing Atari games, and it has obvious costs and drawbacks. For one, the training process will tend to be longer, and when something goes wrong you may need to start over from scratch.
And I must say, if I'm trusting an AI to drive my car, I'd actually prefer it if this AI's training data did NOT include videogames like GTA or movies like, say, Death Proof or Christine. In general, for many potential applications it's reassuring to know that the AI simply doesn't know how to do certain things, and that's a competitive advantage in terms of popularity and adoption, regardless of performance.
You say:
>Narrow agents can also become dangerous on their own because of instrumental convergence
Yes, under some circumstances, and conversely, generalist agents can be safe as long as this pesky instrumental convergence and other dangerous traits are avoided.
There's a lot more to CAIS than "narrow good, generalist bad". In fact, many of Drexler's most compelling arguments have nothing to do with specialist vs. generalist AI. For instance, see section 6: "A system of AI services is not equivalent to a utility maximizing agent", or section 25: "Optimized advice need not be optimized to induce its acceptance".
katiecharm t1_ja5z6ao wrote
Reply to comment by AylaDoesntLikeYou in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
You best start believing in singularities missy; you’re in one.
DarkCeldori t1_ja5z38w wrote
Reply to comment by visarga in An ICU coma patient costs $600 a day, how much will it cost to live in the digital world and keep the body alive here? by just-a-dreamer-
Actually nanomachines can effectively get materials out of thin air by virtue of perfect recycling of waste.
NoidoDev t1_ja5ysea wrote
Reply to comment by Akimbo333 in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
I meant the hope for getting access to it, running it at home.
sachos345 t1_ja5youz wrote
Reply to comment by spiritus_dei in Weird feeling about AI, need find ig somebody has same feeling by polda604
> Here is the issue: when every single person on planet earth can be a game developer it's like saying everyone can have their own podcast on Youtube.
I get your point, but my counter-argument is: how many of those people will actually want to make a game, and how many of them have interesting things to say or are good enough gameplay designers? Unless we are talking about an AI capable of designing games too; then we are fucked.
DarkCeldori t1_ja5ylam wrote
Reply to comment by Puzzleheaded_Pop_743 in The 2030s are going to be wild by UnionPacifik
All time high compared to the past but insignificant compared to the future where infinitely more will live throughout the universe.
sachos345 t1_ja5y7u9 wrote
Reply to comment by polda604 in Weird feeling about AI, need find ig somebody has same feeling by polda604
I agree with u/gantork, I was about to comment the same. As an indie dev, it's great that all these tools are coming out. AA-quality games made by one solo dev may be possible in the future.
DarkCeldori t1_ja5y7j4 wrote
Reply to comment by shmoculus in Sam Altmans, Moores law on everything - housing by Pug124635
A tree builds atomically precise, far more advanced solar energy collectors and pipes on site and can stand for thousands of years. But it is limited by evolution. The tree could be 100x stronger than steel and taller than the tallest building with unevolvable nanostructured carbon. Unevolvable, but not undesignable: we humans can engineer biology far past the limits of evolution.
turnip_burrito t1_ja5y3hb wrote
Reply to comment by Nervous-Newt848 in Is multi-modal language model already AGI? by Ok-Variety-8135
I agree with all of this, but just to be a bit over-pedantic on one bit:
> Models can't speak or hear when they want to. It's just not part of their programming.

As you said, it's not part of their programming in today's models. In general, though, it wouldn't be too difficult to construct a new model that judges at each timestep, based on both external stimuli and internal hidden states, when to speak/interrupt or listen intently. Actually, at first glance such a thing sounds trivial.
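Something like this toy controller (a hypothetical sketch, not any existing model's API): an internal "urge" state accumulates from stimulus salience and decays over time, and the model speaks when a gating score crosses a threshold.

```python
import math

class SpeakGate:
    """Toy turn-taking controller: at each timestep, decide whether to
    speak based on external stimulus salience plus an internal urge
    state. Purely illustrative; thresholds are arbitrary."""

    def __init__(self, threshold=0.5, decay=0.8):
        self.threshold = threshold
        self.decay = decay
        self.urge = 0.0  # internal hidden state

    def step(self, stimulus_salience):
        # Accumulate urge from the stimulus; let old urge decay.
        self.urge = self.decay * self.urge + stimulus_salience
        # Squash (urge - threshold) into a 0..1 gating score.
        score = 1.0 / (1.0 + math.exp(-(self.urge - self.threshold)))
        if score > 0.7:
            self.urge = 0.0  # speaking resets the urge
            return "speak"
        return "listen"

gate = SpeakGate()
```

A real system would learn the gating function end-to-end rather than hand-coding it, but the structure is the same: one extra head deciding "act now or keep listening" at every step.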
SecretAgendaMan t1_ja5y0a7 wrote
Reply to comment by lorimar in Man successfully performs gene therapy on himself to cure his lactose intolerance by [deleted]
See, it's attitudes like this that made my Spanish friend Bob feel the need to put on a cheap elastic prosthetic at the end of his foot. Poor Roberto!
NoidoDev t1_ja5xedu wrote
Reply to comment by TeamPupNSudz in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
> the biggest funder of VR games
Mobile VR games? Not running on a PC, if I'm informed correctly.
>VR games don't make any profit, so nobody wants to develop for it.
You could have normal games adjusted for VR.
>that's entirely why Facebook has to fill the void in the first place
How nice of them. I thought it was because they wanted their own "App Store" with devices, and to squash any possible competition for their own ecosystem.
>You have absolutely no idea what you're talking about.
I do have some insight and it's a matter of judgement which side I believe.
IcebergSlimFast t1_ja5xbp7 wrote
Reply to comment by IluvBsissa in Sam Altmans, Moores law on everything - housing by Pug124635
It’s literally cool, which will help in an increasingly hot climate.
[deleted] t1_ja5xap5 wrote
DarkCeldori t1_ja5x5s0 wrote
Reply to comment by No_Ninja3309_NoNoYes in Sam Altmans, Moores law on everything - housing by Pug124635
The cortex is where general intelligence lies, and it has about 60 trillion synapses, of which only 1 to 2% are active at any moment. Inactive synapses need not be simulated.
SpecialMembership t1_ja5x420 wrote
You need AGI to replace programmers. According to prophet Kurzweil, it's happening in 2029. Now go back to coding.
Deightine t1_ja5wys0 wrote
Reply to comment by FellatioWanger3000 in Man successfully performs gene therapy on himself to cure his lactose intolerance by [deleted]
People are already doing it right now with anything the FDA isn't regulating. There's a huge market in adjusting your micronutrients and amino acids, for example, and a few people are hitting themselves with modifications using CRISPR.
And before the regulation of the pharma industry between 1900-1930, a lot of people were doing some really ethically questionable basement-level home science, while snake oil salesmen were out selling poisonously doctored cherry juice in the streets. Basically, the scam homeopathy of the 1800s.
One of the fascinating but potentially horrifying elements of the singularity is that regulation will inevitably fall behind advancement, and self-experimentation will be one of the few ethical, high-turnaround human testing models, as it eliminates the coercion concerns. This is one of the reasons that organs-on-chips and the like are being developed at the moment--they cut out the live animal testing models.
I am a huge fan of the AI premodeling advances right now.
DreaminDemon177 t1_ja5whvy wrote
Reply to comment by challengethegods in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
Pretty much yeah...
DarkCeldori t1_ja5whou wrote
Reply to comment by Pug124635 in Sam Altmans, Moores law on everything - housing by Pug124635
What you're forgetting is two things: ASI and nanomachines.
ccnmncc t1_ja5w6wd wrote
Reply to comment by H0sh1z0r4 in The 2030s are going to be wild by UnionPacifik
People will still have relationships and intimacy, just not exclusively in the traditional sense. Fewer people marrying and procreating does not mean less joy overall.
DarkCeldori t1_ja5w3mc wrote
Reply to comment by Pug124635 in Sam Altmans, Moores law on everything - housing by Pug124635
Eventually AI will be doling out the permissions, and the roads and sewers will be made by nanomachines, growing and self-repairing at no cost.
-emanresUesoohC- t1_ja624yw wrote
Reply to comment by DarkCeldori in The 2030s are going to be wild by UnionPacifik
Maybe? :)