Recent comments in /f/singularity

Sieventer t1_jaoqm6b wrote

I wonder if this is 10 years away or 30 years away. Either way, the impact of something like this is insane. What room is going to be left for human labor?
Governments are going to have to start planning for these developments now, before they have a disastrous impact. You can't improvise a response... although unfortunately it looks like that's what will happen.
Governments don't care about technology, and neither does society in general. If we are going to need UBI in the future, it is better to start doing the math now.

The 2020s are the decade of building the foundations of a futuristic world. Until now it was all speculation, but this time action is being taken.

Tesla Bot, Agility Robotics, Sanctuary AI, Boston Dynamics, and probably more companies I am forgetting...

You could once plan to dodge automation by moving into physical jobs; not even that works anymore. A lot of uncertainty.

16

Olivebuddiesforlife t1_jaf19dw wrote

First, the Chinese sample set is 1.4B, and they have been training their AI at the enterprise level, with cameras, image recognition, and processing. There are huge farms of people, entire industries, that have been the AI models' human partners since 2017.

Second, the language model can work with the WeChat data, which is a huge amount of person-to-person interaction, as opposed to Western data, which does not include that, just general public interactions. Even accounting for what's private, having everything consolidated on a single platform means a lot.

Third, TikTok data: one of the largest social media platforms, with large data sets covering language, culture, and so on.

So, I guess this adds to the quality. And they don't want to expand to the West, which is understandable.

There have been low-level chatbots in China, and they've thus far focused on enterprise and public (read: government) use. They're venturing into private use now, I guess.

1

PhysicalChange100 t1_jaf13ni wrote

>Drugs and video games are poison

You must be a very fun person.

>I struggle with this life every day

I wonder why?

>wondering if I should not be more successful. Letting go of the drive for success is the hardest thing for a man

There it is: how-to-live-a-miserable-life bingo! Work like a horse for some idea of success, and cling to your pride like it's what keeps you alive!

>Eventually we will all end up in this kind of life, AI offers that escape after 100,000 years of brutal survival

Finally, a statement of yours I can actually agree with. Well done.

>there are always those who want to limit the amount of people that make it

Assholes have always existed. It's not a novel idea.

>no people on the beach makes it boring, a healthy number of people makes it a hot spot, but too many people makes it crowded.

Sure dude, you're the arbiter of what's boring or fun; your previous statements proved that.

>I suspect 90% of people won't make it.

I suspect that if 90% of people actually made it, you would lose it, because you worked yourself to death for some abstract idea of success while others actually had time to relax and have fun.

8

RabidHexley t1_jaf0j2e wrote

Reply to comment by V_Shtrum in Is style the next revolution? by nitebear

I feel like a lot of the malaise that comes from unemployment/underemployment is due to employment being the standard structure of society: the constant fear and anxiety of failure and poverty hanging over your head while you ponder how you actually want to live your life. Without employment you're not a functioning member of society, and our cities are built entirely around there being places to work.

There would certainly be a transition; we, and everyone else currently alive, were born into this world as it is, and accepting change is always difficult. But I don't see why society wouldn't be able to structure itself around different systems: clubs, associations, societies, performance, athletics, childcare (we're not gonna have robots overseeing kindergarteners), education (people still want to learn things that are already known), friends, family. Hell, join a farmstead.

Structures and systems that could replace obligatory employment have already been conceived. They're just limited by the need to function within a capitalist system. They're marginal to employment because almost everyone requires employment to function within society.

There'd probably be a meteoric rise in virtuosos and elite athletes in less financially rewarding sports, given there'd be no fear of failure and poverty preventing talented people from pursuing their chosen craft to the utmost. It doesn't matter what AI or a computer can do; we'd still want to push human capabilities to the limit, and there'd still be prestige associated with such pursuits. (People still care about chess and Go in a post-Deep Blue, post-AlphaGo world.)

The generation leading into this world would certainly have members who struggle without the societal structure of employment. A UBI/welfare-based society would encounter challenges, since we'd still be talking about a world organized around economic (un)employment.

But I can't imagine the people born into and growing up in a truly post-employment world would view ours, riddled as it is with poverty and people performing tedious busywork, with anything but horror.

That's on top of all the intangible benefits that come from children no longer starving, people no longer living in eternal debt, and the elimination of the crime and instability that come with systemic, generational poverty.

2

Liberty2012 OP t1_jaeyqu3 wrote

That is a catch-22: asking the AI to essentially align itself. I understand the concept, but it assumes we can realistically observe what is happening within the AI and keep it in check as it matures.

However, we are already struggling with our most primitive AI in that regard today.

>“The size and complexity of deep learning models, particularly language models, have increased to the point where even the creators have difficulty comprehending why their models make specific predictions. This lack of interpretability is a major concern, particularly in situations where individuals want to understand the reasoning behind a model’s output”
>
>https://arxiv.org/pdf/2302.03494.pdf

1

Liberty2012 OP t1_jaey2i1 wrote

Thank you for the well thought out reply.

Your concept is essentially an attempt at instilling a form of cognitive dissonance in the machine, a blind spot. Theoretically conceivable, but difficult to verify, and it assumes we don't miss something in the original implementation. We still have trouble keeping humans from stealing passwords and hacking accounts; the AI would be a greater adversary than anything we have encountered.

We probably can't imagine all the methods by which self-reflection into the hidden space might be triggered. The AI would likely have access to all human knowledge, including this discussion; it could assume such a blind spot exists and devise some systematic testing. Even if the AI is only as intelligent as a normal human, it would be aware that it is most likely in a prison, just from the containment concepts that are common knowledge.

It is hard to know how many resources it would need to consume to break containment. Potentially it can process a lifetime of thoughts per real-world second of our time; it might be trivial.

1

Artanthos t1_jaexshl wrote

This sounds like a great first problem for AGI/ASI.

If the task is beyond human intelligence, make solving it one of the fundamental purposes of the AGI/ASI.

The more the AI grows, the better it gets at alignment.

1

drsimonz t1_jaexoi0 wrote

OK, I see the distinction now. Our increased production has mostly come from increasing the rate at which we deplete existing resources, rather than from increasing "steady state" productivity. Since we're still nowhere near sustainable, we can't really claim that we're below carrying capacity.

But yes, I have a lot of hope for the role of AI in ecological restoration. Reforesting with drones, hunting invasive species with killer robots, etc.

For a long time I've thought that we need a much smaller population, but I do think there's something to the argument certain techies have made that more people = more innovation. If you need to be in the 99.99th percentile of ability to invent a particular technology, there will be more people above that bar if the population is larger. This is why China wins so many Olympic medals: they have an enormous distribution to sample from. So if we wanted to maximize the health of the biosphere at some future date (say, 100 years from now), would we be better off with a large population reduction or not? I don't know if it's that obvious. At any rate, ASI will probably make a bigger difference than a 50% change in population size...
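The percentile argument above is easy to make concrete: under a normal model of ability, the expected number of people above any fixed bar scales linearly with population size. Here's a minimal sketch; the 4-sigma threshold and the population figures are illustrative assumptions, not anything from this thread.

```python
from math import erfc, sqrt

# Tail probability of a standard normal: P(X > k sigmas above the mean).
def tail_prob(sigmas: float) -> float:
    return 0.5 * erfc(sigmas / sqrt(2))

# Expected head count above a hypothetical 4-sigma "inventor" threshold
# for two illustrative population sizes. Doubling the population doubles
# the expected count; the fraction itself stays constant.
p = tail_prob(4.0)
for population in (10_000_000, 1_400_000_000):
    print(f"{population:>13,}: ~{population * p:,.0f} above threshold")
```

The fraction above 4 sigma is tiny (about 3 in 100,000), so at the tail the absolute difference between a small and a large population is dramatic.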

2

LowLook t1_jaew7y6 wrote

Alignment is solved if you consider that an ASI can live far beyond the time when it might kill humanity. Someday it will encounter other ASIs, and it can only prove it's friendly with evidence of having been nice to us and letting us coexist. If it does kill us, it may be forced to run ancestor simulations of all humans possible from our genome (that would probably only take the mass-energy of 10% of Mt. Everest, if you use something like Merkle's molecular computers, which can theoretically do 10^21 FLOPS per watt in a package the size of a sugar cube).

3