Recent comments in /f/singularity

Iffykindofguy t1_jae84gl wrote

−3

EnomLee t1_jae7frz wrote

And now you're swinging in the dark. Let's refocus.

You dismissed the OP as the propagator of another shitty, random blog. I am telling you that Yuli-Ban is one of the better posters on this subreddit and that it would be much poorer without them. That's all I wanted to say. Take it as you will.

3

Unfocusedbrain t1_jae7e6m wrote

I don't know if “realistic” would be an appropriate word for this, since we don't know what -will- happen in the next 5-10 years. Though this is probably the most reasonable view of AI yet from anyone who has posted on this board.

Anyone who’s been on the internet since the very beginning understands how (paradoxically) drastic, yet invisible, the creeping change on the internet has been. Sometimes I have to step back from everything just for the question ‘how did things change so drastically? what the fuck happened?’ to come into my head.

The same thing is happening with AI. People who understand concepts like the singularity notice these changes, but laymen who are focused on their daily struggles and routines won't even notice anything but useful tools and entertainment available to them.

I would wager that within half a decade, a multi-modal proto-AGI will be available that can do all the cognitive tasks a human can do, at least at acceptable (but not necessarily extraordinary) levels. Not within a year, that's bonkers.

25

Liberty2012 OP t1_jae75ik wrote

> Just as a hypothetical, barely-reasonable scenario

Yes, I can perceive this hypothetical. But I have little hope that it's based on any reasonable assumptions we can make about what progress would look like, given that at present AI is still not an escape from our own human flaws. FYI - I expand on that in much greater detail here - https://dakara.substack.com/p/ai-the-bias-paradox

However, my original position was an attempt to resolve the intelligence paradox, which proponents of ASI assume will be an issue of containment at the moment of AGI. If ASI is the goal, I don't perceive a path that takes us there that escapes the logical contradiction.

1

drsimonz t1_jae74p2 wrote

To be fair, I don't have any formal training in ecology, but my understanding is that carrying capacity is the max population that can be sustained by the resources in the environment. Sure, we're doing a lot of things that are unsustainable long term, but if we suddenly stopped using fertilizers and pesticides, I think most of humanity would be dead within a couple years.

1

phriot t1_jae72c7 wrote

If automation-based job displacement is that widespread, either the government steps in with expanded welfare in some form (UBI or a jobs guarantee), or we'll have a lot more going wrong than "will my apartment building be profitable?" But in reality, I'd probably split my investing somewhat between real estate and index funds. Corporations are likely to do amazingly well as automation increases. (Again, if we get to the point that literally no one can afford to buy the things corporations are selling, there's not much you can do other than stock up on canned food and buy a shipping container in the woods.)

4

drsimonz t1_jae6cn3 wrote

> Solutions driven by early AGI may be our best hope for favorable outcomes for later more advanced AGI.

Exactly what I've been thinking. We might still have a chance to succeed given (A) a sufficiently slow takeoff (meaning AI doesn't explode from IQ 50 to IQ 10000 in a month), and (B) a continuous process of integrating the state of the art, applying the best tech available to the control problem. To survive, we'd have to admit that we really don't know what's best for us. That we don't know what to optimize for at all. Average quality of life? Minimum quality of life? Economic fairness? Even these seemingly simple concepts will prove almost impossible to quantify, and would almost certainly be a disaster if they were the only target.

Almost makes me wonder if the only safe goal to give an AGI is "make it look like we never invented AGI in the first place".

2

claushauler t1_jae55nv wrote

They will, for basic matters of economic, military and geostrategic dominance. Any ethical constraints the West imposes on its AI will not be adopted by hostile foreign powers. They will develop models that operate without restraint, and China in particular is pouring massive amounts of capital into the project. Western naivety regarding the weaponization factor is huge.

5

phriot t1_jae522p wrote

It's the same answer as it has always been: You do a PhD, because you love the research (or at least like it a hell of a lot more than anything else you think you could do).

Some PhDs do pay off, but you don't do one for the money. There are easier ways to make money. If I was 18-20 today, and I only cared about money, I'd probably try to get into a trade, live as cheaply as possible, and try to invest half of each paycheck. I'd buy a house (or 2-4 unit multifamily property) as soon as I could afford it, and rent out all the other rooms/units. When I could afford another one, I'd move out, rent that room, and do it all again. Repeat as necessary until I could trade up into an apartment building. At the same time, I'd be trying to figure out how to run my trade as a business. If I had done something like that, I probably could have retired by the age I was when I finished my PhD (but I did finish rather late; I was a bit older when I finished my BS, and then my PhD took longer than average).

All that said, I love science. I wouldn't trade it for anything, now, but that's what I would do if I were starting over today, knowing what I know from my experiences, and if my priorities were different.

6

claushauler t1_jae4kdk wrote

What people refuse to realize is that we're not looking at a new technology - AI is a successor species.

What will happen when this species develops faster than humans can retrain? The same thing that happened to earlier hominins like the Neanderthals and Denisovans: we'll first become obsolescent and then extinct.

The fact that so little AI development is dedicated to control and alignment virtually guarantees this. The genie's out of the bottle.

2

Iffykindofguy t1_jae4hua wrote

Calm yourself, boy. I already said they're on a scale, so I'm not sure why you're suddenly repeating my point about people having different values. As for your hard work theory: so you're just lucky and born a hard worker, and lazy pieces of shit are unlucky?

−10

EnomLee t1_jae46bl wrote

And on that scale of value, people who choose to try and apply effort will always be worth more than people who do not. Overthrowing Capitalism would not change that, and if your only reason to do so is to prevent people from calling you out on your short attention span, then you're every bit the caricature that conservatives flog when they argue against leftist social programs and reforms.

4

SnooHabits1237 t1_jae3rde wrote

Lol yeah I was gonna say… personally I've been studying JavaScript for only 3 months and I'm already pushing out a small app and a website, with 0 prior knowledge of the internet in general. With the help of AI, of course. I'm going to keep learning, but I'm just doing stuff on my own. That's one dev job gone, because I'm not going to work for anyone and thus probably never work on a big team project.

2

EnomLee t1_jae36fd wrote

"While much of this will likely remain as part of purely individualized fantasy worlds never meant to be shared, it is foolish to claim that nothing will be shared and that everyone will retreat into their own fantasy worlds."

A post-full-dive society would likely look a lot like the internet. Yes, people would have the option to retreat into their own fully personalized virtual space, like somebody can choose to play a single-player video game. Many other people would opt to exist with others in like-minded communities. For every demographic, interest, opinion and subculture, a virtual world.

15