Recent comments in /f/Futurology

Psychomadeye t1_ja6xfi9 wrote

Reply to comment by MaiGaia in Their future is AI, not ours. by [deleted]

AI would be a poor tool to use for diagnostics when you think about it. You'd be better served by something that runs through a list and has reliable output. It can be done, but I'm just not sure why it would be the choice. Online shopping is definitely in the AI wheelhouse.
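A toy sketch of the kind of rule-based check I mean: run through a list, get a deterministic, auditable answer. The rules and symptoms here are invented purely for illustration, not from any real diagnostic system.

```python
# Hypothetical rule table: each entry is (required symptoms, label).
RULES = [
    ({"fever", "cough"}, "possible flu"),
    ({"fever", "rash"}, "possible measles"),
    ({"headache", "stiff neck"}, "possible meningitis"),
]

def diagnose(symptoms):
    """Return every rule whose required symptoms are all present."""
    found = set(symptoms)
    return [label for required, label in RULES if required <= found]

print(diagnose(["fever", "cough", "headache"]))  # ['possible flu']
```

Same inputs, same outputs, every time, and you can read off exactly why it answered the way it did.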

1

oreola-circus t1_ja6w6wa wrote

Reply to comment by Bismar7 in Their future is AI, not ours. by [deleted]

>AI is our future and the advance is exponential not linear. From 1700 to now what is the progress towards AI?

In the late 17th century Isaac Newton and Gottfried Leibniz invent calculus. Through the rest of the 1700s mathematics has a huge number of advancements because of it. Things like vectors and spaces began to take shape. In the 1850s people started to play with matrices to solve systems of equations and define spaces and operations. This happened not long after Ada Lovelace created the first programming language to run on the first computer. By the end of the century computers were there from a science perspective, but it would be another twenty years before the first really effective machines are made and another 20 after that to have one fast enough to break enigma. The technology we know today as AI was officially described in 1948 but it's just an idea in linear algebra to create an artificial neuron, to be run on those machines.

From the late 40s to the late 60s there were massive improvements to AI as a technology. Somewhere in there is Arthur Samuel's program that learned to play checkers well enough to beat strong human players. The 70s were relatively quiet, as AI didn't have the capacity to do much that was useful on the hardware of the day. There is more in the 1980s as hardware catches up to the idea, but we don't really see anything dramatic until 1997, when Deep Blue beat Kasparov. Then everyone panicked and spent the next ten years saying "the machines will take over in the next couple of years." In late 2022 we had another Kasparov moment, and people went full doomer because the AI drew a picture and wrote code that looks like it could work but doesn't.

−1

OpusChao t1_ja6w1nj wrote

I don't think it's anywhere near as bad as it seems. Might be some hard times ahead, but we'll come out stronger and better than ever. It's important to see the silver lining in all these things.

Climate change could lead to an end to the exploitation of our ecosystem, encourage global cooperation, and bring about a civilization that's much more harmonious with nature than ever before.

AI could become our greatest ally, help us create medicine and gene editing like never before, and elevate humanity to a new level. Instead of seeing them as competition, we can look at them like new friends. We shouldn't see AI as the empty mechanical tools the tech companies want us to see them as, but instead consider that what we're doing with AI is creating new life, giving birth to a new race of beings.

Economic collapse could lead to a new, much more mature and fair way of handling resources. It could lead to a much more equal world, with more cooperation and a greater sense of purpose for everyone. A world where people aren't treated like a commodity, and where the meaning of our lives isn't supposed to be work and material accumulation. New and deeper philosophy will take its place, something that will inspire us far more than gold and fame ever could.

When things are changing it's always a little chaotic, but generally we come out of it better and stronger, reaching new heights we might not even have imagined possible. So while I don't discount that our problems are massive, I'm still optimistic and believe these changes are for the best. A great deal of good can come from all this, so don't give up, keep dreaming, and always look on the bright side.

2

Really_McNamington t1_ja6vq1o wrote

Bold claim that we actually know how our brains work. Neurologists will be excited to hear that we've cracked it. The ongoing work at OpenWorm suggests there may still be some hurdles.

To my broader claim: ChatGPT is just a massively complex version of Eliza. It has no self-generated semantic content. There's no mechanism at all by which it can know what it's doing. Even though I don't know how I'm thinking, I know that I'm doing it. LLMs just can't do that, and I don't see how that could emerge via this route.
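For anyone who hasn't seen Eliza: it was pure surface pattern-matching, transforming your words back at you with no model of what they meant. A toy version (the patterns here are invented for illustration, not Weizenbaum's actual script):

```python
import re

# Each rule: (input pattern, response template using the captured text).
RULES = [
    (r"I am (.*)", "Why do you say you are {0}?"),
    (r"I feel (.*)", "How long have you felt {0}?"),
]

def respond(text):
    """Echo the user's own words back via the first matching pattern."""
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(m.group(1))
    return "Tell me more."

print(respond("I am worried about AI"))
# Why do you say you are worried about AI?
```

The program never understands "worried about AI"; it just shuffles the string. My point is that scale changes the fluency, not that basic situation.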

1

aminok OP t1_ja6u9pi wrote

Artificial entities can reproduce through mass-production. This means rates of population growth radically above what's possible for biological organisms. In any given area of the universe, we may see the habitable areas being saturated with such entities, so that even while the civilization expands in all directions into space to become enormously powerful, each individual lives a squalid existence competing with millions of other digital people in every cubic kilometer.

This is a worst-case scenario that deserves serious research to ensure that it would not transpire before we even entertain the possibility of allowing IDL to emerge and gain a foothold.

3

Chroderos t1_ja6tu1f wrote

As digital entities, our “bodies” would be immensely hardened compared to our current biological ones. This, combined with the simplification of our physical needs, would make expansion into, and exploitation of, space far, far easier than it is for us presently.

If we’re at that point, the energy available to us scales so massively we probably don’t have to fear a Malthusian situation. Just start harnessing the energy of the next star whenever things get crowded.

As for trying to prevent the worst case scenarios, I’m sure we’ll try to do that. Can’t have a paperclip optimizer fill up the universe. I’m just not sure insisting on preserving humanity in its current form beyond the point AI exceeds us makes a lot of sense.

1

NewDad907 t1_ja6tk7w wrote

Is there a way I can get paid to call out bullshit? I think I’m pretty good at it, and I don’t mind standing in a conference room ripping into idiotic ideas.

1

aminok OP t1_ja6th8e wrote

If we become digital entities, that may lead to massive proliferation of intelligent digital entities through digital reproduction until we are all fighting over increasingly scarce resources. It is a kind of digital Malthusian crisis. It may be a very undignified existence.

As for encouraging or welcoming their emergence as if they were our descendants: unfortunately, we can't rely on optimism to protect us from worst-case scenarios, and given the stakes - the survival of all of us - we have to do everything we can to prevent those scenarios from unfolding.

2

Psychomadeye t1_ja6t5lx wrote

Reply to comment by Bismar7 in Their future is AI, not ours. by [deleted]

>Well, the determination of the limits on AI is their hardware, as what we build can host more complex minds.

This is not true at all.

>Right now humans are better, over time they will reach where we are and moving forward their hardware will keep advancing, and likely merge with humans to be the best we can design. A hybrid of organic and electrical knowledge that is unimaginable today.

Drugs are bad.

>However I would say during 2027-2028 likely AI will achieve competency in the same tasks any 25 year old adult has on a commercial level, but we will have to see.

Source for this?

−1

Chroderos t1_ja6t5kn wrote

It might be that they decide to work to “uplift” us to their superior state of being and make us equals. That would be wonderful.

Either way, I think it is better to view them as our children, our descendants, the next torchbearers of the legacy of humanity, rather than something to be suppressed because we want to hang on to the same physical form we have now.

I think what you are saying above is that we should take it slow and try to integrate advances into our own bodies and minds, right? The issue is, human behavior pretty much guarantees we won’t do this. Someone, somewhere will be motivated to take the easier route of developing the AI first, I think.

1

Psychomadeye t1_ja6sw8a wrote

Reply to comment by o_o_o_f in Their future is AI, not ours. by [deleted]

The technology underpinning what we call AI today was invented in 1943. It was improved in the 50s and 60s but was then abandoned, basically because it sucked on the hardware of the time. We developed better hardware and picked it back up in the 90s, with massive improvements since then. Only since some OpenAI toys appeared has this subreddit cared. All that's really going to happen for us as developers is that our environments will have better code completion.

I'm sometimes worried how this sub is going to respond twenty years from now when they find out about the Vietnam war.

0

aminok OP t1_ja6sp89 wrote

Our lives are too precious to do anything but guard them jealously. If some aspect of such technology is indeed superior, it will eventually find its way into humanity. It may take a bit longer, but it will ensure that we, who ultimately deserve the credit for all of this, don't vanish.

Such an outcome - where we incorporate advanced technology rather than being replaced by a new form of artificial life built from it - also leads to a more robust platform for the continuation of consciousness, as it preserves the original biological forms of intelligence, which are far more resilient to any kind of catastrophe that ends or severely degrades industrial civilization.

5

Chroderos t1_ja6rzqo wrote

Hey man, if they’re better than us, we should consider them like children that have exceeded their parents and turn the future over to them fully rather than trying to contain them.

They’d be our descendants of a sort, and our betters, and we should let them reach their potential rather than trying to hang on as a jealous, outdated species.

2