Recent comments in /f/Futurology

South_Cheesecake6316 t1_jecllt2 wrote

We already have neural networks that can create images that roughly resemble what a person is looking at.

I don't think it would be entirely out of the question for people to be able to make videos from thought when asked to recall a memory.

However, although the subject and location of these videos would often be correct, I doubt the smaller details would be consistently accurate, or even present at all. I highly doubt that you would be able to scan a person's memory for clues like in some sci-fi movie, because ultimately memory is flawed and people's brains will fabricate details to fill in the gaps.

Short answer, yes but it won't be a perfect copy of what happened.

1

beingsubmitted t1_jecici1 wrote

The algorithm is barely IP; the data is the bigger part of its success.

ChatGPT is a reinforcement-learning-tuned transformer. The ideas and architecture it's built on aren't proprietary. The specific parameters are, such as the size and number of layers, but that's not actually that important. Most people in AI can make reasonable assumptions: probably ReLU, probably Adam, etc. Then there are different knobs you can twiddle, and with some trial and error you dial it in.

The size and quality of your training data is way more important, and in the case of ChatGPT, so is your compute power. Lots of people can design a system that big; it's as easy as coming up with big numbers. But training it takes a ton of compute power, which costs money, which is why not just anyone has already done it.
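To illustrate the "big numbers" point, here's a back-of-envelope sketch (my own illustration, nothing from OpenAI). The only real figures are the common ~12 * d_model² weights-per-layer rule of thumb and GPT-3's published shape (96 layers, d_model of 12288); the helper function and vocab default are just for this example:

```python
# Designing a "big" transformer really is mostly picking big numbers.
# Rough rule of thumb: each decoder layer holds about 12 * d_model**2
# weights (~4*d**2 in attention, ~8*d**2 in the feed-forward block).

def approx_params(n_layers: int, d_model: int, vocab: int = 50257) -> int:
    per_layer = 12 * d_model ** 2   # attention + feed-forward weights
    embeddings = vocab * d_model    # token embedding table
    return n_layers * per_layer + embeddings

# GPT-3's published shape lands right around its famous parameter count.
print(f"{approx_params(96, 12288) / 1e9:.0f}B")  # prints 175B
```

Picking those two numbers took seconds; paying for the gradient updates behind them is the hard part.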

It should also be said that GPT is a bit of a surprise success. Before models this size existed, it was a big risk: you're gonna spend millions to train a model, and you won't know how good it will be until it's done.

Most advancements in AI are open source and public. Those all help advance the field, but at the same time, progress is also about taking a bit of a risk and waiting to see how it pans out before taking the next one.

Also, there's transfer learning. If you spend a hundred million training a model, I can use your trained model and a fraction of the money to make my own.

It's like if you laboriously took painstaking measurements to figure out an exact kilogram and craft a 1kg weight. You didn't invent the kilogram, difficult as it was to make it. If I use yours to make my own, I'm not infringing on your IP.
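Transfer learning in miniature might look something like this: a toy numpy sketch I made up, where the expensive "pretrained" part is frozen and only a small new head gets fit:

```python
import numpy as np

# Toy illustration of transfer learning (all names and data invented):
# reuse a frozen "pretrained" feature extractor, fit only a cheap new head.
rng = np.random.default_rng(0)

pretrained_W = rng.normal(size=(4, 8))  # stand-in for the expensive model

def features(x):
    # Frozen: we never update pretrained_W again.
    return np.maximum(0.0, x @ pretrained_W)

# Tiny synthetic task whose labels depend on those frozen features.
X = rng.normal(size=(64, 4))
y = features(X) @ rng.normal(size=(8,))

# "Fine-tune" just the head -- a plain least-squares fit here, where a
# real setup would run a few epochs of SGD on the new task's data.
head, *_ = np.linalg.lstsq(features(X), y, rcond=None)

mse = float(np.mean((features(X) @ head - y) ** 2))
print(f"head-only fit MSE: {mse:.2e}")  # essentially zero
```

The frozen part carries all the expensive knowledge; the cheap fit on top is the only thing you pay for.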

1

Semifreak t1_jeci8px wrote

Indeed.

It's strange how some commenters (here and elsewhere) don't just express doubt about optimistic plans and targets for the future, but almost bark at and attack any comment that isn't completely negative.

I don't know if they're just having a bad day and venting randomly, or if something else odd is going on.

Humans gonna human, I guess.

3

Rehk_135 t1_jeci2nu wrote

Humans have a need for fulfillment and meaning. Even if you don't have to work, most people will want to do... something. The alternative is the human equivalent of the rat utopia experiment.

Probably not as bad, but it's very common for people retiring to go through some depression due to feeling a lack of purpose.

What we'd end up doing? Who knows. Sports, volunteering, exploration, tending to nature, religion, spirituality, or learning for the hell of it are all plausible. I'm sure many would slip through the cracks too and end up miserable no matter what.

Good news, bad news though... The good news is we won't have to worry about this. The bad news is that's because a dystopian hellscape is far more likely than a scarcity-free utopia, imho.

9

manicdee33 t1_jeci1fa wrote

Nah, there's a level in there somewhere where the human population is stable and able to continue being creative and inventive. How cute is it when humans think they've discovered a new law of physics? Awww!

If you go higher, they end up over-consuming renewable resources such as fresh water. If you go lower, the population ends up inbred or just dies off completely.

Also by managing the human population (and a small number of predator species populations outside the human zone of influence) the rest of the ecosystem manages itself quite handily.

Oh, have you seen what we did with Mars and Venus? The Venusian fjords are just chef's kiss.

2

Cerulean_IsFancyBlue t1_jecghpo wrote

So this is interesting. On the one hand, I am very pessimistic that we are anywhere close to achieving human-level intelligence and cognition. I don't think it possesses intuition or feelings, or any of the things that you might think are necessary for true creativity.

But … this might be a generation of AI that is actually better at creativity than it is at being factual and correct. Language generation has the ability to produce sentences that are plausible and coherent. But without some kind of additional subsystem it's actually not very good at fact checking. So it's possible that this tool will be a boost to human creativity by being able to generate tons of alternatives and variations on ideas, and not a boost to human accuracy or precision like many previous generations of "Thinking Machines" have been.

GPT is less “calculator” and more “crazy friend who spits out inspiring nonsense”. It produces fanciful novel output — made of the things you put into it, rearranged. But it does so in such a powerful way, drawing on such a wealth of examples, that the output can actually feel creative. Usually it’s creative via an existing style, so it’s a derivative sort of creativity, if that’s not an oxymoron.

But anyway. I find it interesting that in terms of how this boosts human abilities, it's more of a creativity boost.

2

Ansalem1 t1_jecfxan wrote

I agree with pretty much all of that. I've been getting more hopeful lately, for the most part. It really does look like we can get it right. That said, I think we should keep in mind that more than a few actual geniuses have cautioned strongly against the dangers. So, you know.

But I'm on the optimistic side of the fence right now, and yeah if it does go bad it'll absolutely be because of negligence and not inability.

1