Recent comments in /f/Futurology

Major-Cranberry-4206 t1_ja81sdn wrote

There are a lot of things that may or will happen that you cannot do anything about. Focusing on these things will cause you to be depressed, and will keep you that way as long as this is your preoccupation.

The good news is that, as you are 15 years old, you have a lot of great choices you can make that could easily help put you in a highly satisfying and rewarding future.

Such as getting your education. You said you would like to work in the area of climate change. You might want to consider engineering. It is math intensive, but by understanding how things are created, you might invent a better way of doing something that significantly reduces the carbon footprint.

Art produced by AI

This is not a bad thing at all. Not understanding what art is can be terrible, especially for people who produce abstract pieces of work and call it art. Abstract work is not art at all, but the opposite of art. But that’s a discussion for another day.

Under no circumstances should you start having children before you are ready to have them, especially financially. This means if you don’t abstain from heterosexual sex, at a minimum use the best contraception possible. Best yet would be to abstain from having sex altogether until you are married, but even then, make sure you are ready to have children.

Ultimately, your future is subject to the choices YOU make. Wise choices today will result in a rewarding future tomorrow. However, bad choices may curse your life for years into the future, and for some people, the rest of their lives.

So your future for the most part is up to you and the choices you make, and the actions you take to shape and create it. Do not be just a passenger in life, where you just react to what comes to you. Be proactive. Research your options for a career. Identify your resources to get there. Set your career goals. Devise a plan on how to get there, and execute on your plan.

For the most part, you will make your future what it is, based on the choices you make for your future. You, and no one else, are responsible for your future. Keep that in mind.

2

FuturologyBot t1_ja81h4n wrote

The following submission statement was provided by /u/CelebrationDirect209:


A natural language model has jumpstarted the process of protein design by creating active enzymes.

Researchers have developed an AI system that can generate artificial enzymes from scratch. In laboratory experiments, some of these enzymes demonstrated efficacy comparable to natural enzymes, even when their artificially created amino acid sequences greatly deviated from any known natural protein.

The experiment shows that natural language processing, initially created for reading and writing language text, can grasp certain fundamental concepts of biology. The AI program, known as ProGen, was developed by Salesforce Research and employs next-token prediction to construct artificial proteins from amino acid sequences.


Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/11ddib3/limitless_possibilities_ai_technology_generates/ja7x2ut/

1

Steamer61 t1_ja80syg wrote

For as long as there have been people, there has been someone predicting the end of the world. For example:

The Cold War, ~1947-1991. The major concern was nuclear war followed by nuclear winter; virtually everyone was predicted to die. As an elementary student in the 1960s, I had regular "Duck and Cover" drills. Hell, I grew up in the '60s and '70s in upstate NY, 20 miles from a major US Air Force SAC base, a major target. We didn't really focus on this all that much, we weren't all cowering in fear, we just lived life. Looking back, that should have been some seriously scary shit!

In the 1970s there were all sorts of predictions: pollution would kill us all, an Ice Age was coming, etc.

Y2K was supposed to be a major problem in 2000. Planes would crash, Wall Street would crash, power would go out, nuclear reactors would melt down, etc. What a major fizzle that turned out to be!

The current "Climate Change" scare started out as "Global Warming", the name got changed when the global warming wasn't happening as fast as it was predicted. Guess what? Climate changes!! Take a look at Al Gore's "An Inconvenient Truth", the documentary that essentially started the whole Global Warming craze, The predictions were much less than accurate.

I could write pages about failed doomsday predictions but there's really no point, there will always be someone predicting the end of the world. Always!!!

Economy? There have been great times and there have been bad times. It is extremely doubtful that you'll see a total collapse. There could be a market crash, it's happened before, but at your age, it'll have little effect. Me, at 61 years old? I'd be pissed, since I'd lose a lot of my retirement money in my 401K, but it still wouldn't be the end of the world for me.

AI? AI has a long way to go before it will ever go mainstream, if ever. Yeah, the current versions can do some amazing things, but most of them are very specialized and cannot do the things humans can do as well as humans do. Yep, they will take some jobs from us; you don't really need a human doing a repetitive job on some assembly line. Humans are still king at creative endeavors, and always will be.

You could go the Prepper route if you're unable to let go of your fears but it could get expensive.

There's always going to be something to worry about in the future if you focus on it. Live your life in the here and now; don't live your life in fear. Plan for the future for sure, but plan for a normal life.

2

Ok-Discussion2246 t1_ja7zxdy wrote

Yep! And it’s going to be incorporated into Project Convergence. It’s actually incredibly fascinating. A few years ago at my old job I was reading up on the concept and some of the tests they did. It’s mind bogglingly terrifying for anyone who’s on the receiving end.

Basically, their goal is to use AI to have various weapons systems "talk" to each other in the battle space and make an ultra-fast decision on the best way to react to any given situation (maximizing threat elimination, minimizing friendly and civilian casualties).

A good way to explain it is a test they did a while back.

Systems included in the test: F-35, MQ-9 (drone), one of those Boston dynamics robot dog things (a big one, basically a pack mule with a sensor package), self propelled artillery with smart shells, a platoon of regular soldiers (walking with the robot dog), and some satellites. All using AI to communicate.

I’ll set the scene.

The robot dog and platoon are walking through a battlefield, think a semi-wooded area. There’s a drone watching from above, an F-35 orbiting nearby, and a satellite watching everything.

All of a sudden, there’s enemy contact and incoming fire from behind cover off in the distance. The sensor package on the robot dog picks it up. Immediately, without any human input, this info is shared with the drone, F-35, satellite, and artillery battery. Now the drone and satellite have eyes on the “enemy”. The AI makes a quick calculation about how to deal with it in the best way. In this particular situation, the best option turns out to be artillery. Within 15 seconds of that first shot ringing out, there is an artillery shell in the air and on its way to the enemy location without any human input whatsoever. And not one soldier was put in any physical danger (outside of the original shots being fired at them).
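The core idea in that scenario, pick the responder that engages fastest with the least collateral risk, can be sketched in a few lines. Everything below is illustrative: the platform names, timings, and risk numbers are made up for the example, not real system data.

```python
# Toy sketch of the "sensor-to-shooter" decision described above: every
# platform shares a detection, and a scoring function picks the effector
# with the best trade-off of engagement time vs. collateral risk.
# All names and numbers here are hypothetical.
from dataclasses import dataclass

@dataclass
class Effector:
    name: str
    seconds_to_engage: float  # time from tasking to effect on target
    collateral_risk: float    # 0.0 (none) .. 1.0 (high)

def choose_effector(effectors, risk_weight=100.0):
    """Rank options by a combined cost: engagement time plus a heavy
    penalty for collateral risk, then take the cheapest option."""
    return min(
        effectors,
        key=lambda e: e.seconds_to_engage + risk_weight * e.collateral_risk,
    )

options = [
    Effector("F-35 strike", 120.0, 0.30),
    Effector("MQ-9 missile", 90.0, 0.20),
    Effector("artillery smart shell", 15.0, 0.05),
]

best = choose_effector(options)
print(best.name)  # → artillery smart shell
```

In this toy scoring, artillery wins (cost 15 + 5 = 20) over the MQ-9 (90 + 20 = 110) and the F-35 (120 + 30 = 150), matching the outcome in the story. The real system's decision logic is far more involved; this just shows the shape of the optimization.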

0

Affectionate-Aide422 t1_ja7ynac wrote

Back in the 80s, my friend’s dad wouldn’t let him go into programming because AI was going to kill all programming jobs. My buddy missed out on a huge career in tech. Proceed like the singularity is 50 years away. That’s what I did.

2

Actaeus86 t1_ja7x6x7 wrote

Well it will never happen. What is a sensible law in China is not sensible in Italy, or Brazil. Same thing with ethics and cultural norms. Humans are childish. Half the world will say no to a good law just because X country suggested it. I personally don’t want to live under a universal one world government even if it was possible, which it isn’t.

6

CelebrationDirect209 OP t1_ja7x2ut wrote

A natural language model has jumpstarted the process of protein design by creating active enzymes.

Researchers have developed an AI system that can generate artificial enzymes from scratch. In laboratory experiments, some of these enzymes demonstrated efficacy comparable to natural enzymes, even when their artificially created amino acid sequences greatly deviated from any known natural protein.

The experiment shows that natural language processing, initially created for reading and writing language text, can grasp certain fundamental concepts of biology. The AI program, known as ProGen, was developed by Salesforce Research and employs next-token prediction to construct artificial proteins from amino acid sequences.

15

UniversalMomentum t1_ja7v6sw wrote

I think for that to be practical you need a lot of robotic automation to raise the standard of living and reduce the need for humans to compete against each other just to survive. Otherwise what you're saying is pretty much what the UN is already trying to do, but with nowhere near enough resources to make it happen.

We need to lure the global population into such a plan, so we need something to lure them with, and something like robotic automation lowering the cost of all commodities and labor is the best plan I can think of to help reduce greed and give people fewer reasons to fight each other constantly. Otherwise there is a literal constant benefit to screwing each other over, and the more desperate your situation, the bigger the incentive. Kind of like when we imagine a world where food runs out and law and order falls rapidly with it. That's the wild aspect of humanity we are dealing with, so we need a way to stabilize people's living conditions so they act more sane and predictable, versus desperate and lawless, as we commonly see anytime living conditions deteriorate.

3

Tnuvu t1_ja7uw0f wrote

A.I. is coded in our likeness and mentality and trained on the internet, which is the pinnacle of "our intellect" with all its downfalls.

Given humanity cannot appreciate humanity, how can we expect our "child" to do what we cannot?

This is why Mo Gawdat mentioned we should make sure A.I. also sees the good in us, before it's too late.

2

superjudgebunny t1_ja7up3e wrote

:/ It’s not that simple. We still use the olfactory sense. This is hard to represent digitally and does not translate well logically. How do you program drive, also called will, the will to live? Pain? These things are all very interconnected; bio signals aren’t simple.

Where do you start? How would you give AI the idea of empathy? You still have to provide an input. WHAT would that be? This is the hard question.

Musk has hinted towards this with the brain implant. We would need an interface that can translate these things. Then you're just imprinting the human response; might as well build an organic computer…

The reason it’s so hard is the same reason describing human emotions is difficult. What is love without using love as a description? What’s love vs infatuation? Philosophically speaking, we cannot easily do this.

1