Recent comments in /f/Futurology
idranh t1_jcdvy3e wrote
Reply to comment by izumi3682 in "This Changes Everything" by Ezra Klein--The New York Times by izumi3682
In his seminal 1993 essay, The Coming Technological Singularity, Vernor Vinge writes, "Just so I'm not guilty of a relative-time ambiguity, let me be more specific: I'll be surprised if this event occurs before 2005 or after 2030." Vinge may have been right all along.
FuturologyBot t1_jcdvnw5 wrote
The following submission statement was provided by /u/izumi3682:
Submission statement from OP. Note: This submission statement "locks in" after about 30 minutes and can no longer be edited. Please refer instead to the statement at the link, which I can continue to edit. I often edit my submission statement, sometimes over the next few days if needs must; it often requires additional grammatical editing and added detail.
The opening of this article tells you everything you need to know.
>In 2018, Sundar Pichai, the chief executive of Google — and not one of the tech executives known for overstatement — said, “A.I. is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire.”
>Try to live, for a few minutes, in the possibility that he’s right. There is no more profound human bias than the expectation that tomorrow will be like today. It is a powerful heuristic tool because it is almost always correct. Tomorrow probably will be like today. Next year probably will be like this year. But cast your gaze 10 or 20 years out. Typically, that has been possible in human history. I don’t think it is now.
>Artificial intelligence is a loose term, and I mean it loosely. I am describing not the soul of intelligence, but the texture of a world populated by ChatGPT-like programs that feel to us as though they were intelligent, and that shape or govern much of our lives. Such systems are, to a large extent, already here. But what’s coming will make them look like toys. What is hardest to appreciate in A.I. is the improvement curve.
>“The broader intellectual world seems to wildly overestimate how long it will take A.I. systems to go from ‘large impact on the world’ to ‘unrecognizably transformed world,’” Paul Christiano, a key member of OpenAI who left to found the Alignment Research Center, wrote last year. “This is more likely to be years than decades, and there’s a real chance that it’s months.”
I constantly reiterate: the "technological singularity" (TS) is going to occur as early as 2027 or as late as 2031. But you know what? Even that could be as many as three years too late. The TS could occur in 2025. But I just don't feel comfortable saying as early as 2025. That is the person of today's world in me, who thinks even 2027 is sort of pushing it. It's just too incredible, even for me. I say 2027 because I tend to rely on what I call the accelerating-change "fudge factor," which is how Raymond Kurzweil came to the conclusion in 2005 that the TS would occur in 2045. He knows now that his prediction was wildly too conservative; he now acknowledges that the TS will probably occur around 2029.
I put it like this in a very interesting dialogue with someone I have argued with, for almost the last seven years I believe, about what was coming and on what timeline. Now he is a believer.
Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/11shevz/this_changes_everything_by_ezra_kleinthe_new_york/jcdrt9v/
Codydw12 t1_jcdswpf wrote
> In a 2022 survey, A.I. experts were asked, “What probability do you put on human inability to control future advanced A.I. systems causing human extinction or similarly permanent and severe disempowerment of the human species?” The median reply was 10 percent.
> I find that hard to fathom, even though I have spoken to many who put that probability even higher. Would you work on a technology you thought had a 10 percent chance of wiping out humanity?
> We typically reach for science fiction stories when thinking about A.I. I’ve come to believe the apt metaphors lurk in fantasy novels and occult texts. As my colleague Ross Douthat wrote, this is an act of summoning. The coders casting these spells have no idea what will stumble through the portal. What is oddest, in my conversations with them, is that they speak of this freely. These are not naifs who believe their call can be heard only by angels. They believe they might summon demons. They are calling anyway.
> I often ask them the same question: If you think calamity so possible, why do this at all? Different people have different things to say, but after a few pushes, I find they often answer from something that sounds like the A.I.’s perspective. Many — not all, but enough that I feel comfortable in this characterization — feel that they have a responsibility to usher this new form of intelligence into the world.
> A tempting thought, at this moment, might be: These people are nuts. That has often been my response. Perhaps being too close to this technology leads to a loss of perspective. This was true among cryptocurrency enthusiasts in recent years. The claims they made about how blockchains would revolutionize everything from money to governance to trust to dating never made much sense. But they were believed most fervently by those closest to the code.
So throw it all in the trash? Stop fighting demons? Or is it worth it to take a risk that we might burn out in an attempt to create technologies that progress to the point of immense benefit? This just reads like fearmongering.
I do not see AI as some cure-all, nor do I believe it will completely replace humanity as some on here seem to believe, but I do believe that a lot of the benefits that could come from it are worth it.
> Could A.I. put millions out of work? Automation already has, again and again. Could it help terrorists or antagonistic states develop lethal weapons and crippling cyberattacks? These systems will already offer guidance on building biological weapons if you ask them cleverly enough. Could it end up controlling critical social processes or public infrastructure in ways we don’t understand and may not like? A.I. is already being used for predictive policing and judicial sentencing.
Again, fearmongering. Automation and job loss are a constant fear. Terrorists and bad actors have always been feared to acquire nukes, yet none have to date. The predictions can also help eliminate disease and aid crime prevention by helping those in need, who are often the most predisposed to commit crime.
izumi3682 OP t1_jcdrt9v wrote
Submission statement from OP. Note: This submission statement "locks in" after about 30 minutes and can no longer be edited. Please refer instead to the statement at the link, which I can continue to edit. I often edit my submission statement, sometimes over the next few days if needs must; it often requires additional grammatical editing and added detail.
The opening of this article tells you everything you need to know.
>In 2018, Sundar Pichai, the chief executive of Google — and not one of the tech executives known for overstatement — said, “A.I. is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire.”
>Try to live, for a few minutes, in the possibility that he’s right. There is no more profound human bias than the expectation that tomorrow will be like today. It is a powerful heuristic tool because it is almost always correct. Tomorrow probably will be like today. Next year probably will be like this year. But cast your gaze 10 or 20 years out. Typically, that has been possible in human history. I don’t think it is now.
>Artificial intelligence is a loose term, and I mean it loosely. I am describing not the soul of intelligence, but the texture of a world populated by ChatGPT-like programs that feel to us as though they were intelligent, and that shape or govern much of our lives. Such systems are, to a large extent, already here. But what’s coming will make them look like toys. What is hardest to appreciate in A.I. is the improvement curve.
>“The broader intellectual world seems to wildly overestimate how long it will take A.I. systems to go from ‘large impact on the world’ to ‘unrecognizably transformed world,’” Paul Christiano, a key member of OpenAI who left to found the Alignment Research Center, wrote last year. “This is more likely to be years than decades, and there’s a real chance that it’s months.”
I constantly reiterate: the "technological singularity" (TS) is going to occur as early as 2027 or as late as 2031. But you know what? Even that could be as many as three years too late. The TS could occur in 2025. But I just don't feel comfortable saying as early as 2025. That is the person of today's world in me, who thinks even 2027 is sort of pushing it. It's just too incredible, even for me. I say 2027 because I tend to rely on what I call the accelerating-change "fudge factor," which is how Raymond Kurzweil came to the conclusion in 2005 that the TS would occur in 2045. He knows now that his prediction was wildly too conservative; he now acknowledges that the TS will probably occur around 2029.
I put it like this in a very interesting dialogue with someone I have argued with, for almost the last seven years I believe, about what was coming and on what timeline. Now he is a believer.
lazyeyepsycho t1_jcdmvjj wrote
Reply to comment by Gigazwiebel in IVO Ltd. to Launch Quantum Drive Pure Electric Satellite Thruster into Orbit on SpaceX Transporter 8 with partner Rogue Space Systems by ComfortableIntern218
It's an expensive hoax, though, certainly.
Full commitment.
zenzukai t1_jcdc4iv wrote
Reply to comment by Shadowkiller00 in What are some jobs that AI cannot take? by Draconic_Flame
Because every story needs conflict, and most stories about AI are about conflicts with AI.
the-real-macs t1_jcdaoqq wrote
Reply to comment by Cdn_citizen in What are some jobs that AI cannot take? by Draconic_Flame
Okay, yeah, that's what I thought: you don't have the faintest beginner's knowledge of how it's actually accomplished. Should've known when you implied that the concept held any relevance to the behavior of a neural network, but I thought I'd make sure.
Shadowkiller00 t1_jcda64e wrote
Reply to comment by zenzukai in What are some jobs that AI cannot take? by Draconic_Flame
And that is the basis of all the "AI takes over the world and murders everyone for the good of humanity" stories.
Shadowkiller00 t1_jcd9ten wrote
Reply to comment by bound4mexico in What are some jobs that AI cannot take? by Draconic_Flame
>What wish is that?
I'm expressing a desire to have aliens come judge us as a species. That's it. That's all I'm saying. I can't be wrong about it because it's a wish and a silly one at that. You can argue all day that we can and should do it ourselves, that my desire doesn't make sense because that isn't the way ethics work, that my definitions are wrong, but that has nothing to do with anything I'm saying.
I literally don't have to read a single other word you said because everything you are saying is irrelevant.
muther22 t1_jcd99qa wrote
Reply to comment by eyeteabee-Studio in What are some jobs that AI cannot take? by Draconic_Flame
One of the first chatbots, ELIZA (from the 1960s), did exactly this.
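For anyone curious, the trick behind ELIZA was simple pattern matching plus pronoun "reflection," not any understanding. A minimal sketch of the technique in Python (the patterns and canned replies below are my own illustrative choices, not ELIZA's actual therapist script):

```python
import re

# ELIZA-style reflection: swap first- and second-person words so the
# user's statement can be turned back into a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(fragment):
    """Swap pronouns in a lowercase fragment, word by word."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement):
    """Match a few hard-coded patterns and echo the statement back."""
    m = re.match(r"i feel (.*)", statement, re.IGNORECASE)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    m = re.match(r"i am (.*)", statement, re.IGNORECASE)
    if m:
        return f"How long have you been {reflect(m.group(1))}?"
    return "Please tell me more."

print(respond("I feel anxious about my job"))
# -> Why do you feel anxious about your job?
```

The whole program is a list of such rules; the illusion of a listening therapist comes entirely from the reflection step.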
zenzukai t1_jcd8vcz wrote
Reply to comment by Shadowkiller00 in What are some jobs that AI cannot take? by Draconic_Flame
Honestly, it's pretty easy:
1 - don't damage people
2 - prevent damage to people
3 - always assume people are dumb and unethical
zenzukai t1_jcd7wvp wrote
Reply to comment by random_dollar in What are some jobs that AI cannot take? by Draconic_Flame
How long, really, though? The general trend is that data-driven occupations are going to be the first to go. The guy who refurbishes robots will be employed much longer.
zenzukai t1_jcd6qml wrote
Honestly, I think therapist is one of the first jobs on the chopping block. I think automation is the ONLY way to promote better habits effectively.
You'd be better off as a sex worker. They'll still be cheaper than a sex-bot for a while.
Svarog1984 t1_jcd52jx wrote
Paradoxically, the "oldest job" is the most immune to the newest technologies.
dgj212 t1_jcd3qz0 wrote
Reply to What can a ChatGPT developed by a well-funded intelligence agency such as the NSA be used for? Should we be concerned? by yoaviram
Honestly, it's not the agencies you have to worry about; it's the corporations with aggressive marketing tactics, and even unethical ones like slandering their competitors.
CrelbowMannschaft t1_jccwtjy wrote
Reply to comment by Kaz_55 in 'Highly Maneuverable' UFOs Defy All Physics, Says Government Study by Gari_305
> But basically all of these reports are based on "edge of observability" cases.
Do you have a source for that? I haven't seen that phrase or others like it widely used in the official reporting.
>Not to mention that as far as I am aware most cases aren't even down to sensor data but simply observers.
Then your information is severely lacking.
Independent_Canary89 t1_jccr1wj wrote
Anything physically taxing; think most blue-collar work. Pretty much anything requiring a computer, or requiring creativity, will be automated.
As it stands, humans are really only good for physical labor. I think there's irony in the fact that, in an age of advanced technology, what matters most about a person is their ability to do manual labor.
minterbartolo t1_jccqhlr wrote
Reply to comment by Cdn_citizen in What are some jobs that AI cannot take? by Draconic_Flame
It is a quiet week at JSC because of spring break for the kids, so the workload is light.
Cdn_citizen t1_jccpvvb wrote
Reply to comment by minterbartolo in What are some jobs that AI cannot take? by Draconic_Flame
Nice try, but my friends who are actual rocket scientists don't have time in their day jobs to waste on Reddit. They're busy trying to get to Mars.
Who's moving goalposts now? We are talking about licensed human therapists versus hacking.
Now you're bringing up a new term, "human hacking."
Maybe if you have so much free time you should Google what you say before you say it.
Kaz_55 t1_jccod90 wrote
Reply to comment by InevitableGrand956 in 'Highly Maneuverable' UFOs Defy All Physics, Says Government Study by Gari_305
See
https://en.wikipedia.org/wiki/Fermi_paradox
While the chance should be very high, the evidence we have so far doesn't support this. The question is why. We as a species are currently in the process of answering that question, i.e. through the rapid decline of our only known biosphere capable of supporting higher forms of life, caused by the (rapid) advancement in technology you cite, without the ability to deal with the pitfalls of said technology.
What you are referring to is known as the Drake equation.
What we are currently experiencing is known as the Great Filter.
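For reference, the Drake equation is just a product of factors estimating the number of detectable civilizations in the galaxy. A minimal sketch, where every parameter value is an illustrative guess rather than a measurement:

```python
# Drake equation: N = R* x f_p x n_e x f_l x f_i x f_c x L
# All parameter values below are illustrative assumptions, not data.
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimated number of detectable civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

n = drake(
    r_star=1.5,    # star formation rate (stars per year)
    f_p=1.0,       # fraction of stars with planets
    n_e=0.2,       # habitable planets per planet-bearing star
    f_l=0.5,       # fraction of those where life arises
    f_i=0.1,       # fraction of those developing intelligence
    f_c=0.1,       # fraction of those emitting detectable signals
    lifetime=1000  # years a civilization remains detectable
)
print(n)  # with these guesses, roughly 1.5 civilizations
```

The point of the Fermi paradox is exactly the tension above: plug in even mildly optimistic guesses and N comes out greater than one, yet we observe no one.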
Kaz_55 t1_jccn72e wrote
Reply to comment by CrelbowMannschaft in 'Highly Maneuverable' UFOs Defy All Physics, Says Government Study by Gari_305
> The other is a disturbing, frightening possibility. If we have widespread problems with these kinds of sensors[...]
But basically all of these reports are based on "edge of observability" cases, which is also why the official report cites a lack of high-quality data. These problems are inherent to any kind of sensor system and to observations made with it. Not to mention that, as far as I am aware, most cases aren't even down to sensor data but simply to observers. "Overhauling" these systems will at best shift the edge of observation, not address the underlying problem.
Case in point: the outlandish claims by pilots about "physics-breaking" maneuvers, which are not supported by the actual videos presented as evidence, and observers who are unable to correctly identify position lights on aircraft, let alone account for basic optics.
EDIT:
It seems like CrelbowMannschaft put me on their ignore list in order to prevent me from actually replying to this. I wonder why.
The official reports outright decry the lack of high-quality data and note that the nature of the employed sensors makes them ill-suited for the task they are put to here. They also list sensor malfunctions, observer misperception, and airborne clutter as possible explanations. Given that the Pentagon had to admit that the only actual evidence presented (the videos) shows exactly that; that the plethora of other "UFO" reports made by basically everyone around the globe come down to "edge of observability" issues (distance, speed, focus, etc.); and that so far every report ever made has turned out to be explainable by mundane means once high-quality data was available, I am led to believe that this is, in fact, inherent to sensor or observer limitations and not in any way "disturbing" or "frightening" - apart from actual military observers (not combat pilots) being unable to identify collision lights.
isleepinahammock t1_jccmmd0 wrote
Reply to comment by ComfortableIntern218 in IVO Ltd. to Launch Quantum Drive Pure Electric Satellite Thruster into Orbit on SpaceX Transporter 8 with partner Rogue Space Systems by ComfortableIntern218
It's claimed not to expel anything, including electrons. IIRC, it's based on some theories of quantized inertia, which can apparently be harnessed somehow to create a reactionless drive. I'm skeptical, but I say go for it if you think it will work.
Dry_Rip5135 t1_jcclf6p wrote
Reply to comment by Jorsonner in What are some jobs that AI cannot take? by Draconic_Flame
You're correct at the moment; all those jobs will be obsolete eventually.
chill633 t1_jcchtfq wrote
Reply to comment by Shadowkiller00 in What are some jobs that AI cannot take? by Draconic_Flame
Isn't that the group Microsoft laid off a couple of days ago? I mean technically if they don't replace them with an AI that's an answer to the question posed, but there's always the "we're eliminating that position" option.
Coachtzu t1_jcdxi2h wrote
Reply to comment by Codydw12 in "This Changes Everything" by Ezra Klein--The New York Times by izumi3682
You're cherry picking. He addresses this in the article. We can't afford to be left behind, yet we also don't understand what we are racing towards.
Automation has also already cost jobs. It will cost more. This is not controversial. We need to figure out how we adapt to a world where our work does not and should not define us.