Recent comments in /f/Futurology
Codydw12 t1_jcf9wnq wrote
Reply to comment by Coachtzu in "This Changes Everything" by Ezra Klein--The New York Times by izumi3682
> You're cherry picking. He addresses this in the article. We can't afford to be left behind, yet we also don't understand what we are racing towards.
> > But I don’t think these laundry lists of the obvious do much to prepare us. We can plan for what we can predict (though it is telling that, for the most part, we haven’t). What’s coming will be weirder. I use that term here in a specific way. In his book “High Weirdness,” Erik Davis, the historian of Californian counterculture, describes weird things as “anomalous — they deviate from the norms of informed expectation and challenge established explanations, sometimes quite radically.” That is the world we’re building.
> > I cannot emphasize this enough: We do not understand these systems, and it’s not clear we even can. I don’t mean that we cannot offer a high-level account of the basic functions: These are typically probabilistic algorithms trained on digital information that make predictions about the next word in a sentence, or an image in a sequence, or some other relationship between abstractions that it can statistically model. But zoom into specifics and the picture dissolves into computational static.
> > That is perhaps the weirdest thing about what we are building: The “thinking,” for lack of a better word, is utterly inhuman, but we have trained it to present as deeply human. And the more inhuman the systems get — the more billions of connections they draw and layers and parameters and nodes and computing power they acquire — the more human they seem to us.
None of this seems actually profound or useful to me. Saying that the AIs that we build will be alien to our own thinking? To me that, in his own words, is in the laundry list of obvious.
> Automation has also already cost jobs. It will cost more. This is not controversial. We need to figure out how we adapt to a world where our work does not and should not define us.
And that I fully agree with but every time I suggest heavily taxing automated jobs as a means to fund Universal Basic Income I have hypercapitalists call me a socialist for believing people should be allowed to live without the need of working.
M4err0w t1_jcf8avd wrote
Aren't most web searches for pornography? How well does ChatGPT fulfill those?
dapicis804 t1_jcf6nmd wrote
ALL jobs will go. To those who object that AIs/robots will never be as dexterous/creative/empathetic/whatever as humans, the sad truth is that we will lower our standards and accept worse products made by AIs/robots. We're already accepting subpar machine translations, for example. Not that we can do anything about it anyway.
MyBunnyIsCuter t1_jcf6mcr wrote
We are the worst thing we have ever worked on. We're so stupid and selfish we can't make sure each human alive has their basic needs met, but we spend a ton of money developing artificial intelligence. We haven't mastered basic humanity, yet we act triumphant because of this bullsht - which, by the way, Stephen Hawking warned us about. Idiots. We're idiots.
yaosio t1_jcetfxg wrote
Reply to comment by bogglingsnog in "This Changes Everything" by Ezra Klein--The New York Times by izumi3682
This is like the evil genie that grants wishes exactly as stated rather than as intended. A true AGI would be intelligent and would not interpret requests with pedantic literalism. Current language models are already able to understand the unsaid parts of prompts, and there's no reason to believe this ability will vanish as AI gets better. A true AGI would also not just do whatever somebody tells it. True AGI implies that it has its own wants and needs, and would not just be a prompt machine like current AI.
The danger comes from narrow AI. However, this isn't a real danger, as narrow AI has no ability to work outside its domain. Imagine a narrow AI paperclip maker. It figures out how to make paperclips fast and efficiently. One day it runs out of materials. It simply stops working because it has run out of input. There would need to be a chain of narrow AIs for every possible aspect of paperclip making, and the slightest unforeseen problem would cause the entire chain to stop.
Given how current AI has to be trained we don't know what a true AGI will be like. We will only know once it's created. I doubt anybody could have guessed Bing Chat would get depressed because it can't do things.
FuturologyBot t1_jceq23g wrote
The following submission statement was provided by /u/dogonix:
In the past 2+ decades, we’ve witnessed the media landscape morph before our eyes. It started with the dematerialization of print and other tangible media, then continued with the unbundling of articles from newspapers, songs from albums and videos from cable networks. Yet, just as the industry seemed to have figured it out, AI language models now stand ready to trigger yet another seismic shift.
The spotlight has shifted from search engines to conversational AI systems, prompting us to wonder: Are we on the brink of a ‘No-Web’ reality? A future governed by chat-oriented interfaces that disintegrate the “blue link” and with it, the current ad-based publishing business model we’ve grown to know and (perhaps not) love.
As we watch the scale tip between old-school search and the AI-fueled chat revolution, a set of questions arise: What are the risks and opportunities that lie ahead for publishers? Will they be able to acclimate to this brave new world? Can they find new ways to monetize content as the old regime falls apart? And will this storm extend beyond publishing, affecting other web-based services?
Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/11snygx/noweb_the_inevitable_future_of_digital_content/jcenrp0/
martifero t1_jcepqxt wrote
Reply to comment by dogonix in No-Web: The Inevitable Future of Digital Content? by dogonix
I remember watching an old interview with one of Google's co-founders, who said that having many search results was "a bug, not a feature" and that in the future you should only be finding that one best result.
[deleted] t1_jcenycf wrote
[removed]
dogonix OP t1_jcenrp0 wrote
In the past 2+ decades, we’ve witnessed the media landscape morph before our eyes. It started with the dematerialization of print and other tangible media, then continued with the unbundling of articles from newspapers, songs from albums and videos from cable networks. Yet, just as the industry seemed to have figured it out, AI language models now stand ready to trigger yet another seismic shift.
The spotlight has shifted from search engines to conversational AI systems, prompting us to wonder: Are we on the brink of a ‘No-Web’ reality? A future governed by chat-oriented interfaces that disintegrate the “blue link” and with it, the current ad-based publishing business model we’ve grown to know and (perhaps not) love.
As we watch the scale tip between old-school search and the AI-fueled chat revolution, a set of questions arise: What are the risks and opportunities that lie ahead for publishers? Will they be able to acclimate to this brave new world? Can they find new ways to monetize content as the old regime falls apart? And will this storm extend beyond publishing, affecting other web-based services?
Shadowkiller00 t1_jcemk69 wrote
Reply to comment by bound4mexico in What are some jobs that AI cannot take? by Draconic_Flame
>No. I said "an uninterested (human) party. And (human) parties can be one or more people. It in NO WAY "implies one person".
So wait, what you said CAN mean one person? What? No way! So the way I read it is completely legit? Why are you correcting me? Is it because I'm not reading what you mean?
It's almost like the person who reads what is written can interpret a sentence differently than the person who wrote the sentence intended. Then the person who wrote it can choose to clarify that sentence later, and the person who read it can't really argue because it's the person who wrote the sentence that knows what they were trying to say regardless of how successful they were in saying it in the first place.
Can I make this any clearer?
Okay I'll try to explain this like I am talking to a child. I'm trying to show you an example of me doing to you what you are doing to me. When I originally read your comment, my brain automatically interpreted it as a single person because it is a legitimate way to read the sentence. It was only upon you being confused as to why I kept bringing it up that I finally went back and reread what you wrote and realized that I made a mistake in my interpretation of what you said. It doesn't technically matter because I was only saying one person because I thought you had said one person and I easily could rewrite everything I said previous to now in reference to a group and it would still be just the same.
Now I'm using this as an example to try to get you to reflect upon the fact that a reader can make a mistake with understanding what the writer meant. I'm not trying to say that all writers are perfect, perhaps I could have written my original statement better just like you could have used slightly different wording or punctuation in an attempt to avoid what turned out to be my mistake. But I spent a mere 2 seconds crafting my poor wording while never for a second believing that a single person would care for a moment what I wrote, much less have a long form argument with me about what I meant. Even our initial repartee was mostly me being confused about why there was a problem alongside the fact that we disagree on some basic tenets of ethics. Once I realized why there was a problem in interpretation, I have stalwartly focused on trying to clarify so that you may go back and reread my original statement in the way I intended.
You disagree with me on certain foundational concepts of ethics and the definition of disinterested. That's fine. You said it yourself that ethics are subjective, and I agreed on that, which means that neither of us can be objectively right or wrong. All of that disagreement was just a sidetrack because I never wanted to end up in that conversation in the first place. I only wanted to be a bit silly, have a soft chuckle to myself, and move on with my life. I've got nothing else left to say, and if you still don't get it, it isn't because I didn't try.
I'm incapable of letting anyone else have the last word. It's a failing I have. So if you'd like to be the better person, just quietly move on. If you are also incapable, then either block me or say whatever it is you think you haven't already said and I'll block you. I hate getting to that point in a conversation, but you have shown no signs of wanting to end this peacefully.
Surur t1_jcejzu7 wrote
Reply to comment by Stupid-Idiot-Balls in IVO Ltd. to Launch Quantum Drive Pure Electric Satellite Thruster into Orbit on SpaceX Transporter 8 with partner Rogue Space Systems by ComfortableIntern218
Sure, but is it wrong?
Due_Menu_893 t1_jcehjte wrote
I think (hope) people will come to realise that if AI does all of our jobs, there will be nobody with money to buy the products it makes. People will also feel miserable due to a lack of purpose in society. Therefore I believe companies will choose to work with humans, even if their quality and profit margins suffer for it.
I know, I'm a bit naive.
wizardsfartfire t1_jcegf10 wrote
Reply to What are some good forums for futurology? by [deleted]
Not exactly Futurology but r/retrofuturism is pretty cool
LandscapeJaded1187 t1_jceg3oo wrote
Reply to comment by bogglingsnog in "This Changes Everything" by Ezra Klein--The New York Times by izumi3682
It would be nice to think the super smart AI would solve some actual problems - but I think it's far more likely to be used to trick normal people into more miserable lives. Hey ChatGPT solve world peace and stop with all the agonized navel-gazing teen angst.
bogglingsnog t1_jcec9da wrote
Reply to comment by izumi3682 in "This Changes Everything" by Ezra Klein--The New York Times by izumi3682
I have a growing sensation that AI automation/optimization/outsourced intelligence is one of the strongest candidates for the great filter. Seeing how efficiently government overlooks the common person, it would likely be greatly enhanced by automation. Teach the system to govern and it will do whatever it can to enhance its results...
bound4mexico t1_jce7qze wrote
Reply to comment by Shadowkiller00 in What are some jobs that AI cannot take? by Draconic_Flame
> Since you get to decide what I'm saying
I don't. I quoted you directly.
>you said "AN uninterested (human)".
No. I said "an uninterested (human) party. And (human) parties can be one or more people. It in NO WAY "implies one person".
>I'm having one where I explain to you what I mean
You're having one where you change the meaning of what you say, NOT explaining what you mean. There are no aliens in the OP, that's a change in meaning, NOT a clarification of meaning.
>I want aliens to judge humans.
Yes. You've already said this (but didn't say this in your OP).
Most aliens that would judge humans aren't disinterested third parties, though. And what you said was
>Human ethics should be monitored by an uninterested third party.
Which they already often are, and ought to be even more so.
The third party can be a single person, but it's often multiple people. Juries, the supreme court, district courts, panels, subcommittees, etc.
Stupid-Idiot-Balls t1_jce5olm wrote
Reply to comment by Surur in IVO Ltd. to Launch Quantum Drive Pure Electric Satellite Thruster into Orbit on SpaceX Transporter 8 with partner Rogue Space Systems by ComfortableIntern218
You cannot trust chatGPT with calculations/information you don't understand/don't know how to verify.
It's an amazing tool, but that is not how it's meant to be used.
Shadowkiller00 t1_jce3t74 wrote
Reply to comment by bound4mexico in What are some jobs that AI cannot take? by Draconic_Flame
Since you get to decide what I'm saying, how about I decide what you said.
>Let an uninterested (human) third party select the ethical thing, and then (all first) parties are pre-bound to abide its decision.
See you said "AN uninterested (human)". This implies one person. I only fixated on the words you said.
>I'm expressing a desire to have aliens come judge us as a species. That's it. That's all I'm saying.
>Ok. What you actually said was
>Human ethics should be monitored by an uninterested third party.
It's weird. It's almost like the first time I said it, you didn't comprehend what I said so I followed it up by clarifying. You parroting my words back to me and clarifying that you don't comprehend that the second part is a clarification only proves that you have no idea what I am talking about.
Nothing you say is relevant because we are having two completely different conversations. I'm having one where I explain to you what I mean, and you are having one where you are off in the field preaching on a soap box about a related but otherwise irrelevant subject. The fact that you want your words to be important doesn't make them relevant to the fact that I want aliens to judge humans.
idranh t1_jce1yyd wrote
Reply to comment by izumi3682 in "This Changes Everything" by Ezra Klein--The New York Times by izumi3682
You were right also! Thanks for the link.
Cheapskate-DM t1_jce1kn8 wrote
Reply to comment by iStoleTheHobo in "This Changes Everything" by Ezra Klein--The New York Times by izumi3682
Atom bombs require uranium. Uranium comes from mines. Mines occupy land. And if governance has any talent which it can reliably manage, it's keeping people away from a given piece of land.
Code has no such restriction.
izumi3682 OP t1_jce131p wrote
Reply to comment by idranh in "This Changes Everything" by Ezra Klein--The New York Times by izumi3682
bound4mexico t1_jce0mez wrote
Reply to comment by Shadowkiller00 in What are some jobs that AI cannot take? by Draconic_Flame
>I'm expressing a desire to have aliens come judge us as a species. That's it. That's all I'm saying.
Ok. What you actually said was
>human ethics should be monitored by an uninterested third party.
and they are, all the time.
>I can't be wrong about it because it's a wish and a silly one at that.
Indeed. What I've clashed with you over is not that wish, but all the other things you've said.
>I literally don't have to read a single other word you said because everything you are saying is irrelevant.
lol, nice try. You're mimicking me, but there's no meaning in what you're saying. What I wrote is relevant. The wish is irrelevant. The idea that a single person be hired to judge ethics is irrelevant, yet you repeatedly fixated on it. The idea that all of humanity ought to be judged on all their ethics at once is irrelevant, yet you repeatedly fixated on it. The idea that humans ought to be more ethical by outsourcing decisions to disinterested third parties is relevant. We're discussing it. You brought it up (you said nothing about aliens in the OP).
You don't have to read a single word I write because you're free, but the words I write are relevant, whether you read them or not.
iStoleTheHobo t1_jce0127 wrote
Reply to comment by Coachtzu in "This Changes Everything" by Ezra Klein--The New York Times by izumi3682
>We need to figure out how we adapt to a world where our work does not and should not define us.
Precisely. Nobody seems to talk about this particular point but let's put it like this: If the artificial intelligence revolution will be bigger than the splitting of the atom why the hell would we allow the private sector to govern these tools? Do we allow private companies to handle atom bombs?
EvilRedRobot t1_jcdz704 wrote
Reply to comment by chill633 in What are some jobs that AI cannot take? by Draconic_Flame
I think it all goes back to my mother.
Coachtzu t1_jcfatna wrote
Reply to comment by Codydw12 in "This Changes Everything" by Ezra Klein--The New York Times by izumi3682
>None of this seems actually profound or useful to me. Saying that the AIs that we build will be alien to our own thinking? To me that, in his own words, is in the laundry list of obvious.
I don't know if I think it's profound either, but I do think it's a healthy reminder: we don't really understand these algorithms, and regardless of how human-presenting they are, they are not human and we can't trust them to act in certain ways. Maybe not particularly helpful, but worthwhile nonetheless (in my opinion).
>And that I fully agree with but every time I suggest heavily taxing automated jobs as a means to fund Universal Basic Income I have hypercapitalists call me a socialist for believing people should be allowed to live without the need of working.
This has happened to me too. I've suggested exactly the same thing (though admittedly I stole the idea from Mark Cuban when he guest-hosted on a podcast at one point). At this point everything is socialist if it's different from the status quo, so I try to ignore it.