Recent comments in /f/Futurology
Franklin_le_Tanklin t1_j9h11qy wrote
Reply to comment by MikeLinPA in Would the most sentient ai ever actually experience emotion or does it just think it is? Is the thinking strong enough to effectively be emotion? by wonderingandthinking
I think a more accurate way to ask is:
Would AI experience irrational or illogical decision-making? (Emotions push us to do things that aren't strictly logical, like sex or anger, etc.)
Mad_currawong t1_j9h0wfy wrote
Reply to comment by Thor1872 in Starlink’s “Global Roaming” promises worldwide access for $200 a month by ethereal3xp
That’s a separate issue, they aren’t allowed to operate in Crimea https://www.techarp.com/military/elon-musk-blocked-starlink-crimea/?amp=1
To_Fight_The_Night t1_j9h08hl wrote
Reply to How good the US will be for living in future for those who will be earning decent?? by [deleted]
I am pessimistic and optimistic at the same time. My pessimism is that it will always be a plutocracy. My optimism is that as our oligarchs and representatives die off and the next generation replaces them, they are going to use that power for more progressive ideology.
m0estash t1_j9gzs86 wrote
Reply to Artificial Intelligence needs its own version of the Three Laws of Robotics so it doesn’t kill humans. by Fluid_Mulberry394
I've gotten into this topic a fair bit in the past. There's a great channel on YouTube from Robert Miles (the channel title is his name) that goes deep into this topic. Essentially, if we want to survive a general AI, then we have to motivate it fundamentally to have the same values as we do. If we treat it like a tool, then in all likelihood we will get all of the unintended consequences of getting EXACTLY what we asked of the AI.
Vucea OP t1_j9gzjyx wrote
One side effect of unlimited content-creation machines—generative AI—is unlimited content.
On Monday, the editor of the renowned sci-fi publication Clarkesworld Magazine announced that he had temporarily closed story submissions due to a massive increase in machine-generated stories sent to the publication.
In a graph shared on Twitter, Clarkesworld editor Neil Clarke tallied the number of banned writers submitting plagiarized or machine-generated stories.
The numbers totaled 500 in February, up from just over 100 in January and a low baseline of around 25 in October 2022.
The rise in banned submissions roughly coincides with the release of ChatGPT on November 30, 2022.
StarsinmyOcean t1_j9gzcoh wrote
Reply to Future Evolution of Humanity by Calm_Replacement8133
Human brains are getting smaller and IQs are going down at the same time?
We need gene editing.
[deleted] t1_j9gz12a wrote
Reply to comment by nousomuchoesto in Future Evolution of Humanity by Calm_Replacement8133
[removed]
chopchopped OP t1_j9gz0hw wrote
Reply to Building for the future: Andalucian luxury villa to be ‘Spain’s first carbon-zero home’ powered by unique hydrogen system. Running costs for the eco-home are expected to be 90% cheaper than similar new builds by chopchopped
SS- A New Jersey resident built a solar hydrogen home in 2006, which was the subject of a Scientific American article called "Inside the Solar-Hydrogen House: No More Power Bills--Ever" https://www.scientificamerican.com/article/hydrogen-house/ - great to see others integrating this tech 17 years later.
http://hydrogenhouseproject.org
Edit: Apparently Mike's website is temporarily unavailable [...] - here's an archive view
http://web.archive.org/web/20220202063706/https://www.hydrogenhouseproject.org/index.html
NexexUmbraRs t1_j9gysph wrote
Yeah, I don't believe that it would still be recognizable after 1 million years... That's a lot of time, and it only takes one good meteorite hitting it, or nearby, to cause a crater. Idk about you, but a hole doesn't look anything like a pyramid.
s0cks_nz t1_j9gxfk4 wrote
Reply to comment by SL1MECORE in Would the most sentient ai ever actually experience emotion or does it just think it is? Is the thinking strong enough to effectively be emotion? by wonderingandthinking
Isn't that what they are saying? A primitive, imperfect tool used prior to higher thinking and reasoning.
69inthe619 t1_j9gwo6j wrote
Reply to comment by dragonblade_94 in Would the most sentient ai ever actually experience emotion or does it just think it is? Is the thinking strong enough to effectively be emotion? by wonderingandthinking
Extraordinary claims require extraordinary evidence. The burden is on you to provide any evidence whatsoever that the only requirement for being able to feel and express emotions, both tangible and intangible, love or pain or both simultaneously, is a sensor and raw data. By your logic, the only thing the world needs to permanently solve all depression everywhere is raw data on happiness, because we already have senses, and your logic says those are exactly equal to sensors. E=mc². If you can turn mass into energy, the opposite is also true: you can turn energy into mass. That is what "=" means.
[deleted] OP t1_j9gwbuo wrote
Reply to comment by [deleted] in How good the US will be for living in future for those who will be earning decent?? by [deleted]
[removed]
Frowdo t1_j9gvriy wrote
Reply to comment by Thor1872 in Starlink’s “Global Roaming” promises worldwide access for $200 a month by ethereal3xp
As of now, no, we've never had issues.
IamMarsPluto t1_j9gulw5 wrote
How do you have “low latency” and “satellite” lol doesn’t really work like that
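For context on the physics here: propagation delay scales with orbital altitude, which is why the "satellite = high latency" intuition comes from geostationary links. A back-of-envelope sketch (best case only: straight up and back at light speed, ignoring slant paths, ground-network hops, and queuing; altitudes are approximate):

```python
C = 299_792.458  # speed of light in vacuum, km/s

def min_round_trip_ms(altitude_km):
    # Best-case propagation delay for one round trip to the satellite:
    # twice the altitude divided by the speed of light, in milliseconds.
    return 2 * altitude_km / C * 1000

geo = min_round_trip_ms(35_786)  # geostationary orbit: ~240 ms of light travel alone
leo = min_round_trip_ms(550)     # a typical low-earth-orbit shell: a few ms
```

So a geostationary link pays hundreds of milliseconds before any networking overhead, while a low-earth-orbit constellation pays only single-digit milliseconds of propagation delay.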
[deleted] t1_j9guhpq wrote
[removed]
Samwarez t1_j9gtrsh wrote
Reply to comment by powerMiserOz in Starlink’s “Global Roaming” promises worldwide access for $200 a month by ethereal3xp
Forget developing countries; there are large areas in the US with no wired infrastructure and poor LTE/5G coverage. I'm looking into one of these for my mom's house.
ThaCURSR t1_j9gstxn wrote
Reply to Artificial Intelligence needs its own version of the Three Laws of Robotics so it doesn’t kill humans. by Fluid_Mulberry394
According to ChatGPT itself, these are the things that need to be ensured in order to protect mankind:

- Transparency: The AI must be transparent in its decision-making process, and the logic behind it should be easy to understand.
- Accountability: The AI must be accountable for its actions, and the creators or owners of the AI should be held responsible for any harm caused by its actions.
- Safety: The AI must be designed to prioritize human safety, even if it means sacrificing its own goals or objectives.
- Ethical framework: The AI must be designed with a clear ethical framework that is aligned with human values.
- Regulation: There should be clear regulations and standards for the development and deployment of AI, to ensure that it is developed in a responsible and safe manner.
- Data privacy: The AI should be designed to protect the privacy of human data, and any data collected should be used only for the intended purpose.
- Human oversight: The AI should be designed to operate under human oversight, and humans should be able to intervene and correct any harmful actions taken by the AI.

Overall, it is essential to design AI systems with the goal of benefiting humankind, while also ensuring that they operate in a safe and ethical manner. To achieve this, a collaborative effort is required from all stakeholders involved in the development and deployment of AI, including researchers, developers, policymakers, and the general public.
Ch1Guy t1_j9gsgti wrote
Reply to comment by OisforOwesome in How good the US will be for living in future for those who will be earning decent?? by [deleted]
So the minimum wage for large portions of America is near the highest level of the past 50 years.
Healthcare coverage is also at some of the highest levels for the past 50 years.
Median household income adjusted for inflation is about the highest it has been in the past 50 years. (if anyone is interested: https://www.pewresearch.org/social-trends/2020/01/09/trends-in-income-and-wealth-inequality/psdt_01-10-20_economic-inequality_1-0/ )
Unemployment is about the lowest it has been over the past 50 years.
People are doing well.
The problem is wealth inequality. Others are doing better and we are not all sharing in the growth of wealth.
IOM1978 t1_j9grrpk wrote
Reply to comment by Bewaretheicespiders in How good the US will be for living in future for those who will be earning decent?? by [deleted]
> average demographics on Reddit is very very young
That was true back in the day, but currently 18 to 29 and 30 to 62 are almost exactly equal, and account for about 70% of users.
Kinda crazy …
Gene_Smith t1_j9grouz wrote
Literally right now, but it only works for future generations and you have to do IVF to get the benefits.
The basic gist is this: maybe you or your spouse has Schizophrenia, or you have a family member that does (so your child probably has a higher-than-average risk). You can go through IVF and lower your child's risk of getting the disease by selecting an embryo that has a lower polygenic risk score for the disease.
If you and your spouse have 5 embryos to pick from, you can probably lower your child's risk of Schizophrenia by about 30%. If you have more embryos, the reduction will be greater. Perhaps up to 50%.
I only know of one company currently offering this service commercially: Genomic Prediction. Their website is pretty sparse, but I know from prior research that Schizophrenia is one of the conditions they test for.
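The selection logic described above can be sketched with a toy Monte Carlo under a liability-threshold model: each embryo gets a polygenic score, disease occurs when score plus environmental noise crosses a threshold, and you compare picking an embryo at random against picking the lowest-scoring one. Every number here (variance explained by the score, the threshold, cohort size) is an illustrative assumption, not Genomic Prediction's actual model:

```python
import random

def simulate(n_families=20000, n_embryos=5, pgs_weight=0.3,
             threshold=1.2, seed=0):
    # Liability-threshold model: disease occurs when weighted polygenic
    # score plus environmental noise exceeds a fixed threshold.
    rng = random.Random(seed)
    hits_random, hits_selected = 0, 0
    for _ in range(n_families):
        scores = [rng.gauss(0, 1) for _ in range(n_embryos)]  # sibling PGS values

        def affected(pgs):
            return pgs_weight * pgs + rng.gauss(0, 1) > threshold

        hits_random += affected(scores[0])      # pick an embryo at random
        hits_selected += affected(min(scores))  # pick the lowest-PGS embryo
    return hits_random / n_families, hits_selected / n_families
```

The relative risk reduction is `1 - selected_rate / random_rate`; how large it is depends entirely on how much variance the score explains, which is why the achievable reduction differs by disease.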
Bezbozny t1_j9grf02 wrote
Reply to Would the most sentient ai ever actually experience emotion or does it just think it is? Is the thinking strong enough to effectively be emotion? by wonderingandthinking
Emotion is hard to define because each emotion involves a plethora of different events that happen inside the body. Could we make a robot that sweats when it's scared? Whose blood pressure goes up when it gets angry? Etc...
I don't think we could "create" these things; rather, they will be "emergent," appearing as we give AIs more and more memory and give them bodies with higher and higher fidelity artificial sensory organs and control over their own bodies.
Ultimately, emotions are just the most logical response to most situations an individual with an evolved mind can encounter, without having to think about it. For instance, what is the more logical response to seeing a predator? Writing a 9-page thesis in your mind on why you should run away, or having your heart/engine kick into high gear and instinctively running away very fast?
The robot will start out by reasoning out every question it has, but eventually it will notice that certain scenarios are handled more efficiently with canned responses, and it will just execute certain functions that not only cause its body to move in a certain way (potentially including artificial facial muscles that could display emotion), but also cause its various artificial organs (sensory, muscular, circulatory, or the artificial equivalents) to slow down, speed up, or halt as needed for the particular scenario it finds itself in.
I think our current neural networks are close to being able to do this on a software side, but our hardware might still be lacking.
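The "canned response" idea above amounts to memoizing deliberation: reason slowly the first few times, then promote frequently seen scenarios to fast reflexes. A hypothetical sketch (all names made up for illustration):

```python
class Agent:
    """Toy agent that deliberates at first, then falls back to a cached
    'reflex' response for scenarios it has seen often enough."""

    def __init__(self, reflex_after=3):
        self.seen = {}          # scenario -> times encountered
        self.reflexes = {}      # scenario -> cached canned response
        self.reflex_after = reflex_after

    def deliberate(self, scenario):
        # Stand-in for slow, general-purpose reasoning.
        return f"reasoned-response-to-{scenario}"

    def react(self, scenario):
        if scenario in self.reflexes:
            return self.reflexes[scenario]        # fast "emotional" path
        self.seen[scenario] = self.seen.get(scenario, 0) + 1
        response = self.deliberate(scenario)      # slow deliberative path
        if self.seen[scenario] >= self.reflex_after:
            self.reflexes[scenario] = response    # promote to a reflex
        return response
```

After enough encounters with "predator," the agent stops deliberating and fires the cached response immediately, which is roughly the fast/slow split the comment describes.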
[deleted] OP t1_j9grc6v wrote
Reply to How good the US will be for living in future for those who will be earning decent?? by [deleted]
[removed]
coredweller1785 t1_j9gr50i wrote
Reply to Artificial Intelligence needs its own version of the Three Laws of Robotics so it doesn’t kill humans. by Fluid_Mulberry394
I like the book Clear Bright Future
Paul Masons chapter on The Thinking Machine is so prescient right now.
SapperBomb t1_j9h1oqh wrote
Reply to Durability of a Pyramid on the moon ( + fact-checking Chat GPT's response) by DukeOfZork
Say micro meteorites and space debris ONE MORE TIME!