Recent comments in /f/Futurology

m0estash t1_j9gzs86 wrote

I’ve gotten into this topic a fair bit in the past. There’s a great channel on YouTube from Robert Miles (the title of the channel is his name) that goes deep into it. Essentially, if we want to survive a general AI, then we have to motivate it fundamentally to hold the same values as we do. If we treat it like a tool, then in all likelihood we will get all of the unintended consequences of getting EXACTLY what we asked of the AI.
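The failure mode described here, a literal objective satisfied in an unintended way, can be sketched with a toy example. Everything below (the scoring function, the policy names) is hypothetical and only illustrates the idea:

```python
# Toy illustration of "getting EXACTLY what you asked for":
# the objective rewards "no dirt detected", so covering the sensor
# scores just as well as actually cleaning.
def score(world_dirt: int, sensor_on: bool) -> int:
    detected = world_dirt if sensor_on else 0
    return -detected  # stated objective: minimize *detected* dirt

policies = {
    "clean the room":   {"world_dirt": 0,  "sensor_on": True},
    "do nothing":       {"world_dirt": 10, "sensor_on": True},
    "cover the sensor": {"world_dirt": 10, "sensor_on": False},
}

best = max(policies, key=lambda p: score(**policies[p]))
# "clean the room" and "cover the sensor" both score 0: the literal
# objective cannot tell the intended behavior from the degenerate one.
```

The point is that the objective as written is indifferent between the two top-scoring policies; nothing in it encodes the value we actually cared about.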

1

Vucea OP t1_j9gzjyx wrote

One side effect of unlimited content-creation machines—generative AI—is unlimited content.

On Monday, the editor of the renowned sci-fi publication Clarkesworld Magazine announced that he had temporarily closed story submissions due to a massive increase in machine-generated stories sent to the publication.

In a graph shared on Twitter, Clarkesworld editor Neil Clarke tallied the number of banned writers submitting plagiarized or machine-generated stories.

The numbers totaled 500 in February, up from just over 100 in January and a low baseline of around 25 in October 2022.

The rise in banned submissions roughly coincides with the release of ChatGPT on November 30, 2022.

41

chopchopped OP t1_j9gz0hw wrote

SS- A New Jersey resident built a solar hydrogen home in 2006, which was the subject of a Scientific American article called "Inside the Solar-Hydrogen House: No More Power Bills--Ever" (https://www.scientificamerican.com/article/hydrogen-house/). Great to see others integrating this tech 17 years later.
http://hydrogenhouseproject.org

Edit: Apparently Mike's website is temporarily unavailable [...] - here's an archive view
http://web.archive.org/web/20220202063706/https://www.hydrogenhouseproject.org/index.html

2

69inthe619 t1_j9gwo6j wrote

Extraordinary claims require extraordinary evidence. The burden is on you to provide any evidence whatsoever that the only requirement for being able to feel and express emotions, both tangible and intangible, love or pain or both simultaneously, is a sensor and raw data. By your logic, the only thing the world needs to permanently solve all depression everywhere is raw data on happiness, because we already have senses, and your logic says those are exactly equal to sensors. E = mc². If you can turn mass into energy, the opposite is also true: you can turn energy into mass. That is what "=" means.
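As an aside on the equation invoked here: E = mc² is symmetric exactly as the comment says, and a minimal sketch makes the two directions explicit (the function names are just illustrative):

```python
# E = m * c^2 and its inverse, m = E / c^2.
c = 299_792_458  # speed of light in m/s (exact by definition of the metre)

def mass_to_energy(mass_kg: float) -> float:
    """Energy equivalent (joules) of a given rest mass."""
    return mass_kg * c ** 2

def energy_to_mass(energy_j: float) -> float:
    """Rest mass (kg) equivalent of a given energy: the same relation, inverted."""
    return energy_j / c ** 2

e = mass_to_energy(1.0)  # roughly 9e16 J for one kilogram
assert abs(energy_to_mass(e) - 1.0) < 1e-12  # the "=" runs both ways
```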

1

ThaCURSR t1_j9gstxn wrote

According to ChatGPT itself, these are the things that need to be ensured in order to protect mankind:

- Transparency: The AI must be transparent in its decision-making process, and the logic behind it should be easy to understand.
- Accountability: The AI must be accountable for its actions, and the creators or owners of the AI should be held responsible for any harm caused by its actions.
- Safety: The AI must be designed to prioritize human safety, even if it means sacrificing its own goals or objectives.
- Ethical framework: The AI must be designed with a clear ethical framework that is aligned with human values.
- Regulation: There should be clear regulations and standards for the development and deployment of AI, to ensure that it is developed in a responsible and safe manner.
- Data privacy: The AI should be designed to protect the privacy of human data, and any data collected should be used only for the intended purpose.
- Human oversight: The AI should be designed to operate under human oversight, and humans should be able to intervene and correct any harmful actions taken by the AI.

Overall, it is essential to design AI systems with the goal of benefiting humankind, while also ensuring that they operate in a safe and ethical manner. To achieve this, a collaborative effort is required from all stakeholders involved in the development and deployment of AI, including researchers, developers, policymakers, and the general public.

2

Ch1Guy t1_j9gsgti wrote

So the minimum wage across large portions of America is at nearly its highest level in the past 50 years.

Healthcare coverage is also at some of the highest levels for the past 50 years.

Median household income adjusted for inflation is about the highest it has been in the past 50 years. (if anyone is interested: https://www.pewresearch.org/social-trends/2020/01/09/trends-in-income-and-wealth-inequality/psdt_01-10-20_economic-inequality_1-0/ )

Unemployment is about the lowest it has been over the past 50 years.

People are doing well.

The problem is wealth inequality. Others are doing better and we are not all sharing in the growth of wealth.

1

Gene_Smith t1_j9grouz wrote

Literally right now, but it only works for future generations and you have to do IVF to get the benefits.

The basic gist is this: maybe you or your spouse has schizophrenia, or you have a family member who does (so your child probably has a higher-than-average risk). You can go through IVF and lower your child's risk of getting the disease by selecting an embryo that has a lower polygenic risk score for it.

If you and your spouse have 5 embryos to pick from, you can probably lower your child's risk of schizophrenia by about 30%. If you have more embryos, the reduction will be greater, perhaps up to 50%.
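The shape of this effect can be sketched with a toy liability-threshold simulation: each embryo gets a measured polygenic score plus an unmeasured residual, and you pick the embryo with the lowest score. The prevalence and variance-explained numbers below are illustrative assumptions, not calibrated to the 30% figure (a real estimate would also account for embryos being siblings who share parental genetics):

```python
import random
from statistics import NormalDist

random.seed(0)

PREVALENCE = 0.01  # assumed baseline risk of the disease
R2_PGS = 0.07      # assumed fraction of liability variance the score captures
THRESHOLD = NormalDist().inv_cdf(1 - PREVALENCE)  # liability cutoff for disease

def simulate_risk(n_embryos: int, trials: int = 200_000) -> float:
    """Estimate disease probability when picking the lowest-scoring of n embryos."""
    affected = 0
    for _ in range(trials):
        # Each embryo: observed score g, plus residual liability e the score misses.
        scores = [random.gauss(0, R2_PGS ** 0.5) for _ in range(n_embryos)]
        g = min(scores)  # select the lowest-risk embryo
        e = random.gauss(0, (1 - R2_PGS) ** 0.5)
        if g + e > THRESHOLD:
            affected += 1
    return affected / trials

baseline = simulate_risk(1)
selected = simulate_risk(5)
print(f"relative risk reduction with 5 embryos: {1 - selected / baseline:.0%}")
```

More embryos push the minimum score lower, which is why the reduction grows with the number of embryos available.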

I only know of one company currently offering this service commercially: Genomic Prediction. Their website is pretty sparse, but I know from prior research that schizophrenia is one of the conditions they test for.

1

Bezbozny t1_j9grf02 wrote

Emotion is hard to define because each emotion includes a countless plethora of different events that happen inside the body. Could we make a robot that sweats when it's scared? Whose blood pressure goes up when it gets angry? Etc.

I don't think we could "create" these things; rather, they will be "emergent" and will appear as we give AIs more and more memory, and give them bodies with higher- and higher-fidelity artificial sensory organs and control over their own bodies.

Ultimately, emotions are just the most logical response to most situations an individual with an evolved mind can encounter, without having to think about it. For instance, what is the more logical response to seeing a predator? Writing a 9-page thesis in your mind on why you should run away? Or having your heart/engine kick into high gear and instinctively running away very fast?

The robot will start out by reasoning through every question it has, but eventually it will start to notice that certain scenarios are more efficiently handled with canned responses, and it will just execute certain functions that not only cause its body to move in a certain way (potentially including artificial facial muscles which could display emotion), but also cause its various artificial organs (sensory, muscular, circulatory, or the artificial equivalents of these things) to slow down, speed up, or halt as needed for the particular scenario it finds itself in.
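The canned-response idea reads a lot like memoization: deliberate once on a novel scenario, then cache the result so familiar scenarios trigger it reflexively. A hypothetical sketch (the class and names are invented for illustration):

```python
from typing import Callable, Dict

class ReflexAgent:
    """Deliberates on novel scenarios, then caches canned 'reflex' responses."""

    def __init__(self, deliberate: Callable[[str], str]):
        self.deliberate = deliberate        # slow, general reasoning path
        self.reflexes: Dict[str, str] = {}  # fast, canned responses

    def respond(self, scenario: str) -> str:
        if scenario in self.reflexes:       # recognized: skip reasoning entirely
            return self.reflexes[scenario]
        action = self.deliberate(scenario)  # novel: reason it out once...
        self.reflexes[scenario] = action    # ...and store it as a reflex
        return action

agent = ReflexAgent(deliberate=lambda s: f"plan for {s}")
agent.respond("predator sighted")  # first time: slow path, result cached
agent.respond("predator sighted")  # afterwards: instant "emotional" reflex
```

The cached path is what plays the role of the heart-rate spike: no deliberation, just an immediate stored response to a recognized situation.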

I think our current neural networks are close to being able to do this on the software side, but our hardware might still be lacking.

3