Recent comments in /f/Futurology
CollegeIntellect t1_jbyif9a wrote
Reply to comment by TheCrimsonSteel in Scientists call for global action to clean up space junk by thebelsnickle1991
Tiny stuff is incredibly difficult to track and deorbit. There was one suggestion of using a reinforced solar sail or even aerogel to capture the tiny debris in orbit. You can't realistically go out and sweep LEO clean of every paint chip. It is much easier to add reinforcement to your existing hardware or use debris-agnostic designs.
NASA and the DoD track everything larger than about 2 inches. If an object is larger than 4 inches they plan avoidance maneuvers; otherwise they let ballistic shields handle anything smaller.
There are plenty of dead satellites out there in LEO that will eventually make their way back to earth due to atmospheric drag. All of those missions are grandfathered into the old 25-year rule.
In real life, equatorial orbits aren't very interesting. Most commonly, satellites sit in sun-synchronous orbits. This gives you the same sun angle to power your panels and observe the earth with the same lighting year round. It works because the earth is oblate, which slowly causes the orbit to precess around the planet. Inclination is about 98 degrees.
Due to the limitations of Kerbal, it's not possible to get into that orbit. Gamers would be pretty mad if the game started taking orbital precession into account and their orbits shifted around the planet as they time-warped through the year.
Source: https://www.hindawi.com/journals/amse/2013/484153/
Source: https://www.nasa.gov/mission_pages/station/news/orbital_debris.html
Source: https://space.stackexchange.com/questions/33704/distribution-of-satellites-by-inclination
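The ~98 degree figure above drops out of the J2 nodal-precession formula: pick the inclination whose precession rate matches one revolution per year. A rough sketch (my own numbers and function name, not from the comment, assuming a circular orbit):

```python
import math

MU = 398600.4418   # Earth's gravitational parameter, km^3/s^2
R_E = 6378.137     # Earth's equatorial radius, km
J2 = 1.08263e-3    # Earth's oblateness coefficient

def sun_sync_inclination(altitude_km):
    """Inclination (deg) whose J2 nodal precession is ~360 deg per year."""
    a = R_E + altitude_km                          # semi-major axis, km
    n = math.sqrt(MU / a**3)                       # mean motion, rad/s
    omega_dot = 2 * math.pi / (365.2422 * 86400)   # required precession, rad/s
    # Nodal precession: omega_dot = -1.5 * J2 * (R_E/a)^2 * n * cos(i)
    cos_i = -omega_dot / (1.5 * J2 * (R_E / a)**2 * n)
    return math.degrees(math.acos(cos_i))

print(sun_sync_inclination(700))  # ~98.2 degrees for a 700 km orbit
```

Because cos(i) comes out negative, the orbit is slightly retrograde, which is why sun-synchronous satellites sit just past polar at roughly 97-99 degrees depending on altitude.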
Surur t1_jbyif2t wrote
Reply to comment by Jasrek in ChatGPT or similar AI as a confidant for teenagers by demauroy
> If your concerns would be met by the program beginning each conversation with a disclaimer of "I am a computer program and not a real life adult human being", then I'm perfectly fine with that and support your idea.
My concern is around children. A disclaimer would not help.
> If your concern is that a chat program needs to be advanced enough to have "moral and legal" judgement, well, I guess you can come back in 15 years and see if we're there yet.
I don't think we need 15 years. Maybe even 1 is enough. What I am saying is when it comes to children a lot more safety work needs to happen.
Jasrek t1_jbyi94y wrote
Reply to comment by Surur in ChatGPT or similar AI as a confidant for teenagers by demauroy
It's two tweets down in the same thread by the same guy. Did you finish reading what you linked?
In my experience, ChatGPT very blatantly presents itself as a computer program. I've asked it to invent a fictional race for DND and it prefaced the answer by reminding me it was a computer program and has no actual experience with orcs.
If your concerns would be met by the program beginning each conversation with a disclaimer of "I am a computer program and not a real life adult human being", then I'm perfectly fine with that and support your idea.
If your concern is that a chat program needs to be advanced enough to have "moral and legal" judgement, well, I guess you can come back in 15 years and see if we're there yet.
JustAvi2000 t1_jbyhftr wrote
Anyone seen the movie "M3GAN"? Because that's what this is sounding like. Did not turn out well for the humans involved.
peadith t1_jbyhbzh wrote
Reply to comment by Taxoro in ChatGPT or similar AI as a confidant for teenagers by demauroy
Lots of people think this thing is still a command-line joke greeter. Yer in fer a sprize.
Surur t1_jbyh7ee wrote
Reply to comment by Jasrek in ChatGPT or similar AI as a confidant for teenagers by demauroy
Why do you keep talking about hiding a bruise? The tweet is about a 13-year-old child being abducted for out-of-state sex by a 30-year-old.
The issue is that while ChatGPT may present as an adult, a real adult would have an obligation to make a report, especially when presented in a professional capacity (working for Microsoft or Snap, for example).
I have no issue with ChatGPT working as a counsellor, but it will have to show appropriate professional judgement first, because, unlike a random friend or web page, it does represent Microsoft and OpenAI, including morally and legally.
KeaboUltra t1_jbyh4qd wrote
Assuming that we do get UBI, I think not. People will always want to fill their time with something, and work would become a choice. After the world is no longer in denial about AI taking every job, work will probably become more social and volunteer oriented. I feel like people might want to employ gig-style work for simpler things. It's honestly hard to say what people will be doing, as an AI or something might be able to handle it better and more efficiently, but off the top of my head, I can imagine online human surveys and user input might be valuable, any data that can make something better.

Reviews might become more valuable, and we could see a world extremely similar to that Black Mirror episode where people's ratings dictated their social status. It could be possible that being a "decent" or "model" person nets you more UBI: you get rated for committing good acts, each rating contributes to a rank, and entering a new rank or earning a new star = a higher UBI to fuel more luxury spending. Things like holding a door open for someone or cleaning up your environment would contribute to that. I know all this sounds pointless, but that'll kinda become our existence during the dawn of a competent AI: we'll have so little to do that we'll literally have to come up with a new way to carry on in society based off what we currently have.
Key-Bluejay-2000 t1_jbyh3r0 wrote
ChatGPT gets stuff wrong quite often: either outright wrong answers or advice I don't agree with. I can usually suss it out, but someone younger might not be able to.
Jasrek t1_jbygqy1 wrote
Reply to comment by Taxoro in ChatGPT or similar AI as a confidant for teenagers by demauroy
All advice is unchecked. Learning to be critical of advice is a wonderful life lesson for children to learn.
The fact that you're calling it an 'AI' instead of 'sophisticated chat program' is the real issue here, honestly.
Taxoro t1_jbyg2jc wrote
Reply to comment by Jasrek in ChatGPT or similar AI as a confidant for teenagers by demauroy
Yes, of course, but you have no way of knowing if you are getting trash or not unless you are critical of everything you get out.
For a child to get unchecked advice from an AI is ridiculous.
Jasrek t1_jbyfvdq wrote
Reply to comment by Surur in ChatGPT or similar AI as a confidant for teenagers by demauroy
Not really, no.
I'm in my late thirties. I have no idea how old you or anyone else on Reddit is. You have given me no background check or safeguarding training. Some people in this thread might be kids, I have no idea.
Kids use each other as confidants. Do you background check the other 12-year-olds?
Kids know how to use Google. What is the fundamental difference between going "How do I hide a bruise?" to a chat program and searching it on Google?
I think this is a knee-jerk reaction to an interesting new gadget and that there is literally no solution to the problem you are perceiving.
Consider the issue shown in the tweet you linked. How would you fix this? Make the chat program shut down if you admit your age is under 18? Prevent it from responding to questions about bruises or physical injuries? Give the program a background check?
Surur t1_jbyev3j wrote
Reply to comment by Jasrek in ChatGPT or similar AI as a confidant for teenagers by demauroy
You don't think the lack of awareness of what is appropriate for children is a risk when it comes to an AI as a confidant for a child?
We do a lot to protect children these days (e.g. background checks for anyone who has professional contact with them, appropriate safeguarding training, etc.), so it is appropriate to be careful with children, who may not have good enough judgement.
JoshuaACNewman t1_jbyeso0 wrote
Reply to comment by [deleted] in ChatGPT or similar AI as a confidant for teenagers by demauroy
I don’t understand your comment.
I’m not autistic. Are you saying that therapists should not have some remove from their patients?
EskimoCheeks t1_jbyeh9t wrote
Make a monolithic baseball bat, bolt it to the Canadarm, and knock home runs of space junk directly at the sun.
Problem solved!
Surur t1_jbyeden wrote
Reply to comment by demauroy in ChatGPT or similar AI as a confidant for teenagers by demauroy
> It is not ChatGPT.
It is actually. OpenAI has licensed their AI to Snap.
https://www.cnbc.com/2023/02/27/snap-launches-ai-chatbot-powered-by-openais-gpt.html
Surur t1_jbye85e wrote
Reply to comment by Taxoro in ChatGPT or similar AI as a confidant for teenagers by demauroy
> People need to stop thinking chatgpt and any other ai's have actual intelligence or can give proper information or adivce.. they can't.
And yet you would lose against a $20 chess computer, so when you said "any other AI" you clearly did not mean a $20 chess computer.
JoshuaACNewman t1_jbye0t1 wrote
Reply to comment by demauroy in ChatGPT or similar AI as a confidant for teenagers by demauroy
If most of the time it’s advice that’s at least as good as therapeutic advice, and then it recommends self-harm because it’s what people do, it’s obviously not good for therapeutic purposes.
It's not that therapists never fuck up. It's that AI doesn't have any of the social structure we have. It can't empathize because it has neither feelings nor experience, which means any personality you perceive is one you're constructing in your mind.
We have ways and reasons to trust each other that AI can activate, but the signals are false.
TheCrimsonSteel t1_jbydo8k wrote
Reply to comment by CollegeIntellect in Scientists call for global action to clean up space junk by thebelsnickle1991
Curious to pick a proper engineer's brain, as my main experience is just the game Kerbal Space Program.
We'd have to deorbit existing junk, and maybe somehow figure out a way to eliminate the tiny junk? All the paint chips and screws and wrenches and things?
And I'm guessing most of the junk is also in the most commonly used orbits? All fairly equatorial (right term?) stuff in Low Earth Orbit?
All my knowledge of why everything's near the equator is from having to recheck my dV when I take a regular rocket setup and try to use it for a polar orbit, especially when I've made my design as lean as possible.
Jasrek t1_jbycit8 wrote
Reply to comment by MagicHamsta in Scientists call for global action to clean up space junk by thebelsnickle1991
It rapidly disperses in the sense that the gas scatters too much to actually influence anything's orbit. For you to use it to clear out any swath of space, you would need an enormous amount of gas released at a high velocity. And even then, since all the debris is constantly moving, it's not like that area is now 'clear'. You just have slightly less debris at that orbital inclination. Or slightly more debris, depending on the design of your gas bomb.
Zaflis t1_jbyceui wrote
Reply to comment by SpiritualTwo5256 in Researchers Say They Managed to Pull Quantum Energy From a Vacuum by Woke_Soul
Stargate uses the ZPM (Zero Point Module); in that sci-fi it's a chargeable battery.
petrichoring t1_jbyc42i wrote
As a therapist who works with teens, I feel concerned about a teen talking to an AI about their problems given the lack of crisis assessment and constructive support.
“Adult confidants” need to have safeguards in place like mandated reporting, mental health first aid, and a basic ethics code to avoid the potential for harm (see the abuse perpetrated by the Catholic Church).
If a teen needs adult support, a licensed therapist is a great option. The Youth Line is a good alternative as well for teens looking for immediate support, since there are of course barriers to seeing a therapist in a moment of need. Most schools now have a school social worker who can provide interim support while a teen is getting connected to resources.
Edit: also wanted to highlight the importance of having a non-familial adult support. Having at least two non-parent adults who take a genuine interest in a child’s well-being is one of the Positive Childhood Experiences that promote resilience and mitigate the damage caused by Adverse Childhood Experiences. A chatbot is wildly inadequate to provide the kind of support I think you’re referring to, OP. It might offer some basic degree of attunement and responsiveness, but the connection is absent. Neurobiologically, we have specific neural systems that are responsible for social bonds to another human, and I suspect even the most advanced AI would fall short in sufficiently activating these networks. Rather than using AI to address the lack of support and connection that many, if not most, teens experience in their lives, we should be finding ways to increase safe, accessible, organic human support.
TheCrimsonSteel t1_jbybzfz wrote
Reply to comment by MIBlackburn in Scientists call for global action to clean up space junk by thebelsnickle1991
Came looking for this comment, great hard sci-fi show
demauroy OP t1_jbybw01 wrote
Reply to comment by Jasrek in ChatGPT or similar AI as a confidant for teenagers by demauroy
I think it is important to find the right balance. I kind of understand why ChatGPT has safety features so it won't explain to children how to make explosives with detergent at home.
But I would agree with you that we may be erring on the too-prudent side right now.
TheCrimsonSteel t1_jbybuyu wrote
Reply to comment by xMetix in Scientists call for global action to clean up space junk by thebelsnickle1991
Mr Beast version of the anime Planetes?
The main characters work as space junkers, literally salvaging or deorbiting junk, because they're basically on the precipice of Kessler Syndrome.
Jasrek t1_jbyiwdw wrote
Reply to comment by Surur in ChatGPT or similar AI as a confidant for teenagers by demauroy
>My concern is around children. A disclaimer would not help.
Then I'm still questioning what you think would help. Your suggestions so far have been to imbue a computer program with professional judgement, an understanding of morality and ethics, and safeguarding training.
If you know how to do this, you've already invented AGI.
>I don't think we need 15 years. Maybe even 1 is enough. What I am saying is when it comes to children a lot more safety work needs to happen.
You're more optimistic than I am. My expectation is that there will be a largely symbolic uproar because some kid was able to Google "how do I keep a secret" by using a chat program and nothing of any actual benefit to any children will occur.